author    Zongheng Yang <zongheng.y@gmail.com>  2014-07-14 13:22:24 -0700
committer Michael Armbrust <michael@databricks.com>  2014-07-14 13:22:39 -0700
commit    2ec7d7ab751be67a86a048eed85bd9fd36dfaf83 (patch)
tree      cb40bd41f56ae3278ecd4a84a932c657f0771c62 /sql
parent    baf92a0f2119867b1be540085ebe9f1a1c411ae8 (diff)
[SPARK-2443][SQL] Fix slow read from partitioned tables
This fix obtains a performance boost comparable to [PR #1390](https://github.com/apache/spark/pull/1390) by moving an array update and deserializer initialization out of a potentially very long loop. Suggested by yhuai. The results below are updated for this fix.

## Benchmarks

Generated a local text file with 10M rows of simple key-value pairs. The data is loaded as a table through Hive. Results are obtained on my local machine using hive/console.

Without the fix:

Type | Non-partitioned | Partitioned (1 part)
------------ | ------------ | -------------
First run | 9.52s end-to-end (1.64s Spark job) | 36.6s (28.3s)
Stabilized runs | 1.21s (1.18s) | 27.6s (27.5s)

With this fix:

Type | Non-partitioned | Partitioned (1 part)
------------ | ------------ | -------------
First run | 9.57s (1.46s) | 11.0s (1.69s)
Stabilized runs | 1.13s (1.10s) | 1.23s (1.19s)

Author: Zongheng Yang <zongheng.y@gmail.com>

Closes #1408 from concretevitamin/slow-read-2 and squashes the following commits:

d86e437 [Zongheng Yang] Move update & initialization out of potentially long loop.

(cherry picked from commit d60b09bb60cff106fa0acddebf35714503b20f03)
Signed-off-by: Michael Armbrust <michael@databricks.com>
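For illustration only (not part of the commit): a minimal, self-contained Scala sketch of the general pattern this fix applies, where expensive per-partition setup runs once inside `mapPartitions` instead of once per record inside `iter.map`. The `RecordDeserializer` class and all names other than the Spark API calls (`SparkContext`, `parallelize`, `mapPartitions`) are hypothetical stand-ins, not the actual `TableReader` code.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object HoistSetupExample {
  // Hypothetical stand-in for an expensive-to-create, reusable deserializer.
  class RecordDeserializer {
    def deserialize(line: String): Array[String] = line.split(",")
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("hoist-setup").setMaster("local[*]"))
    val lines = sc.parallelize(Seq("a,1", "b,2", "c,3"))

    val rows = lines.mapPartitions { iter =>
      // Setup happens once per partition, not once per record,
      // mirroring the idea behind the fix in this commit.
      val deserializer = new RecordDeserializer
      iter.map(line => deserializer.deserialize(line))
    }

    rows.collect().foreach(r => println(r.mkString("|")))
    sc.stop()
  }
}
```

The same per-record loop body still runs for every row; only the work that is invariant across rows of a partition (here, constructing the deserializer) is hoisted out, which is what turned the 27s partitioned read into roughly 1.2s in the benchmarks above.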
Diffstat (limited to 'sql')
-rw-r--r--  sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala | 10
1 file changed, 7 insertions, 3 deletions
diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
index 8cfde46186..c3942578d6 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
@@ -164,13 +164,17 @@ class HadoopTableReader(@transient _tableDesc: TableDesc, @transient sc: HiveCon
hivePartitionRDD.mapPartitions { iter =>
val hconf = broadcastedHiveConf.value.value
val rowWithPartArr = new Array[Object](2)
+
+ // The update and deserializer initialization are intentionally
+ // kept out of the below iter.map loop to save performance.
+ rowWithPartArr.update(1, partValues)
+ val deserializer = localDeserializer.newInstance()
+ deserializer.initialize(hconf, partProps)
+
// Map each tuple to a row object
iter.map { value =>
- val deserializer = localDeserializer.newInstance()
- deserializer.initialize(hconf, partProps)
val deserializedRow = deserializer.deserialize(value)
rowWithPartArr.update(0, deserializedRow)
- rowWithPartArr.update(1, partValues)
rowWithPartArr.asInstanceOf[Object]
}
}