author | Davies Liu <davies@databricks.com> | 2016-02-23 12:55:44 -0800
committer | Davies Liu <davies.liu@gmail.com> | 2016-02-23 12:55:44 -0800
commit | c481bdf512f09060c9b9f341a5ce9fce00427d08 (patch)
tree | 17e1d847f0e1e908316eb719b452da1c325c10d8 /dev/test-dependencies.sh
parent | c5bfe5d2a22e0e66b27aa28a19785c3aac5d9f2e (diff)
[SPARK-13329] [SQL] considering output for statistics of logical plan
The current implementation of statistics for UnaryNode does not consider the node's output (for example, a Project may produce far fewer columns than its child); we should take the output into account to get a better estimate.
We usually join on only a few columns from a Parquet table, so the size of the projected plan can be much smaller than the original Parquet files. A better size estimate helps us choose between broadcast join and sort merge join.
After this PR, I saw a few queries choose broadcast join rather than sort merge join without tuning spark.sql.autoBroadcastJoinThreshold for every query, ending up with about 6-8X improvement in end-to-end time.
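The join-strategy decision described above can be sketched as follows. This is a simplified Python illustration, not Spark's actual planner (which is Scala inside Spark SQL); the function name `choose_join` is hypothetical, and the 10 MB threshold is the documented default for spark.sql.autoBroadcastJoinThreshold:

```python
# Hypothetical sketch: how a plan's estimated size drives the join choice.
# Spark compares the smaller side's estimated sizeInBytes against
# spark.sql.autoBroadcastJoinThreshold (default 10 MB).

AUTO_BROADCAST_JOIN_THRESHOLD = 10 * 1024 * 1024  # default: 10 MB

def choose_join(left_size_bytes: int, right_size_bytes: int) -> str:
    """Pick a join strategy from the two sides' estimated sizes."""
    if min(left_size_bytes, right_size_bytes) <= AUTO_BROADCAST_JOIN_THRESHOLD:
        # Small side fits in memory, so ship it to every executor.
        return "broadcast hash join"
    # Otherwise fall back to the shuffle-based join.
    return "sort merge join"

# A projection that keeps only the join columns can shrink the estimate
# below the threshold, flipping the decision to a broadcast join.
print(choose_join(50 * 1024 * 1024, 2 * 1024 * 1024))   # broadcast hash join
print(choose_join(50 * 1024 * 1024, 40 * 1024 * 1024))  # sort merge join
```

An over-estimated projection (e.g. 4096 bytes per string column) can push a genuinely small plan over this threshold, which is why the tighter defaults below matter.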
We use the `defaultSize` of a DataType to estimate the size of a column. Currently, for DecimalType, StringType, BinaryType, and UDTs we over-estimate by far too much (4096 bytes), so this PR changes them to more reasonable values. Here are the new defaultSize values:
DecimalType: 8 or 16 bytes, based on the precision
StringType: 20 bytes
BinaryType: 100 bytes
UDT: default size of the underlying SQL type
These numbers are not perfect (it is hard to pick a perfect number for them), but they should be better than 4096.
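To show how these per-column defaults combine into a plan-size estimate, here is a hedged Python sketch. The real code is the `defaultSize` method on Spark's Scala DataType classes; the function names `default_size` and `estimated_plan_size` are hypothetical, as is the assumption that decimals with precision up to 18 (the digits that fit in a long) take 8 bytes:

```python
# Hypothetical sketch of per-column size estimation using the new
# defaultSize values described in this PR.

def default_size(col_type: str, precision: int = 10) -> int:
    """Estimated size in bytes of one value of the given SQL type."""
    if col_type == "decimal":
        # Assumption: precision <= 18 fits in a long -> 8 bytes, else 16.
        return 8 if precision <= 18 else 16
    if col_type == "string":
        return 20
    if col_type == "binary":
        return 100
    # Fixed-width primitives keep their natural sizes.
    return {"int": 4, "long": 8, "double": 8, "boolean": 1}[col_type]

def estimated_plan_size(row_count: int, column_types: list) -> int:
    """Scale the per-row width by the row count, as a UnaryNode's
    statistics might after projecting down to fewer columns."""
    row_width = sum(default_size(t) for t in column_types)
    return row_count * row_width

# Projecting just (int, string) out of a wide table: 24 bytes/row,
# far below the old 4096-bytes-per-string over-estimate.
print(estimated_plan_size(1_000_000, ["int", "string"]))  # 24000000
```

With the old 4096-byte default for strings, the same projection would be estimated at over 4 GB, easily blocking a broadcast join that the new estimate allows.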
Author: Davies Liu <davies@databricks.com>
Closes #11210 from davies/statics.
Diffstat (limited to 'dev/test-dependencies.sh')
0 files changed, 0 insertions, 0 deletions