author | Liang-Chi Hsieh <simonh@tw.ibm.com> | 2016-07-27 21:14:20 +0800 |
---|---|---|
committer | Wenchen Fan <wenchen@databricks.com> | 2016-07-27 21:14:20 +0800 |
commit | 045fc3606698b017a4addf5277808883e6fe76b6 (patch) | |
tree | 55f1d06d65b3ab951f94a7d2452213fc37030497 | |
parent | 3c3371bbd6361011b138cce88f6396a2aa4e2cb9 (diff) | |
[MINOR][DOC][SQL] Fix two documents regarding size in bytes
## What changes were proposed in this pull request?
Fix two places in the SQLConf documentation regarding size in bytes and statistics.
## How was this patch tested?
No tests were added; this is a documentation-only change.
Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
Closes #14341 from viirya/fix-doc-size-in-bytes.
-rw-r--r-- | sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala | 12 |
1 file changed, 7 insertions, 5 deletions
```diff
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 12a11ad746..2286919f7a 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -109,7 +109,9 @@ object SQLConf {
     .doc("Configures the maximum size in bytes for a table that will be broadcast to all worker " +
       "nodes when performing a join. By setting this value to -1 broadcasting can be disabled. " +
       "Note that currently statistics are only supported for Hive Metastore tables where the " +
-      "command<code>ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan</code> has been run.")
+      "command<code>ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan</code> has been " +
+      "run, and file-based data source tables where the statistics are computed directly on " +
+      "the files of data.")
     .longConf
     .createWithDefault(10L * 1024 * 1024)
@@ -122,10 +124,10 @@ object SQLConf {
   val DEFAULT_SIZE_IN_BYTES = SQLConfigBuilder("spark.sql.defaultSizeInBytes")
     .internal()
-    .doc("The default table size used in query planning. By default, it is set to a larger " +
-      "value than `spark.sql.autoBroadcastJoinThreshold` to be more conservative. That is to say " +
-      "by default the optimizer will not choose to broadcast a table unless it knows for sure " +
-      "its size is small enough.")
+    .doc("The default table size used in query planning. By default, it is set to Long.MaxValue " +
+      "which is larger than `spark.sql.autoBroadcastJoinThreshold` to be more conservative. " +
+      "That is to say by default the optimizer will not choose to broadcast a table unless it " +
+      "knows for sure its size is small enough.")
    .longConf
    .createWithDefault(-1)
```
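As a sanity check on the two values the corrected doc strings mention, here is a minimal standalone Scala sketch. The `effectiveDefaultSize` helper is hypothetical (not Spark's API); it only illustrates the fallback the new `spark.sql.defaultSizeInBytes` doc text describes, where a stored value of -1 stands in for Long.MaxValue.

```scala
object SizeDefaults {
  // Default for spark.sql.autoBroadcastJoinThreshold: 10 MB, expressed in bytes,
  // matching the createWithDefault(10L * 1024 * 1024) in the diff above.
  val AutoBroadcastJoinThreshold: Long = 10L * 1024 * 1024

  // Hypothetical helper: a stored default size of -1 is treated as Long.MaxValue,
  // so a table of unknown size is never considered small enough to broadcast.
  def effectiveDefaultSize(stored: Long): Long =
    if (stored == -1L) Long.MaxValue else stored

  def main(args: Array[String]): Unit = {
    println(AutoBroadcastJoinThreshold)  // 10485760
    println(effectiveDefaultSize(-1L))   // 9223372036854775807 (Long.MaxValue)
  }
}
```

Because Long.MaxValue is far above the 10 MB broadcast threshold, the conservative behavior described in the doc change follows directly: with no statistics available, the planner will not pick a broadcast join.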