Diffstat (limited to 'docs')
-rw-r--r--  docs/configuration.md          | 14 ++------------
-rw-r--r--  docs/sql-programming-guide.md  |  7 -------
2 files changed, 3 insertions(+), 18 deletions(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index 3700051efb..5ec097c78a 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -69,7 +69,7 @@ val sc = new SparkContext(new SparkConf())
 Then, you can supply configuration values at runtime:
 {% highlight bash %}
-./bin/spark-submit --name "My app" --master local[4] --conf spark.shuffle.spill=false
+./bin/spark-submit --name "My app" --master local[4] --conf spark.eventLog.enabled=false
   --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar
 {% endhighlight %}
@@ -449,8 +449,8 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.shuffle.memoryFraction</code></td>
   <td>0.2</td>
   <td>
-    Fraction of Java heap to use for aggregation and cogroups during shuffles, if
-    <code>spark.shuffle.spill</code> is true. At any given time, the collective size of
+    Fraction of Java heap to use for aggregation and cogroups during shuffles.
+    At any given time, the collective size of
     all in-memory maps used for shuffles is bounded by this limit, beyond which
     the contents will begin to spill to disk. If spills are often, consider increasing this value
     at the expense of <code>spark.storage.memoryFraction</code>.
@@ -484,14 +484,6 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.shuffle.spill</code></td>
-  <td>true</td>
-  <td>
-    If set to "true", limits the amount of memory used during reduces by spilling data out to disk.
-    This spilling threshold is specified by <code>spark.shuffle.memoryFraction</code>.
-  </td>
-</tr>
-<tr>
   <td><code>spark.shuffle.spill.compress</code></td>
   <td>true</td>
   <td>
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 82d4243cc6..7ae9244c27 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1936,13 +1936,6 @@ that these options will be deprecated in future release as more optimizations ar
       Configures the number of partitions to use when shuffling data for joins or aggregations.
     </td>
   </tr>
-  <tr>
-    <td><code>spark.sql.planner.externalSort</code></td>
-    <td>true</td>
-    <td>
-      When true, performs sorts spilling to disk as needed otherwise sort each partition in memory.
-    </td>
-  </tr>
 </table>

# Distributed SQL Engine
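Since this change removes the `spark.shuffle.spill` on/off switch from the docs, shuffle spilling is tuned only through `spark.shuffle.memoryFraction` (the limit beyond which in-memory shuffle maps spill to disk). A minimal sketch of passing that setting at runtime, in the same style as the updated `spark-submit` example above; the application name and jar are placeholders:

{% highlight bash %}
# Raise the shuffle memory fraction from the default 0.2 to reduce spilling,
# at the expense of spark.storage.memoryFraction.
./bin/spark-submit --name "My app" --master local[4] \
  --conf spark.shuffle.memoryFraction=0.4 \
  myApp.jar
{% endhighlight %}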