From 5d61e051c2ad5955f0101de6f0ecdf5d243e4f5e Mon Sep 17 00:00:00 2001
From: Patrick Wendell
Date: Mon, 13 Jan 2014 11:30:09 -0800
Subject: Improvements to external sorting

1. Adds the option of compressing outputs.
2. Adds batching to the serialization to prevent OOM on the read side.
3. Slight renaming of config options.
4. Use Spark's buffer size for reads in addition to writes.
---
 docs/configuration.md | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

(limited to 'docs')

diff --git a/docs/configuration.md b/docs/configuration.md
index 40a57c4bc6..350e3145c0 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -116,7 +116,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>0.3</td>
   <td>
     Fraction of Java heap to use for aggregation and cogroups during shuffles, if
-    <code>spark.shuffle.externalSorting</code> is enabled. At any given time, the collective size of
+    <code>spark.shuffle.external</code> is true. At any given time, the collective size of
     all in-memory maps used for shuffles is bounded by this limit, beyond which the contents
     will begin to spill to disk. If spills are often, consider increasing this value at the
     expense of <code>spark.storage.memoryFraction</code>.
@@ -154,6 +154,13 @@ Apart from these, the following properties are also available, and may be useful
     Whether to compress map output files. Generally a good idea.
   </td>
 </tr>
+<tr>
+  <td>spark.shuffle.external.compress</td>
+  <td>false</td>
+  <td>
+    Whether to compress data spilled during shuffles.
+  </td>
+</tr>
 <tr>
   <td>spark.broadcast.compress</td>
   <td>true</td>
@@ -388,7 +395,7 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td>spark.shuffle.externalSorting</td>
+  <td>spark.shuffle.external</td>
   <td>true</td>
   <td>
     If set to "true", limits the amount of memory used during reduces by spilling data out to disk. This spilling
--
cgit v1.2.3
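As a rough sketch of how the renamed options in this patch might be set: in the Spark 0.8/0.9 era, one common way to pass configuration was as JVM system properties via the `SPARK_JAVA_OPTS` environment variable. The property names below come from the diff; everything else (the variable values, the use of `SPARK_JAVA_OPTS` for your deployment) is an assumption, not part of this commit.

```shell
# Hypothetical usage sketch: enable external shuffle spilling and
# compression of spilled data using the option names from this patch.
# Assumes the 0.9-era SPARK_JAVA_OPTS mechanism for passing config.
SPARK_JAVA_OPTS="-Dspark.shuffle.external=true -Dspark.shuffle.external.compress=true"
export SPARK_JAVA_OPTS
echo "$SPARK_JAVA_OPTS"
```

Note that `spark.shuffle.external.compress` defaults to `false` per the added documentation, so compression of spilled data must be opted into explicitly.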