author    Matei Zaharia <matei@eecs.berkeley.edu>    2013-06-30 15:38:58 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>    2013-06-30 15:46:46 -0700
commit 03d0b858c807339b4221bedffa29ac76eef5352e (patch)
tree   3235e3d155dfc6eb0b55a36046492f653ab41346 /docs/tuning.md
parent ccfe953a4db25c920157554a2cd820f8afb41ca3 (diff)
Made use of spark.executor.memory setting consistent and documented it

Conflicts:
    core/src/main/scala/spark/SparkContext.scala
Diffstat (limited to 'docs/tuning.md')
-rw-r--r--    docs/tuning.md    6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/tuning.md b/docs/tuning.md
index 32c7ab86e9..5ffca54481 100644
--- a/docs/tuning.md
+++ b/docs/tuning.md
@@ -157,9 +157,9 @@ their work directories), *not* on your driver program.
**Cache Size Tuning**
-One important configuration parameter for GC is the amount of memory that should be used for
-caching RDDs. By default, Spark uses 66% of the configured memory (`SPARK_MEM`) to cache RDDs. This means that
- 33% of memory is available for any objects created during task execution.
+One important configuration parameter for GC is the amount of memory that should be used for caching RDDs.
+By default, Spark uses 66% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
+cache RDDs. This means that 33% of memory is available for any objects created during task execution.
In case your tasks slow down and you find that your JVM is garbage-collecting frequently or running out of
memory, lowering this value will help reduce the memory consumption. To change this to say 50%, you can call
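The added lines of the diff describe lowering the RDD cache fraction from its 66% default. A minimal sketch of how that could look, assuming the Spark 0.7-era convention of setting system properties before constructing the `SparkContext` (the property name `spark.storage.memoryFraction` and the `4g` value are illustrative assumptions, not taken from this diff):

```scala
object CacheTuningSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical values; set these BEFORE creating the SparkContext
    // so the executors pick them up.
    System.setProperty("spark.executor.memory", "4g")
    // Assumed property for the cache fraction: drop from the 66% default to 50%,
    // leaving more headroom for objects created during task execution.
    System.setProperty("spark.storage.memoryFraction", "0.5")

    println(System.getProperty("spark.storage.memoryFraction"))
    // A real job would then do: val sc = new spark.SparkContext(...)
  }
}
```

The point of the sketch is only the ordering: properties must be in place before the context is created, since Spark reads them at startup.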