From 9d225a91043ac92a0e727ba281b10c250a945614 Mon Sep 17 00:00:00 2001
From: Chen Chao
Date: Mon, 3 Mar 2014 14:41:25 -0800
Subject: update proportion of memory

The default value of "spark.storage.memoryFraction" has been changed from
0.66 to 0.6, so 60% of the memory now goes to caching RDDs while 40% is
available for task execution.

Author: Chen Chao

Closes #66 from CrazyJvm/master and squashes the following commits:

0f84d86 [Chen Chao] update proportion of memory
---
 docs/tuning.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/tuning.md b/docs/tuning.md
index 704778681c..26ff1325bb 100644
--- a/docs/tuning.md
+++ b/docs/tuning.md
@@ -163,8 +163,8 @@ their work directories), *not* on your driver program.
 **Cache Size Tuning**
 
 One important configuration parameter for GC is the amount of memory that should be used for caching RDDs.
-By default, Spark uses 66% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
-cache RDDs. This means that 33% of memory is available for any objects created during task execution.
+By default, Spark uses 60% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
+cache RDDs. This means that 40% of memory is available for any objects created during task execution.
 
 In case your tasks slow down and you find that your JVM is garbage-collecting frequently or running out of
 memory, lowering this value will help reduce the memory consumption. To change this to say 50%, you can call
--
cgit v1.2.3
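
The patched paragraph is cut off at the hunk boundary ("…you can call"), but in the docs of this era the override is done by setting `spark.storage.memoryFraction` before creating the context. A minimal sketch in Scala, assuming the Spark 0.9-era `SparkConf` API (the object name and app name here are illustrative, not from the patch; older releases used `System.setProperty` instead):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MemoryFractionExample {
  def main(args: Array[String]): Unit = {
    // Lower the RDD cache share from the 0.6 default to 0.5, leaving
    // more heap for objects created during task execution.
    val conf = new SparkConf()
      .setAppName("memory-fraction-example") // illustrative name
      .set("spark.storage.memoryFraction", "0.5")

    val sc = new SparkContext(conf)
    // ... run jobs ...
    sc.stop()
  }
}
```

Lowering the fraction trades cache capacity for execution headroom, which is exactly the trade-off the patched paragraph describes for tasks that slow down or trigger frequent garbage collection.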