From 1656aae2b4e8b026f8cfe782519f72d32ed2b291 Mon Sep 17 00:00:00 2001
From: lewuathe
Date: Sun, 11 Jan 2015 13:50:42 -0800
Subject: [SPARK-5073] spark.storage.memoryMapThreshold has two default values

Because the major OS page size is about 4KB, the default value of
spark.storage.memoryMapThreshold is unified to 2097152 bytes (2MB).

Author: lewuathe

Closes #3900 from Lewuathe/integrate-memoryMapThreshold and squashes the
following commits:

e417acd [lewuathe] [SPARK-5073] Update docs/configuration
834aba4 [lewuathe] [SPARK-5073] Fix style
adcea33 [lewuathe] [SPARK-5073] Integrate memory map threshold to 2MB
fcce2e5 [lewuathe] [SPARK-5073] spark.storage.memoryMapThreshold have two default value
---
 docs/configuration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index 2add48569b..f292bfbb7d 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -678,7 +678,7 @@ Apart from these, the following properties are also available, and may be useful
 <tr>
   <td><code>spark.storage.memoryMapThreshold</code></td>
-  <td>8192</td>
+  <td>2097152</td>
   <td>
     Size of a block, in bytes, above which Spark memory maps when reading a block from disk.
     This prevents Spark from memory mapping very small blocks. In general, memory
     mapping has high overhead for blocks close to or below the page size of the operating system.
-- 
cgit v1.2.3
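
For context, a minimal sketch (not part of the patch) of how an application could
override this threshold through SparkConf; the object name, app name, and master URL
below are placeholders, and the value shown simply restates the new default:

import org.apache.spark.{SparkConf, SparkContext}

object MemoryMapThresholdExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("memory-map-threshold-example") // placeholder app name
      .setMaster("local[*]")                      // placeholder master URL
      // Size in bytes above which Spark memory-maps blocks read from disk;
      // "2097152" (2MB) matches the unified default introduced by this patch.
      .set("spark.storage.memoryMapThreshold", "2097152")

    val sc = new SparkContext(conf)
    // ... run jobs ...
    sc.stop()
  }
}

Raising the value trades mmap setup overhead on small blocks for extra copying on
large ones; keeping it well above the ~4KB OS page size is what motivates the 2MB
default.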