From 23a9e62bad9669e9ff5dc4bd714f58d12f9be0b5 Mon Sep 17 00:00:00 2001
From: Josh Rosen
Date: Thu, 10 Dec 2015 15:29:04 -0800
Subject: [SPARK-12251] Document and improve off-heap memory configurations

This patch adds documentation for Spark configurations that affect off-heap
memory and makes some naming and validation improvements for those configs.

- Rename `spark.memory.offHeapSize` to `spark.memory.offHeap.size`. This is
  safe because this configuration has not shipped in any Spark release yet
  (it is new in Spark 1.6).
- Deprecate `spark.unsafe.offHeap` in favor of a new
  `spark.memory.offHeap.enabled` configuration. The motivation behind this
  change is to gather all memory-related configurations under the same
  prefix.
- Add a check that prevents users from setting
  `spark.memory.offHeap.enabled=true` when `spark.memory.offHeap.size == 0`.
  After SPARK-11389 (#9344), which was committed in Spark 1.6, Spark enforces
  a hard limit on the amount of off-heap memory that it will allocate to
  tasks. As a result, enabling off-heap execution memory without setting
  `spark.memory.offHeap.size` will lead to immediate OOMs. The new
  configuration validation makes this scenario easier to diagnose, helping
  to avoid user confusion.
- Document these configurations on the configuration page.

Author: Josh Rosen

Closes #10237 from JoshRosen/SPARK-12251.
---
 docs/configuration.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/docs/configuration.md b/docs/configuration.md
index 873a2d0b30..55cf4b2dac 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -738,6 +738,22 @@ Apart from these, the following properties are also available, and may be useful
     this description.
   </td>
 </tr>
+<tr>
+  <td><code>spark.memory.offHeap.enabled</code></td>
+  <td>false</td>
+  <td>
+    If true, Spark will attempt to use off-heap memory for certain operations. If off-heap memory use is enabled, then <code>spark.memory.offHeap.size</code> must be positive.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.memory.offHeap.size</code></td>
+  <td>0</td>
+  <td>
+    The absolute amount of memory which can be used for off-heap allocation.
+    This setting has no impact on heap memory usage, so if your executors' total memory consumption must fit within some hard limit then be sure to shrink your JVM heap size accordingly.
+    This must be set to a positive value when <code>spark.memory.offHeap.enabled=true</code>.
+  </td>
+</tr>
 <tr>
   <td><code>spark.memory.useLegacyMode</code></td>
   <td>false</td>
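
As a usage illustration (not part of the patch itself), here is a minimal sketch of how an application would enable the two settings documented above. The application name and the 1 GiB size are arbitrary placeholders; the size value is given here as a plain byte count.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Enable off-heap memory and give it a hard cap. Because this patch adds a
// fail-fast check, setting enabled=true without a positive size now raises
// an error at startup instead of causing OOMs at runtime.
val conf = new SparkConf()
  .setAppName("OffHeapExample") // placeholder name
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", "1073741824") // 1 GiB, in bytes

val sc = new SparkContext(conf)
```

As the new documentation notes, this allocation comes on top of the JVM heap: with an 8 GiB container limit and a 1 GiB off-heap size, the executor heap should be shrunk to roughly 7 GiB (minus other overheads) so the total stays within the limit.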
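
The fail-fast check described in the third bullet is not visible in this docs-only diff; the following is a hedged sketch of the kind of validation it adds. The actual code lives in Spark's memory manager and may differ in names and details.

```scala
import org.apache.spark.SparkConf

// Sketch of the described validation: refuse to enable off-heap memory
// unless a positive size has been configured.
def validateOffHeapConf(conf: SparkConf): Unit = {
  val enabled = conf.getBoolean("spark.memory.offHeap.enabled", false)
  val size = conf.getLong("spark.memory.offHeap.size", 0L)
  require(!enabled || size > 0,
    "spark.memory.offHeap.size must be > 0 when spark.memory.offHeap.enabled == true")
}
```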