From 85200c09adc6eb98fadb8505f55cb44e3d8b3390 Mon Sep 17 00:00:00 2001
From: felixcheung
Date: Thu, 21 Jan 2016 16:30:20 +0100
Subject: [SPARK-12534][DOC] update documentation to list command line
 equivalent to properties

Several Spark properties equivalent to Spark submit command line options are missing.

Author: felixcheung

Closes #10491 from felixcheung/sparksubmitdoc.
---
 docs/running-on-yarn.md | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index a148c867eb..ad66b9f64a 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -113,6 +113,19 @@ If you need a reference to the proper location to put log files in the YARN so t
   Use lower-case suffixes, e.g. k, m, g, t, and p, for kibi-, mebi-, gibi-, tebi-, and pebibytes, respectively.
   </td>
 </tr>
+<tr>
+  <td><code>spark.driver.memory</code></td>
+  <td>1g</td>
+  <td>
+    Amount of memory to use for the driver process, i.e. where SparkContext is initialized.
+    (e.g. 1g, 2g).
+
+    Note: In client mode, this config must not be set through the SparkConf
+    directly in your application, because the driver JVM has already started at that point.
+    Instead, please set this through the --driver-memory command line option
+    or in your default properties file.
+  </td>
+</tr>
 <tr>
   <td><code>spark.driver.cores</code></td>
   <td>1</td>
@@ -202,6 +215,13 @@ If you need a reference to the proper location to put log files in the YARN so t
   Comma-separated list of files to be placed in the working directory of each executor.
   </td>
 </tr>
+<tr>
+  <td><code>spark.executor.cores</code></td>
+  <td>1 in YARN mode, all the available cores on the worker in standalone mode.</td>
+  <td>
+    The number of cores to use on each executor. For YARN and standalone mode only.
+  </td>
+</tr>
 <tr>
   <td><code>spark.executor.instances</code></td>
   <td>2</td>
@@ -209,6 +229,13 @@ If you need a reference to the proper location to put log files in the YARN so t
   The number of executors. Note that this property is incompatible with spark.dynamicAllocation.enabled. If both spark.dynamicAllocation.enabled and spark.executor.instances are specified, dynamic allocation is turned off and the specified number of spark.executor.instances is used.
   </td>
 </tr>
+<tr>
+  <td><code>spark.executor.memory</code></td>
+  <td>1g</td>
+  <td>
+    Amount of memory to use per executor process (e.g. 2g, 8g).
+  </td>
+</tr>
 <tr>
   <td><code>spark.yarn.executor.memoryOverhead</code></td>
   <td>executorMemory * 0.10, with minimum of 384</td>
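The properties documented by this patch each mirror a `spark-submit` command line option. A minimal sketch of the equivalence, assuming an illustrative `spark-defaults.conf` (the values here are examples, not defaults):

```
# spark-defaults.conf — property form (illustrative values)
spark.driver.memory       2g
spark.executor.memory     8g
spark.executor.cores      4
spark.executor.instances  10
```

The same configuration expressed as command line options, which is the required route for `spark.driver.memory` in client mode since the driver JVM is already running by the time `SparkConf` is read:

```
spark-submit \
  --driver-memory 2g \
  --executor-memory 8g \
  --executor-cores 4 \
  --num-executors 10 \
  ...
```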