author    Andrew Or <andrewor14@gmail.com>  2014-09-23 14:00:33 -0700
committer Andrew Or <andrewor14@gmail.com>  2014-09-23 14:07:56 -0700
commit    5bbc621f62ff5d88e1e5894790b418f07a0b8208 (patch)
tree      38383f6ec2af3ce7ae892609a07d8a5907d62f3d /core/src/main
parent    ffd97be32a53d033ed5ca7545b6d84f0794774cf (diff)
download  spark-5bbc621f62ff5d88e1e5894790b418f07a0b8208.tar.gz
          spark-5bbc621f62ff5d88e1e5894790b418f07a0b8208.tar.bz2
          spark-5bbc621f62ff5d88e1e5894790b418f07a0b8208.zip
[SPARK-3653] Respect SPARK_*_MEMORY for cluster mode
`SPARK_DRIVER_MEMORY` was only used to start the `SparkSubmit` JVM, which becomes the driver only in client mode but not cluster mode. In cluster mode, this property is simply not propagated to the worker nodes. `SPARK_EXECUTOR_MEMORY` is picked up from `SparkContext`, but in cluster mode the driver runs on one of the worker machines, where this environment variable may not be set.

Author: Andrew Or <andrewor14@gmail.com>

Closes #2500 from andrewor14/memory-env-vars and squashes the following commits:

6217b38 [Andrew Or] Respect SPARK_*_MEMORY for cluster mode

Conflicts:
	core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
Diffstat (limited to 'core/src/main')
-rw-r--r--  core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala b/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
index d545f58c5d..2df25546ed 100644
--- a/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
@@ -57,6 +57,10 @@ private[spark] class SparkSubmitArguments(args: Seq[String]) {
   var pyFiles: String = null
   val sparkProperties: HashMap[String, String] = new HashMap[String, String]()
 
+  // Respect SPARK_*_MEMORY for cluster mode
+  driverMemory = sys.env.get("SPARK_DRIVER_MEMORY").orNull
+  executorMemory = sys.env.get("SPARK_EXECUTOR_MEMORY").orNull
+
   parseOpts(args.toList)
   mergeSparkProperties()
   checkRequiredArguments()
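The core of the patch is the `sys.env.get(...).orNull` pattern: seed a mutable field from an environment variable if it is set, leave it `null` otherwise, so that a later command-line option or merged Spark property can still override it. A minimal standalone sketch of that pattern (the object and field names here are hypothetical, chosen for illustration; in the actual patch these are the `driverMemory` and `executorMemory` fields of `SparkSubmitArguments`):

```scala
// Sketch of the env-var seeding pattern used in the patch.
object EnvDefaults {
  // Initialized from the environment; null when the variable is unset,
  // which lets later parsing stages (e.g. --driver-memory) overwrite it.
  var driverMemory: String = sys.env.get("SPARK_DRIVER_MEMORY").orNull
  var executorMemory: String = sys.env.get("SPARK_EXECUTOR_MEMORY").orNull

  def main(args: Array[String]): Unit = {
    println(s"driverMemory=$driverMemory")
    println(s"executorMemory=$executorMemory")
  }
}
```

Because the fields are initialized in the constructor body before `parseOpts(args.toList)` runs, explicit command-line flags still take precedence over the environment variables; the env vars only act as defaults.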