author    Matei Zaharia <matei@databricks.com>  2015-07-22 15:28:09 -0700
committer Matei Zaharia <matei@databricks.com>  2015-07-22 15:28:09 -0700
commit    fe26584a1f5b472fb2e87aa7259aec822a619a3b
tree      d568c3aeda422e91d2b3d1a9335605da55be73fa  /core/src/test/java
parent    1aca9c13c144fa336af6afcfa666128bf77c49d4
[SPARK-9244] Increase some memory defaults
There are a few memory limits that people hit often and that we could
raise, especially now that memory sizes have grown.
- spark.akka.frameSize: This defaults to 10 but is often hit for map
  output statuses in large shuffles. This memory is not fully allocated
  up-front, so we can make it larger without affecting jobs that never
  send a status that large. We increase it to 128.
- spark.executor.memory: Defaults to 512m, which is really small. We
  increase it to 1g.
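Both limits can also be raised per application without changing Spark's built-in defaults; a minimal sketch of a `spark-defaults.conf` fragment (the values shown are the new defaults this commit proposes, not required settings):

```
# spark-defaults.conf — override the memory limits discussed above
# frame size for Akka messages, in MB (e.g. map output statuses)
spark.akka.frameSize   128
# heap size per executor
spark.executor.memory  1g
```

The same properties can be passed on the command line, e.g. `spark-submit --conf spark.akka.frameSize=128 --conf spark.executor.memory=1g ...`.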
Author: Matei Zaharia <matei@databricks.com>
Closes #7586 from mateiz/configs and squashes the following commits:
ce0038a [Matei Zaharia] [SPARK-9244] Increase some memory defaults
Diffstat (limited to 'core/src/test/java')
 core/src/test/java/org/apache/spark/JavaAPISuite.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/core/src/test/java/org/apache/spark/JavaAPISuite.java b/core/src/test/java/org/apache/spark/JavaAPISuite.java
index 1b04a3b1cf..e948ca3347 100644
--- a/core/src/test/java/org/apache/spark/JavaAPISuite.java
+++ b/core/src/test/java/org/apache/spark/JavaAPISuite.java
@@ -1783,7 +1783,7 @@ public class JavaAPISuite implements Serializable {
     // Stop the context created in setUp() and start a local-cluster one, to force usage of the
     // assembly.
     sc.stop();
-    JavaSparkContext localCluster = new JavaSparkContext("local-cluster[1,1,512]", "JavaAPISuite");
+    JavaSparkContext localCluster = new JavaSparkContext("local-cluster[1,1,1024]", "JavaAPISuite");
     try {
       JavaRDD<Integer> rdd1 = localCluster.parallelize(Arrays.asList(1, 2, null), 3);
       JavaRDD<Optional<Integer>> rdd2 = rdd1.map(