author     Michael Armbrust <michael@databricks.com>   2016-02-22 15:27:29 -0800
committer  Michael Armbrust <michael@databricks.com>   2016-02-22 15:27:29 -0800
commit     173aa949c309ff7a7a03e9d762b9108542219a95 (patch)
tree       8dc2978ccaa7c4011aeaeb5c358a49f055a44ef6 /sql
parent     4a91806a45a48432c3ea4c2aaa553177952673e9 (diff)
[SPARK-12546][SQL] Change default number of open parquet files
A common problem that users encounter with Spark 1.6.0 is that writing to a
partitioned parquet table OOMs. The root cause is that parquet allocates a
significant amount of memory that is not accounted for by our own mechanisms.
As a workaround, we can ensure that only a single file is open per task
unless the user explicitly asks for more.

Author: Michael Armbrust <michael@databricks.com>

Closes #11308 from marmbrus/parquetWriteOOM.
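For reference, a minimal sketch of how a user could explicitly opt back into
multiple concurrent writers after this change. The config key is the one
touched by this patch; the 1.6-era session setup, input data, and paths are
illustrative placeholders:

    // Sketch only: restores the pre-patch limit of 5 concurrently open
    // parquet files per task, trading extra memory for an avoided sort.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("parquet-write-example"))
    val sqlContext = new SQLContext(sc)

    sqlContext.setConf("spark.sql.sources.maxConcurrentWrites", "5")

    // Placeholder DataFrame; any dynamically partitioned write hits this path.
    val df = sqlContext.read.json("/tmp/events.json")
    df.write
      .partitionBy("date")
      .parquet("/tmp/events_by_date")

With the new default of 1, no configuration is needed to avoid the OOM;
raising the value is presumably only worthwhile when tasks write to many
partitions and the sort fallback becomes the bottleneck.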
Diffstat (limited to 'sql')
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala b/sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala
index 61a7b9935a..a601c87fc9 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala
@@ -430,7 +430,7 @@ private[spark] object SQLConf {
   val PARTITION_MAX_FILES =
     intConf("spark.sql.sources.maxConcurrentWrites",
-      defaultValue = Some(5),
+      defaultValue = Some(1),
       doc = "The maximum number of concurrent files to open before falling back on sorting when " +
             "writing out files using dynamic partitioning.")