path: root/python/pyspark/context.py
author    Hossein <hossein@databricks.com>  2014-07-26 01:04:56 -0700
committer Matei Zaharia <matei@databricks.com>  2014-07-26 01:04:56 -0700
commit   66f26a4610aede57322cb7e193a50aecb6c57d22 (patch)
tree     e45cb6dbf2a6970f0b8b341a0384352b2106122d /python/pyspark/context.py
parent   cf3e9fd84dc64f8a57ecbcfdd6b22f5492d41bd7 (diff)
download spark-66f26a4610aede57322cb7e193a50aecb6c57d22.tar.gz
         spark-66f26a4610aede57322cb7e193a50aecb6c57d22.tar.bz2
         spark-66f26a4610aede57322cb7e193a50aecb6c57d22.zip
[SPARK-2696] Reduce default value of spark.serializer.objectStreamReset
The current default value of spark.serializer.objectStreamReset is 10,000. When re-partitioning a large file (e.g., 500 MB, to 64 partitions) containing 1 MB records, the serializer will cache 10,000 x 1 MB x 64 ≈ 640 GB, which causes out-of-memory errors. This patch lowers the default to a more reasonable value (100).

Author: Hossein <hossein@databricks.com>

Closes #1595 from falaki/objectStreamReset and squashes the following commits:

650a935 [Hossein] Updated documentation
1aa0df8 [Hossein] Reduce default value of spark.serializer.objectStreamReset
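The back-of-envelope estimate in the commit message can be reproduced with a quick calculation (a sketch; the figures are the ones quoted above, and the result rounds to the ~640 GB cited):

```python
# Reproduce the worst-case memory estimate from the commit message.
object_stream_reset = 10_000  # old default of spark.serializer.objectStreamReset
record_size_mb = 1            # 1 MB records, per the example
partitions = 64               # target number of partitions after re-partitioning

# The Java serializer keeps a reference to every object written until the
# stream is reset; with one open stream per partition the worst case is:
cached_mb = object_stream_reset * record_size_mb * partitions
cached_gb = cached_mb / 1024
print(f"worst-case cached data: {cached_gb:.0f} GB")  # ~625 GB (~640 GB as quoted)
```

Lowering the reset interval to 100 shrinks this bound by two orders of magnitude at the cost of slightly more reset markers in the serialized stream.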
Diffstat (limited to 'python/pyspark/context.py')
0 files changed, 0 insertions, 0 deletions
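Workloads tuned for the old behavior can still override the new default explicitly. A minimal PySpark configuration sketch (assuming a local PySpark installation; the app name is illustrative):

```python
from pyspark import SparkConf, SparkContext

# Restore the pre-patch reset interval for a workload that depends on it;
# the key name spark.serializer.objectStreamReset comes from this commit.
conf = (SparkConf()
        .setAppName("objectStreamReset-demo")  # hypothetical app name
        .set("spark.serializer.objectStreamReset", "10000"))
sc = SparkContext(conf=conf)
print(sc.getConf().get("spark.serializer.objectStreamReset"))
sc.stop()
```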