author	Xiangrui Meng <meng@databricks.com>	2014-11-18 16:25:44 -0800
committer	Xiangrui Meng <meng@databricks.com>	2014-11-18 16:25:57 -0800
commit	bf76164f1090892544983f753d4b7b16903a6135 (patch)
tree	53423acf4b0cb9144e3495556956d3fec0843311 /python/pyspark
parent	bb7a173d95094b63981724c381f68a885e514cd4 (diff)
[SPARK-4433] fix a race condition in zipWithIndex
Spark hangs with the following code:

~~~
sc.parallelize(1 to 10).zipWithIndex.repartition(10).count()
~~~

This happens because ZippedWithIndexRDD triggers a job inside getPartitions, which deadlocks against DAGScheduler.getPreferredLocs (a synchronized method). The fix is to compute `startIndices` during construction instead. This should be applied to branch-1.0, branch-1.1, and branch-1.2.

pwendell

Author: Xiangrui Meng <meng@databricks.com>

Closes #3291 from mengxr/SPARK-4433 and squashes the following commits:

c284d9f [Xiangrui Meng] fix a race condition in zipWithIndex

(cherry picked from commit bb46046154a438df4db30a0e1fd557bd3399ee7b)
Signed-off-by: Xiangrui Meng <meng@databricks.com>
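For context, the pattern behind the fix is to move the partition-counting job out of getPartitions and into the RDD's constructor, so the scheduler never has to launch a nested job while holding its lock. The sketch below is a hypothetical, simplified reconstruction under stated assumptions: the names EagerZipWithIndexRDD and IndexedPartition are invented, it compiles against a recent public Spark API rather than the 1.x branches this patch targeted, and it uses a plain `it.size.toLong` where the real code path uses an internal long-safe counter. It is not the actual patch.

~~~
import scala.reflect.ClassTag

import org.apache.spark.{Partition, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical partition wrapper that carries the precomputed start offset.
class IndexedPartition(val prev: Partition, val startIndex: Long)
  extends Partition {
  override def index: Int = prev.index
}

// Hypothetical simplified RDD illustrating the fix pattern.
class EagerZipWithIndexRDD[T: ClassTag](prev: RDD[T])
  extends RDD[(T, Long)](prev) {

  // Computed eagerly on the driver at construction time: count every
  // partition except the last, then take running sums to get each
  // partition's global start offset. Running this job lazily inside
  // getPartitions is what deadlocked DAGScheduler.getPreferredLocs.
  @transient private val startIndices: Array[Long] = {
    val n = prev.partitions.length
    if (n == 0) {
      Array.empty[Long]
    } else if (n == 1) {
      Array(0L)
    } else {
      prev.context
        .runJob(prev, (it: Iterator[T]) => it.size.toLong, 0 until n - 1)
        .scanLeft(0L)(_ + _) // prefix sums: counts -> start offsets
    }
  }

  override def getPartitions: Array[Partition] =
    prev.partitions.map(p => new IndexedPartition(p, startIndices(p.index)))

  override def compute(split: Partition, context: TaskContext): Iterator[(T, Long)] = {
    val part = split.asInstanceOf[IndexedPartition]
    // Local index within the partition plus the precomputed global offset.
    firstParent[T].iterator(part.prev, context).zipWithIndex.map {
      case (item, offset) => (item, part.startIndex + offset)
    }
  }
}
~~~

Because `startIndices` is a `@transient val` evaluated in the constructor rather than lazily inside `getPartitions`, the counting job finishes on the driver before the RDD ever reaches the scheduler, so `getPreferredLocs` can no longer trigger a nested job while synchronized.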
Diffstat (limited to 'python/pyspark')
0 files changed, 0 insertions, 0 deletions