path: root/python/pyspark/context.py
author    Aaron Davidson <aaron@databricks.com>    2014-05-07 09:48:31 -0700
committer Patrick Wendell <pwendell@gmail.com>    2014-05-07 09:48:31 -0700
commit    3308722ca03f2bfa792e9a2cff9c894b967983d9 (patch)
tree      78739ecc5b732316db6634cf917fd6e5de0225bc /python/pyspark/context.py
parent    967635a2425a769b932eea0984fe697d6721cab0 (diff)
SPARK-1579: Clean up PythonRDD and avoid swallowing IOExceptions
This patch includes several cleanups to PythonRDD, focused around fixing [SPARK-1579](https://issues.apache.org/jira/browse/SPARK-1579) cleanly. Listed in order of approximate importance:

- The Python daemon waits for Spark to close the socket before exiting, in order to avoid causing spurious IOExceptions in Spark's `PythonRDD::WriterThread` (see the sketch after this message).
- Removes the Python Monitor Thread, which polled for task cancellations in order to kill the Python worker. Instead, we do this in the onCompleteCallback, since this is guaranteed to be called during cancellation.
- Adds a "completed" variable to TaskContext to avoid the issue noted in [SPARK-1019](https://issues.apache.org/jira/browse/SPARK-1019), where onCompleteCallbacks may be execution-order dependent. Along with this, I removed the "context.interrupted = true" flag in the onCompleteCallback.
- Extracts PythonRDD::WriterThread to its own class.

Since this patch provides an alternative solution to [SPARK-1019](https://issues.apache.org/jira/browse/SPARK-1019), I did test it with

```
sc.textFile("latlon.tsv").take(5)
```

many times without error.

Additionally, in order to test the unswallowed exceptions, I performed

```
sc.textFile("s3n://<big file>").count()
```

and cut my internet during execution. Prior to this patch, we got the "stdin writer exited early" message, which was unhelpful. Now, we get the SocketExceptions propagated through Spark to the user and get proper (though unsuccessful) task retries.

Author: Aaron Davidson <aaron@databricks.com>

Closes #640 from aarondav/pyspark-io and squashes the following commits:

b391ff8 [Aaron Davidson] Detect "clean socket shutdowns" and stop waiting on the socket
c0c49da [Aaron Davidson] SPARK-1579: Clean up PythonRDD and avoid swallowing IOExceptions
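As a companion to the first bullet above, here is a minimal, hypothetical Python sketch of the wait-for-peer-close idea; it is not Spark's daemon code, and `wait_for_peer_close` is an illustrative name. The worker half-closes its end of the connection, then blocks until the other side closes the socket, so the peer's writer thread never observes a connection torn down early.

```python
import socket

def wait_for_peer_close(sock: socket.socket) -> None:
    # Half-close: signal the peer that we are done writing.
    sock.shutdown(socket.SHUT_WR)
    # Drain until recv() returns b"", which indicates the peer
    # closed its end cleanly; only then is it safe to exit.
    while sock.recv(4096):
        pass
    sock.close()
```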
Diffstat (limited to 'python/pyspark/context.py')
-rw-r--r--  python/pyspark/context.py  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/python/pyspark/context.py b/python/pyspark/context.py
index c7dc85ea03..cac133d0fc 100644
--- a/python/pyspark/context.py
+++ b/python/pyspark/context.py
@@ -453,7 +453,7 @@ class SparkContext(object):
>>> lock = threading.Lock()
>>> def map_func(x):
... sleep(100)
- ... return x * x
+ ... raise Exception("Task should have been cancelled")
>>> def start_job(x):
... global result
... try:
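The hunk above changes a doctest so that the slow map function raises if it ever runs to completion, which proves the task really was cancelled rather than merely slow. Below is a hedged, self-contained sketch of the same cancellation pattern, assuming a live SparkContext `sc`; the helper name and timings are illustrative and not from this patch.

```python
import threading
import time

def run_then_cancel(sc):
    """Start a deliberately slow job in a background thread, then cancel it."""
    error = {}

    def slow_job():
        try:
            # Each task sleeps far longer than we are willing to wait.
            sc.parallelize(range(4)).map(lambda x: time.sleep(100)).collect()
        except Exception as e:
            # With cancellation, collect() raises instead of returning.
            error["e"] = e

    t = threading.Thread(target=slow_job)
    t.start()
    time.sleep(1)        # give the tasks time to launch
    sc.cancelAllJobs()   # request cancellation of all running jobs
    t.join()
    return error.get("e")
```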