author    Matthew Farrellee <matt@redhat.com>  2014-06-28 18:39:27 -0700
committer Aaron Davidson <aaron@databricks.com>  2014-06-28 18:39:27 -0700
commit    3c104c79d24425786cec0034f269ba19cf465b31 (patch)
tree      caa653c37f0ee2f03269c675239d373b362b0cb4 /python
parent    b8f2e13aec715e038bd6d1d07b607683f138ac83 (diff)
[SPARK-1394] Remove SIGCHLD handler in worker subprocess
It should not be the responsibility of the worker subprocess, which does not intentionally fork, to try to clean up child processes. Doing so is complex and interferes with operations such as platform.system(). If tighter control over subprocesses is desirable, namespaces should be used, and cleanup should be the manager's responsibility.

Author: Matthew Farrellee <matt@redhat.com>

Closes #1247 from mattf/SPARK-1394 and squashes the following commits:

c36f308 [Matthew Farrellee] [SPARK-1394] Remove SIGCHLD handler in worker subprocess
Diffstat (limited to 'python')
-rw-r--r--  python/pyspark/daemon.py | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/python/pyspark/daemon.py b/python/pyspark/daemon.py
index b2f226a55e..5eb1c63bf2 100644
--- a/python/pyspark/daemon.py
+++ b/python/pyspark/daemon.py
@@ -103,6 +103,7 @@ def worker(listen_sock):
     if os.fork() == 0:
         # Leave the worker pool
         signal.signal(SIGHUP, SIG_DFL)
+        signal.signal(SIGCHLD, SIG_DFL)
         listen_sock.close()
         # Read the socket using fdopen instead of socket.makefile() because the latter
         # seems to be very slow; note that we need to dup() the file descriptor because
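The pattern in the diff, resetting inherited signal dispositions in a freshly forked child, can be sketched as follows. This is a minimal Unix-only illustration, not Spark's actual daemon code; the function name `run_detached_child` and the no-op handler are hypothetical:

```python
import os
import signal


def run_detached_child():
    """Fork a child that resets inherited signal handlers to their
    defaults, mirroring what the worker subprocess does in the diff.
    Returns the child's exit status (0 on success)."""
    # Pretend we are the daemon manager: install a SIGCHLD handler
    # that the child will inherit across fork().
    signal.signal(signal.SIGCHLD, lambda signum, frame: None)
    pid = os.fork()
    if pid == 0:
        # Child: restore default dispositions so the inherited
        # SIGCHLD handler cannot interfere with the child's own
        # subprocess machinery (e.g. platform.system()).
        signal.signal(signal.SIGHUP, signal.SIG_DFL)
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        ok = signal.getsignal(signal.SIGCHLD) is signal.SIG_DFL
        os._exit(0 if ok else 1)
    # Parent: reap the child and report its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

The key point, matching the commit message, is that handlers installed by the parent survive `fork()`, so a child that spawns its own subprocesses must explicitly restore `SIG_DFL` or risk its wait/reap logic being disturbed by the parent's handler.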