path: root/docs/configuration.md
author    Davies Liu <davies.liu@gmail.com>    2014-09-13 16:22:04 -0700
committer Josh Rosen <joshrosen@apache.org>    2014-09-13 16:22:04 -0700
commit    2aea0da84c58a179917311290083456dfa043db7 (patch)
tree      6cda208e50f24c31883f1fdf2f51b7a6a8399ff1 /docs/configuration.md
parent    0f8c4edf4e750e3d11da27cc22c40b0489da7f37 (diff)
[SPARK-3030] [PySpark] Reuse Python worker
Reuse the Python worker to avoid the overhead of fork()ing a Python process for each task. It also tracks the broadcasts for each worker, avoiding sending repeated broadcasts. This can reduce the time for a dummy task from 22ms to 13ms (-40%), and it can help reduce the latency for Spark Streaming.

For a job with a broadcast (43M after compression):

```
b = sc.broadcast(set(range(30000000)))
print sc.parallelize(range(24000), 100).filter(lambda x: x in b.value).count()
```

It finishes in 281s without a reused worker and in 65s with a reused worker (4 CPUs). After reusing the worker, it saves about 9 seconds for transferring and deserializing the broadcast for each task.

It's enabled by default and can be disabled with `spark.python.worker.reuse = false`.

Author: Davies Liu <davies.liu@gmail.com>

Closes #2259 from davies/reuse-worker and squashes the following commits:

f11f617 [Davies Liu] Merge branch 'master' into reuse-worker
3939f20 [Davies Liu] fix bug in serializer in mllib
cf1c55e [Davies Liu] address comments
3133a60 [Davies Liu] fix accumulator with reused worker
760ab1f [Davies Liu] do not reuse worker if there are any exceptions
7abb224 [Davies Liu] refactor: sychronized with itself
ac3206e [Davies Liu] renaming
8911f44 [Davies Liu] synchronized getWorkerBroadcasts()
6325fc1 [Davies Liu] bugfix: bid >= 0
e0131a2 [Davies Liu] fix name of config
583716e [Davies Liu] only reuse completed and not interrupted worker
ace2917 [Davies Liu] kill python worker after timeout
6123d0f [Davies Liu] track broadcasts for each worker
8d2f08c [Davies Liu] reuse python worker
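The following is a minimal PySpark sketch, not part of this commit, showing how the `spark.python.worker.reuse` setting described above might be toggled from application code. The app name, broadcast size, and data are illustrative assumptions, not values taken from the commit.

```
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("reuse-worker-demo")
        # Worker reuse is enabled by default; set to "false" to fall back
        # to forking a fresh Python worker for every task.
        .set("spark.python.worker.reuse", "true"))
sc = SparkContext(conf=conf)

# With worker reuse, the broadcast is transferred and deserialized once
# per worker process instead of once per task.
b = sc.broadcast(set(range(1000000)))
count = sc.parallelize(range(24000), 100).filter(lambda x: x in b.value).count()
print(count)
sc.stop()
```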
Diffstat (limited to 'docs/configuration.md')
-rw-r--r--  docs/configuration.md  |  10
1 file changed, 10 insertions, 0 deletions
diff --git a/docs/configuration.md b/docs/configuration.md
index 36178efb97..af16489a44 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -207,6 +207,16 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
+ <td><code>spark.python.worker.reuse</code></td>
+ <td>true</td>
+ <td>
+    Reuse Python worker or not. If yes, it will use a fixed number of Python workers
+    and does not need to fork() a Python process for every task. This will be very useful
+    if there is a large broadcast, since the broadcast will not need to be transferred
+    from the JVM to the Python worker for every task.
+ </td>
+</tr>
+<tr>
<td><code>spark.executorEnv.[EnvironmentVariableName]</code></td>
<td>(none)</td>
<td>