author    Aaron Davidson <aaron@databricks.com>    2014-11-01 13:15:24 -0700
committer Patrick Wendell <pwendell@gmail.com>     2014-11-01 13:15:24 -0700
commit    59e626c701227634336110e1bc23afd94c535ede (patch)
tree      cd1874585625a2d66d5fcfff1fa06a4ac36367a0
parent    7136719b7d53ee1360abaa5e178ba9f8b00f3da8 (diff)
[SPARK-4183] Enable NettyBlockTransferService by default
Note that we're turning this on for at least the first part of the QA period as
a trial. We want to enable this (and deprecate the NioBlockTransferService) as
soon as possible in the hopes that NettyBlockTransferService will be more stable
and easier to maintain. We will turn it off if we run into major issues.

Author: Aaron Davidson <aaron@databricks.com>

Closes #3049 from aarondav/enable-netty and squashes the following commits:

bb981cc [Aaron Davidson] [SPARK-4183] Enable NettyBlockTransferService by default
-rw-r--r--  core/src/main/scala/org/apache/spark/SparkEnv.scala |  2 +-
-rw-r--r--  docs/configuration.md                               | 10 ++++++++++
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/core/src/main/scala/org/apache/spark/SparkEnv.scala b/core/src/main/scala/org/apache/spark/SparkEnv.scala
index 557d2f5128..16c5d6648d 100644
--- a/core/src/main/scala/org/apache/spark/SparkEnv.scala
+++ b/core/src/main/scala/org/apache/spark/SparkEnv.scala
@@ -274,7 +274,7 @@ object SparkEnv extends Logging {
     val shuffleMemoryManager = new ShuffleMemoryManager(conf)
 
     val blockTransferService =
-      conf.get("spark.shuffle.blockTransferService", "nio").toLowerCase match {
+      conf.get("spark.shuffle.blockTransferService", "netty").toLowerCase match {
         case "netty" =>
           new NettyBlockTransferService(conf)
         case "nio" =>
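The one-line change above is the whole behavioral switch: when `spark.shuffle.blockTransferService` is unset, the match now falls into the `netty` branch instead of `nio`. A minimal, standalone sketch of that selection logic (using hypothetical `TransferService` stand-ins rather than Spark's real classes, which take a `SparkConf`):

```scala
// Hypothetical stand-ins for Spark's NettyBlockTransferService / NioBlockTransferService.
sealed trait TransferService
case object Netty extends TransferService
case object Nio extends TransferService

// Mirrors the patched conf.get(...): "netty" is the default when the key is unset.
def pickTransferService(conf: Map[String, String]): TransferService =
  conf.getOrElse("spark.shuffle.blockTransferService", "netty").toLowerCase match {
    case "netty" => Netty
    case "nio"   => Nio
    case other   => throw new IllegalArgumentException(s"Unknown transfer service: $other")
  }
```

The `toLowerCase` means the setting is case-insensitive, and an unrecognized value fails fast rather than silently falling back.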
diff --git a/docs/configuration.md b/docs/configuration.md
index 3007706a25..78c4bf332c 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -359,6 +359,16 @@ Apart from these, the following properties are also available, and may be useful
   map-side aggregation and there are at most this many reduce partitions.
   </td>
 </tr>
+<tr>
+ <td><code>spark.shuffle.blockTransferService</code></td>
+ <td>netty</td>
+ <td>
+ Implementation to use for transferring shuffle and cached blocks between executors. There
+ are two implementations available: <code>netty</code> and <code>nio</code>. Netty-based
+ block transfer is intended to be simpler but equally efficient and is the default option
+ starting in 1.2.
+ </td>
+</tr>
 </table>
 
 #### Spark UI
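Per the commit message, `nio` remains available as an escape hatch during the QA period. A sketch of opting back into it from application code, as a configuration fragment only (assumes Spark 1.2+ on the classpath; the app name is illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("transfer-service-example") // illustrative name
  // Revert to the pre-1.2 NIO-based service if the new Netty default misbehaves.
  .set("spark.shuffle.blockTransferService", "nio")
val sc = new SparkContext(conf)
```

The same override can be passed at submit time via `--conf spark.shuffle.blockTransferService=nio` without touching application code.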