author     Patrick Wendell <pwendell@gmail.com>  2014-11-01 15:18:58 -0700
committer  Patrick Wendell <pwendell@gmail.com>  2014-11-01 15:18:58 -0700
commit     7894de276b8d0b0e4efc654d0b254fc2a6f6077c (patch)
tree       5a73de78434ac4a92ae4e7b7e609b9eff6f5d850
parent     ad0fde10b2285e780349be5a8f333db0974a502f (diff)
Revert "[SPARK-4183] Enable NettyBlockTransferService by default"
This reverts commit 59e626c701227634336110e1bc23afd94c535ede.
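With the default reverted to `nio`, the Netty-based path is still available as an opt-in. A minimal sketch of how a user could enable it explicitly, assuming the standard `spark-submit --conf` mechanism (the trailing application arguments are elided):

```shell
# Opt back into the Netty-based block transfer service explicitly;
# the property name comes from the diff in this commit.
spark-submit --conf spark.shuffle.blockTransferService=netty ...
```

The same property can equally be placed in `spark-defaults.conf`.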
 core/src/main/scala/org/apache/spark/SparkEnv.scala |  2 +-
 docs/configuration.md                               | 10 ----------
 2 files changed, 1 insertion(+), 11 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/SparkEnv.scala b/core/src/main/scala/org/apache/spark/SparkEnv.scala
index e2f13accdf..7fb2b91377 100644
--- a/core/src/main/scala/org/apache/spark/SparkEnv.scala
+++ b/core/src/main/scala/org/apache/spark/SparkEnv.scala
@@ -274,7 +274,7 @@ object SparkEnv extends Logging {
     val shuffleMemoryManager = new ShuffleMemoryManager(conf)

     val blockTransferService =
-      conf.get("spark.shuffle.blockTransferService", "netty").toLowerCase match {
+      conf.get("spark.shuffle.blockTransferService", "nio").toLowerCase match {
        case "netty" =>
          new NettyBlockTransferService(conf)
        case "nio" =>
diff --git a/docs/configuration.md b/docs/configuration.md
index 78c4bf332c..3007706a25 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -359,16 +359,6 @@ Apart from these, the following properties are also available, and may be useful
     map-side aggregation and there are at most this many reduce partitions.
   </td>
 </tr>
-<tr>
-  <td><code>spark.shuffle.blockTransferService</code></td>
-  <td>netty</td>
-  <td>
-    Implementation to use for transferring shuffle and cached blocks between executors. There
-    are two implementations available: <code>netty</code> and <code>nio</code>. Netty-based
-    block transfer is intended to be simpler but equally efficient and is the default option
-    starting in 1.2.
-  </td>
-</tr>
 </table>

 #### Spark UI
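The revert only changes the fallback default in the `conf.get` call; the `match`-based selection itself is untouched. A plain-Scala sketch of that selection logic, with the class names returned as strings rather than constructed (this is an approximation for illustration, not Spark's actual `SparkEnv` code):

```scala
// Illustrative sketch of the block-transfer-service selection after this
// revert. The object and method names here are hypothetical; only the
// property key, the "nio" default, and the match arms come from the diff.
object TransferServiceSelect {
  def select(conf: Map[String, String]): String =
    // After the revert, the fallback default is "nio", not "netty".
    conf.getOrElse("spark.shuffle.blockTransferService", "nio").toLowerCase match {
      case "netty" => "NettyBlockTransferService"
      case "nio"   => "NioBlockTransferService"
      case other   =>
        throw new IllegalArgumentException(s"Unknown transfer service: $other")
    }

  def main(args: Array[String]): Unit = {
    // With no setting, the nio path is chosen by default.
    println(select(Map.empty))
    // Setting the property (case-insensitively) opts into Netty.
    println(select(Map("spark.shuffle.blockTransferService" -> "NETTY")))
  }
}
```

The `.toLowerCase` in the original means the property value is matched case-insensitively, which the sketch preserves.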