author    Reynold Xin <rxin@databricks.com>       2014-11-11 00:25:31 -0800
committer Aaron Davidson <aaron@databricks.com>   2014-11-11 00:25:31 -0800
commit    ef29a9a9aa85468869eb67ca67b66c65f508d0ee (patch)
tree      e669d33eeba5033c22acd29a0c8d7690db61abfe /core/src/test
parent    65083e93ddd552b7d3e4eb09f87c091ef2ae83a2 (diff)
[SPARK-4307] Initialize FileDescriptor lazily in FileRegion.
Netty's DefaultFileRegion requires a FileDescriptor in its constructor, which means we need an open file handle at construction time. In very large workloads, this can lead to too many open files due to the way these file descriptors are cleaned up. This pull request creates a new LazyFileRegion that initializes the FileDescriptor only when we send data for the first time.
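The lazy-initialization idea can be sketched as follows. This is a hypothetical illustration, not the actual LazyFileRegion source: the class name `LazyFileRegionSketch` and its methods are invented for the example. The point it demonstrates is holding only a cheap `File` reference at construction time and acquiring the file descriptor (via `FileInputStream`) on first read, so that many regions can exist without many descriptors being open simultaneously.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical sketch: hold the File (no OS resources) at construction,
// and open the underlying FileDescriptor only when data is first requested.
public class LazyFileRegionSketch {
    private final File file;
    private FileInputStream stream; // opened lazily on first access

    public LazyFileRegionSketch(File file) {
        this.file = file; // no file descriptor allocated here
    }

    // Opens the file descriptor on first call, reuses it afterwards.
    private FileInputStream stream() throws IOException {
        if (stream == null) {
            stream = new FileInputStream(file);
        }
        return stream;
    }

    // True once the descriptor has actually been opened.
    public boolean isOpen() {
        return stream != null;
    }

    // Reads one byte, triggering the lazy open if needed.
    public int readFirstByte() throws IOException {
        return stream().read();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("lazy", ".bin");
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(42);
        }
        LazyFileRegionSketch region = new LazyFileRegionSketch(f);
        System.out.println(region.isOpen());        // no descriptor held yet
        System.out.println(region.readFirstByte()); // descriptor opened on demand
        System.out.println(region.isOpen());
        f.delete();
    }
}
```

The same pattern applies to the real class: constructing thousands of regions up front is cheap, and descriptors are only consumed for regions actively transferring data.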
Author: Reynold Xin <rxin@databricks.com>
Author: Reynold Xin <rxin@apache.org>
Closes #3172 from rxin/lazyFD and squashes the following commits:
0bdcdc6 [Reynold Xin] Added reference to Netty's DefaultFileRegion
d4564ae [Reynold Xin] Added SparkConf to the ctor argument of IndexShuffleBlockManager.
6ed369e [Reynold Xin] Code review feedback.
04cddc8 [Reynold Xin] [SPARK-4307] Initialize FileDescriptor lazily in FileRegion.
Diffstat (limited to 'core/src/test')
core/src/test/scala/org/apache/spark/ExternalShuffleServiceSuite.scala | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/core/src/test/scala/org/apache/spark/ExternalShuffleServiceSuite.scala b/core/src/test/scala/org/apache/spark/ExternalShuffleServiceSuite.scala
index 6608ed1e57..9623d66517 100644
--- a/core/src/test/scala/org/apache/spark/ExternalShuffleServiceSuite.scala
+++ b/core/src/test/scala/org/apache/spark/ExternalShuffleServiceSuite.scala
@@ -39,7 +39,7 @@ class ExternalShuffleServiceSuite extends ShuffleSuite with BeforeAndAfterAll {

   override def beforeAll() {
     val transportConf = SparkTransportConf.fromSparkConf(conf)
-    rpcHandler = new ExternalShuffleBlockHandler()
+    rpcHandler = new ExternalShuffleBlockHandler(transportConf)
     val transportContext = new TransportContext(transportConf, rpcHandler)
     server = transportContext.createServer()