author    Reynold Xin <rxin@databricks.com>  2015-09-23 16:43:21 -0700
committer Reynold Xin <rxin@databricks.com>  2015-09-23 16:43:21 -0700
commit    9952217749118ae78fe794ca11e1c4a87a4ae8ba (patch)
tree      cf71cc84eb34acdeade45cc8be3642db4faa8d54 /core
parent    067afb4e9bb227f159bcbc2aafafce9693303ea9 (diff)
[SPARK-10731] [SQL] Delegate to Scala's DataFrame.take implementation in Python DataFrame.
Python DataFrame.head/take now requires scanning all the partitions. This pull request changes them to delegate the actual implementation to the Scala DataFrame (by calling DataFrame.take). This is more of a hack for fixing this issue in 1.5.1. A more proper fix is to change executeCollect and executeTake to return InternalRow rather than Row, and thus eliminate the extra round-trip conversion.

Author: Reynold Xin <rxin@databricks.com>

Closes #8876 from rxin/SPARK-10731.
Diffstat (limited to 'core')
core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala b/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
index 19be093903..8464b578ed 100644
--- a/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
+++ b/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
@@ -633,7 +633,7 @@ private[spark] object PythonRDD extends Logging {
*
* The thread will terminate after all the data are sent or any exceptions happen.
*/
- private def serveIterator[T](items: Iterator[T], threadName: String): Int = {
+ def serveIterator[T](items: Iterator[T], threadName: String): Int = {
val serverSocket = new ServerSocket(0, 1, InetAddress.getByName("localhost"))
// Close the socket if no connection in 3 seconds
serverSocket.setSoTimeout(3000)
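The one-line change above widens the visibility of serveIterator so that code outside PythonRDD (here, the Scala DataFrame.take path) can stream results back to the Python process over a short-lived localhost socket. As a rough illustration of that serving pattern in plain Python stdlib (not Spark's actual wire protocol — the length-prefixed framing and the function names here are assumptions for the sketch), a one-shot server that hands back an ephemeral port might look like:

```python
import socket
import struct
import threading

def serve_iterator(items):
    """Serve length-prefixed UTF-8 items over a one-shot localhost socket.

    Returns the ephemeral port; the caller must connect within 3 seconds,
    mirroring serverSocket.setSoTimeout(3000) in serveIterator.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("localhost", 0))   # port 0: the OS picks a free port
    server.listen(1)
    server.settimeout(3.0)          # give up if no connection in 3 seconds
    port = server.getsockname()[1]

    def send_all():
        try:
            conn, _ = server.accept()
            with conn:
                for item in items:
                    data = item.encode("utf-8")
                    conn.sendall(struct.pack(">I", len(data)))  # 4-byte length prefix
                    conn.sendall(data)
        finally:
            server.close()

    # Serve from a background thread; the port is returned immediately so the
    # reader can connect, as serveIterator does for the Python side.
    threading.Thread(target=send_all, daemon=True).start()
    return port

def read_all(port):
    """Connect to the serving thread and read every length-prefixed item."""
    out = []
    with socket.create_connection(("localhost", port)) as conn:
        f = conn.makefile("rb")
        while True:
            header = f.read(4)
            if len(header) < 4:     # EOF: server finished and closed
                break
            (length,) = struct.unpack(">I", header)
            out.append(f.read(length).decode("utf-8"))
    return out

port = serve_iterator(["a", "bb", "ccc"])
print(read_all(port))  # -> ['a', 'bb', 'ccc']
```

The one-connection, timeout-guarded design keeps the server from leaking sockets if the client never shows up, which is why the real serveIterator sets a short accept timeout rather than listening indefinitely.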