author     Liang-Chi Hsieh <simonh@tw.ibm.com>    2016-05-24 10:10:41 -0700
committer  Davies Liu <davies.liu@gmail.com>      2016-05-24 10:10:41 -0700
commit     695d9a0fd461070ee2684b2210fb69d0b6ed1a95 (patch)
tree       6785c7908cbd2e045cbaa051eedfdda1f7a10417 /core/src
parent     f8763b80ecd9968566018396c8cdc1851e7f8a46 (diff)
[SPARK-15433] [PYSPARK] PySpark core test should not use SerDe from PythonMLLibAPI
## What changes were proposed in this pull request?

Currently the PySpark core tests use the `SerDe` from `PythonMLLibAPI`, which pulls in many MLlib dependencies. They should use `SerDeUtil` instead.

## How was this patch tested?

Existing tests.

Author: Liang-Chi Hsieh <simonh@tw.ibm.com>

Closes #13214 from viirya/pycore-use-serdeutil.
Diffstat (limited to 'core/src')
-rw-r--r--  core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala  2
1 file changed, 1 insertion, 1 deletion
diff --git a/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala b/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
index 1c632ebdf9..6e4eab4b80 100644
--- a/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
+++ b/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
@@ -137,7 +137,7 @@ private[spark] object SerDeUtil extends Logging {
* Convert an RDD of Java objects to an RDD of serialized Python objects, that is usable by
* PySpark.
*/
- private[spark] def javaToPython(jRDD: JavaRDD[_]): JavaRDD[Array[Byte]] = {
+ def javaToPython(jRDD: JavaRDD[_]): JavaRDD[Array[Byte]] = {
jRDD.rdd.mapPartitions { iter => new AutoBatchedPickler(iter) }
}
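
For context, below is a minimal sketch of how PySpark code can reach the now-public `SerDeUtil.javaToPython` through the Py4J gateway, rather than instantiating `PythonMLLibAPI` for its MLlib `SerDe`. This is an illustration only, not the accompanying change to the Python tests (which sits outside `core/src`); it assumes a local Spark installation and uses the internal PySpark helper `RDD._to_java_object_rdd()`.

```python
# Illustrative sketch: calling the core SerDeUtil.javaToPython from PySpark
# via the Py4J gateway instead of going through PythonMLLibAPI's SerDe.
from pyspark import SparkContext

sc = SparkContext(master="local[2]", appName="serdeutil-sketch")
try:
    rdd = sc.parallelize([1, 2, 3])
    # Internal helper: unpickles the Python RDD into a JavaRDD of Java objects.
    java_rdd = rdd._to_java_object_rdd()
    # Round-trip it back to pickled bytes with the core SerDeUtil helper.
    pickled = sc._jvm.org.apache.spark.api.python.SerDeUtil.javaToPython(java_rdd)
    print(pickled.count())  # JavaRDD[Array[Byte]] with 3 elements
finally:
    sc.stop()
```

Making `javaToPython` public in `SerDeUtil` lets callers on the Python side depend only on the core serialization utilities, without touching MLlib's Python API.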