path: root/python/pyspark/ml/classification.py
author	Bryan Cutler <cutlerb@gmail.com>	2017-01-31 15:42:36 -0800
committer	Holden Karau <holden@us.ibm.com>	2017-01-31 15:42:36 -0800
commit	57d70d26c88819360cdc806e7124aa2cc1b9e4c5 (patch)
tree	989a46211f9f6e7069dd77a41bf3f805716f863d /python/pyspark/ml/classification.py
parent	ce112cec4f9bff222aa256893f94c316662a2a7e (diff)
download	spark-57d70d26c88819360cdc806e7124aa2cc1b9e4c5.tar.gz
	spark-57d70d26c88819360cdc806e7124aa2cc1b9e4c5.tar.bz2
	spark-57d70d26c88819360cdc806e7124aa2cc1b9e4c5.zip
[SPARK-17161][PYSPARK][ML] Add PySpark-ML JavaWrapper convenience function to create Py4J JavaArrays
## What changes were proposed in this pull request?

Add a convenience function to the Python `JavaWrapper` class that makes it easy to create a Py4J JavaArray. This keeps compatibility with class constructors that take a Scala `Array` as input, so a separate Java/Python-friendly constructor is no longer necessary. The function takes a Java class as input, which Py4J uses to create a Java array of that class. As an example, `OneVsRest` has been updated to use this, and its alternate constructor has been removed.

## How was this patch tested?

Added unit tests for the new convenience function and updated the `OneVsRest` doctests, which use it to persist the model.

Author: Bryan Cutler <cutlerb@gmail.com>

Closes #14725 from BryanCutler/pyspark-new_java_array-CountVectorizer-SPARK-17161.
Diffstat (limited to 'python/pyspark/ml/classification.py')
-rw-r--r--	python/pyspark/ml/classification.py	11
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/python/pyspark/ml/classification.py b/python/pyspark/ml/classification.py
index f10556ca92..d41fc81fd7 100644
--- a/python/pyspark/ml/classification.py
+++ b/python/pyspark/ml/classification.py
@@ -1517,6 +1517,11 @@ class OneVsRest(Estimator, OneVsRestParams, MLReadable, MLWritable):
>>> test2 = sc.parallelize([Row(features=Vectors.dense(0.5, 0.4))]).toDF()
>>> model.transform(test2).head().prediction
2.0
+ >>> model_path = temp_path + "/ovr_model"
+ >>> model.save(model_path)
+ >>> model2 = OneVsRestModel.load(model_path)
+ >>> model2.transform(test0).head().prediction
+ 1.0
.. versionadded:: 2.0.0
"""
@@ -1759,9 +1764,13 @@ class OneVsRestModel(Model, OneVsRestParams, MLReadable, MLWritable):
:return: Java object equivalent to this instance.
"""
+ sc = SparkContext._active_spark_context
java_models = [model._to_java() for model in self.models]
+ java_models_array = JavaWrapper._new_java_array(
+ java_models, sc._gateway.jvm.org.apache.spark.ml.classification.ClassificationModel)
+ metadata = JavaParams._new_java_obj("org.apache.spark.sql.types.Metadata")
_java_obj = JavaParams._new_java_obj("org.apache.spark.ml.classification.OneVsRestModel",
- self.uid, java_models)
+ self.uid, metadata.empty(), java_models_array)
_java_obj.set("classifier", self.getClassifier()._to_java())
_java_obj.set("featuresCol", self.getFeaturesCol())
_java_obj.set("labelCol", self.getLabelCol())