author | Xiangrui Meng <meng@databricks.com> | 2015-05-21 22:57:33 -0700
---|---|---
committer | Xiangrui Meng <meng@databricks.com> | 2015-05-21 22:57:33 -0700
commit | 8f11c6116bf8c7246682cbb2d6f27bf0f1531c6d (patch) |
tree | 144b7d5b9ec1215e88d05539f51e042a6d39470c /python/pyspark/ml/wrapper.py |
parent | e4136ea6c457bc74cee312aa14974498ab4633eb (diff) |
[SPARK-7535] [.0] [MLLIB] Audit the pipeline APIs for 1.4
Some changes to the pipeline APIs:
1. Estimator/Transformer/ doesn't need to extend Params since PipelineStage already does.
2. Move Evaluator to ml.evaluation.
3. Mention that larger metric values are better.
4. PipelineModel doc: "compiled" -> "fitted".
5. Hide object PolynomialExpansion.
6. Hide object VectorAssembler.
7. Word2Vec.minCount (and others) -> group param.
8. ParamValidators -> DeveloperApi.
9. Hide MetadataUtils/SchemaUtils.
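One of the changes above documents the convention that larger metric values are always better. A minimal sketch of what that convention buys (the metric values here are hypothetical, not from this patch): model selection can maximize the metric uniformly, with no per-metric special casing.

```python
# Hypothetical metrics for three candidate models (e.g. areaUnderROC).
# Under the "larger is better" convention, selection is a plain argmax.
metrics = [0.71, 0.84, 0.78]
best_index = max(range(len(metrics)), key=lambda i: metrics[i])
print(best_index)  # 1
```

This is the same convention cross-validation relies on when it picks the best model across a parameter grid.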
jkbradley
Author: Xiangrui Meng <meng@databricks.com>
Closes #6322 from mengxr/SPARK-7535.0 and squashes the following commits:
9e9c7da [Xiangrui Meng] move JavaEvaluator to ml.evaluation as well
e179480 [Xiangrui Meng] move Evaluation to ml.evaluation in PySpark
08ef61f [Xiangrui Meng] update pipeline APIs
Diffstat (limited to 'python/pyspark/ml/wrapper.py')
-rw-r--r-- | python/pyspark/ml/wrapper.py | 21 |
1 file changed, 1 insertion(+), 20 deletions(-)
```diff
diff --git a/python/pyspark/ml/wrapper.py b/python/pyspark/ml/wrapper.py
index 4419e16184..7b0893e2cd 100644
--- a/python/pyspark/ml/wrapper.py
+++ b/python/pyspark/ml/wrapper.py
@@ -20,7 +20,7 @@ from abc import ABCMeta
 from pyspark import SparkContext
 from pyspark.sql import DataFrame
 from pyspark.ml.param import Params
-from pyspark.ml.pipeline import Estimator, Transformer, Evaluator, Model
+from pyspark.ml.pipeline import Estimator, Transformer, Model
 from pyspark.mllib.common import inherit_doc, _java2py, _py2java
@@ -185,22 +185,3 @@ class JavaModel(Model, JavaTransformer):
         sc = SparkContext._active_spark_context
         java_args = [_py2java(sc, arg) for arg in args]
         return _java2py(sc, m(*java_args))
-
-
-@inherit_doc
-class JavaEvaluator(Evaluator, JavaWrapper):
-    """
-    Base class for :py:class:`Evaluator`s that wrap Java/Scala
-    implementations.
-    """
-
-    __metaclass__ = ABCMeta
-
-    def _evaluate(self, dataset):
-        """
-        Evaluates the output.
-        :param dataset: a dataset that contains labels/observations and predictions.
-        :return: evaluation metric
-        """
-        self._transfer_params_to_java()
-        return self._java_obj.evaluate(dataset._jdf)
```
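The JavaEvaluator removed here (and re-added under ml.evaluation in the same PR) follows the wrapper pattern used throughout pyspark.ml: transfer the Python-side params to the JVM object, then delegate the actual computation to it. A self-contained sketch of that pattern, with stand-in classes (FakeJavaEvaluator, a hard-coded metric value) replacing the real Py4J gateway and DataFrame:

```python
class EvaluatorSketch(object):
    """Illustrative stand-in for the abstract Evaluator interface."""

    def evaluate(self, dataset):
        # Public entry point delegates to the subclass hook.
        return self._evaluate(dataset)

    def _evaluate(self, dataset):
        raise NotImplementedError()


class FakeJavaEvaluator(object):
    """Stand-in for the JVM-side evaluator normally reached via Py4J."""

    def evaluate(self, jdf):
        return 0.87  # pretend the JVM computed a metric such as areaUnderROC


class JavaEvaluatorSketch(EvaluatorSketch):
    """Mimics the removed JavaEvaluator: sync params, then delegate."""

    def __init__(self, java_obj):
        self._java_obj = java_obj
        self._params_transferred = False

    def _transfer_params_to_java(self):
        # The real implementation copies each set Param value to the JVM object.
        self._params_transferred = True

    def _evaluate(self, dataset):
        self._transfer_params_to_java()
        return self._java_obj.evaluate(dataset)


metric = JavaEvaluatorSketch(FakeJavaEvaluator()).evaluate(dataset=None)
print(metric)  # 0.87
```

The design keeps all Param bookkeeping on the Python side and treats the JVM object as the single source of the computation, which is why moving the class between modules required no behavior change.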