author	Xiangrui Meng <meng@databricks.com>	2015-03-05 11:50:09 -0800
committer	Xiangrui Meng <meng@databricks.com>	2015-03-05 11:50:09 -0800
commit	0bfacd5c5dd7d10a69bcbcbda630f0843d1cf285 (patch)
tree	2b13352131bb3dbd88e4214c6c7728d26898d25e /python/docs
parent	c9cfba0cebe3eb546e3e96f3e5b9b89a74c5b7de (diff)
[SPARK-6090][MLLIB] add a basic BinaryClassificationMetrics to PySpark/MLlib
A simple wrapper around the Scala implementation. `DataFrame` is used for serialization/deserialization. Methods that return `RDD`s are not supported in this PR.
davies: If we recognize Scala's `Product`s in Py4J, we can easily add wrappers for Scala methods that return `RDD[(Double, Double)]`. Is it easy to register a serializer for `Product` in PySpark?
Author: Xiangrui Meng <meng@databricks.com>
Closes #4863 from mengxr/SPARK-6090 and squashes the following commits:
009a3a3 [Xiangrui Meng] provide schema
dcddab5 [Xiangrui Meng] add a basic BinaryClassificationMetrics to PySpark/MLlib
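The `areaUnderROC` that the new evaluation wrapper exposes reduces to the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann-Whitney statistic). A minimal pure-Python sketch of that quantity, not Spark's implementation (which delegates to the Scala side over RDDs):

```python
def area_under_roc(score_and_labels):
    """AUC-ROC as the Mann-Whitney statistic: the probability that a
    random positive example outscores a random negative one; ties
    count as 0.5."""
    pos = [s for s, y in score_and_labels if y == 1.0]
    neg = [s for s, y in score_and_labels if y == 0.0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# The same (score, label) pairs the PySpark wrapper consumes:
print(area_under_roc([(0.1, 0.0), (0.4, 0.0), (0.35, 1.0), (0.8, 1.0)]))  # 0.75
```

With the wrapper from this patch, the equivalent call would be roughly `BinaryClassificationMetrics(sc.parallelize(pairs)).areaUnderROC`, assuming an active `SparkContext` `sc`.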
Diffstat (limited to 'python/docs')
-rw-r--r--	python/docs/pyspark.mllib.rst	7
1 file changed, 7 insertions, 0 deletions
diff --git a/python/docs/pyspark.mllib.rst b/python/docs/pyspark.mllib.rst
index b706c5e376..15101470af 100644
--- a/python/docs/pyspark.mllib.rst
+++ b/python/docs/pyspark.mllib.rst
@@ -16,6 +16,13 @@ pyspark.mllib.clustering module
     :members:
     :undoc-members:
 
+pyspark.mllib.evaluation module
+-------------------------------
+
+.. automodule:: pyspark.mllib.evaluation
+    :members:
+    :undoc-members:
+
 pyspark.mllib.feature module
 -------------------------------