author  cocoatomo <cocoatomo77@gmail.com>  2014-10-06 14:08:40 -0700
committer  Josh Rosen <joshrosen@apache.org>  2014-10-06 14:08:40 -0700
commit  2300eb58ae79a86e65b3ff608a578f5d4c09892b
tree  e6678bcf20acbb9b9745a0aef13409190db785d2
parent  4f01265f7d62e070ba42c251255e385644c1b16c
[SPARK-3773][PySpark][Doc] Sphinx build warning
When building the Sphinx documents for PySpark, we get 12 warnings. Almost all of them are caused by docstrings written in broken reST format. To reproduce the issue, run the following commands at commit 6e27cb630de69fa5acb510b4e2f6b980742b1957:

```bash
$ cd ./python/docs
$ make clean html
...
/Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.SparkContext.sequenceFile:4: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.RDD.saveAsSequenceFile:4: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:14: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:14: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/docs/pyspark.mllib.rst:50: WARNING: missing attribute mentioned in :members: or __all__: module pyspark.mllib.regression, attribute RidgeRegressionModelLinearRegressionWithSGD
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/tree.py:docstring of pyspark.mllib.tree.DecisionTreeModel.predict:3: ERROR: Unexpected indentation.
...
checking consistency...
/Users/<user>/MyRepos/Scala/spark/python/docs/modules.rst:: WARNING: document isn't included in any toctree
...
copying static files... WARNING: html_static_path entry u'/Users/<user>/MyRepos/Scala/spark/python/docs/_static' does not exist
...
build succeeded, 12 warnings.
```

Author: cocoatomo <cocoatomo77@gmail.com>

Closes #2653 from cocoatomo/issues/3773-sphinx-build-warnings and squashes the following commits:

6f65661 [cocoatomo] [SPARK-3773][PySpark][Doc] Sphinx build warning
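For context, the class of breakage being fixed is easy to reproduce: reST requires a blank line between a paragraph and the structured block (list, field list, definition list) that follows it. A minimal sketch with hypothetical functions, mirroring the `tree.py` fix below:

```python
def predict_broken(x):
    """
    Predict the label of one or more examples.
    :param x: Data point (feature vector),
        or an RDD of data points (feature vectors).
    """
    # Sphinx/docutils reports "ERROR: Unexpected indentation." at the
    # continuation line: the field list runs straight into the paragraph.


def predict_fixed(x):
    """
    Predict the label of one or more examples.

    :param x: Data point (feature vector),
        or an RDD of data points (feature vectors).
    """
    # The blank line lets docutils open a new field-list block cleanly.
```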
-rw-r--r--python/docs/modules.rst7
-rw-r--r--python/pyspark/context.py1
-rw-r--r--python/pyspark/mllib/classification.py26
-rw-r--r--python/pyspark/mllib/regression.py15
-rw-r--r--python/pyspark/mllib/tree.py1
-rw-r--r--python/pyspark/rdd.py1
6 files changed, 28 insertions, 23 deletions
diff --git a/python/docs/modules.rst b/python/docs/modules.rst
deleted file mode 100644
index 183564659f..0000000000
--- a/python/docs/modules.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-.
-=
-
-.. toctree::
- :maxdepth: 4
-
- pyspark
diff --git a/python/pyspark/context.py b/python/pyspark/context.py
index e9418320ff..a45d79d642 100644
--- a/python/pyspark/context.py
+++ b/python/pyspark/context.py
@@ -410,6 +410,7 @@ class SparkContext(object):
Read a Hadoop SequenceFile with arbitrary key and value Writable class from HDFS,
a local file system (available on all nodes), or any Hadoop-supported file system URI.
The mechanism is as follows:
+
1. A Java RDD is created from the SequenceFile or other InputFormat, and the key
and value Writable classes
2. Serialization is attempted via Pyrolite pickling
diff --git a/python/pyspark/mllib/classification.py b/python/pyspark/mllib/classification.py
index ac142fb49a..a765b1c4f7 100644
--- a/python/pyspark/mllib/classification.py
+++ b/python/pyspark/mllib/classification.py
@@ -89,11 +89,14 @@ class LogisticRegressionWithSGD(object):
@param regParam: The regularizer parameter (default: 1.0).
@param regType: The type of regularizer used for training
our model.
- Allowed values: "l1" for using L1Updater,
- "l2" for using
- SquaredL2Updater,
- "none" for no regularizer.
- (default: "none")
+
+ :Allowed values:
+ - "l1" for using L1Updater
+ - "l2" for using SquaredL2Updater
+ - "none" for no regularizer
+
+ (default: "none")
+
@param intercept: Boolean parameter which indicates the use
or not of the augmented representation for
training data (i.e. whether bias features
@@ -158,11 +161,14 @@ class SVMWithSGD(object):
@param initialWeights: The initial weights (default: None).
@param regType: The type of regularizer used for training
our model.
- Allowed values: "l1" for using L1Updater,
- "l2" for using
- SquaredL2Updater,
- "none" for no regularizer.
- (default: "none")
+
+ :Allowed values:
+ - "l1" for using L1Updater
+ - "l2" for using SquaredL2Updater,
+ - "none" for no regularizer.
+
+ (default: "none")
+
@param intercept: Boolean parameter which indicates the use
or not of the augmented representation for
training data (i.e. whether bias features
diff --git a/python/pyspark/mllib/regression.py b/python/pyspark/mllib/regression.py
index 8fe8c6db2a..54f34a9833 100644
--- a/python/pyspark/mllib/regression.py
+++ b/python/pyspark/mllib/regression.py
@@ -22,7 +22,7 @@ from pyspark import SparkContext
from pyspark.mllib.linalg import SparseVector, _convert_to_vector
from pyspark.serializers import PickleSerializer, AutoBatchedSerializer
-__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel'
+__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel',
'LinearRegressionWithSGD', 'LassoWithSGD', 'RidgeRegressionWithSGD']
@@ -155,11 +155,14 @@ class LinearRegressionWithSGD(object):
@param regParam: The regularizer parameter (default: 1.0).
@param regType: The type of regularizer used for training
our model.
- Allowed values: "l1" for using L1Updater,
- "l2" for using
- SquaredL2Updater,
- "none" for no regularizer.
- (default: "none")
+
+ :Allowed values:
+ - "l1" for using L1Updater,
+ - "l2" for using SquaredL2Updater,
+ - "none" for no regularizer.
+
+ (default: "none")
+
@param intercept: Boolean parameter which indicates the use
or not of the augmented representation for
training data (i.e. whether bias features
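As an aside, the strange attribute name in the warning above, RidgeRegressionModelLinearRegressionWithSGD, is Python's implicit concatenation of adjacent string literals at work: the missing comma in `__all__` fused two entries into one name the module does not define. A quick standalone illustration, not part of the patch:

```python
# Adjacent string literals merge into a single string when a comma is missing:
__all__ = ['LinearRegressionModel', 'RidgeRegressionModel'
           'LinearRegressionWithSGD']
print(__all__)
# ['LinearRegressionModel', 'RidgeRegressionModelLinearRegressionWithSGD']
```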
diff --git a/python/pyspark/mllib/tree.py b/python/pyspark/mllib/tree.py
index afdcdbdf3a..5d7abfb96b 100644
--- a/python/pyspark/mllib/tree.py
+++ b/python/pyspark/mllib/tree.py
@@ -48,6 +48,7 @@ class DecisionTreeModel(object):
def predict(self, x):
"""
Predict the label of one or more examples.
+
:param x: Data point (feature vector),
or an RDD of data points (feature vectors).
"""
diff --git a/python/pyspark/rdd.py b/python/pyspark/rdd.py
index dc6497772e..e77669aad7 100644
--- a/python/pyspark/rdd.py
+++ b/python/pyspark/rdd.py
@@ -1208,6 +1208,7 @@ class RDD(object):
Output a Python RDD of key-value pairs (of form C{RDD[(K, V)]}) to any Hadoop file
system, using the L{org.apache.hadoop.io.Writable} types that we convert from the
RDD's key and value types. The mechanism is as follows:
+
1. Pyrolite is used to convert pickled Python RDD into RDD of Java objects.
2. Keys and values of this Java RDD are converted to Writables and written out.
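One hedged way to catch this class of error without a full `make clean html` run is to feed a docstring to plain docutils and promote its warnings to exceptions. This is only a sketch, assuming docutils is installed; it is not part of the patch:

```python
from docutils.core import publish_string

BROKEN = """Predict the label of one or more examples.
:param x: Data point (feature vector),
    or an RDD of data points (feature vectors).
"""

try:
    # halt_level=2 turns reST warnings and errors into SystemMessage exceptions
    publish_string(BROKEN, writer_name="null",
                   settings_overrides={"halt_level": 2})
except Exception as exc:
    print(exc)  # reports "Unexpected indentation." like the Sphinx build
```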