author    cocoatomo <cocoatomo77@gmail.com>    2014-10-11 11:51:59 -0700
committer Josh Rosen <joshrosen@apache.org>    2014-10-11 11:51:59 -0700
commit    7a3f589ef86200f99624fea8322e5af0cad774a7 (patch)
tree      2e0a27e69e22b5bb7640a1d5e20997f411445a4c
parent    81015a2ba49583d730ce65b2262f50f1f2451a79 (diff)
[SPARK-3909][PySpark][Doc] A corrupted format in Sphinx documents and building warnings
Sphinx documents contain a corrupted ReST format and produce some warnings. The purpose of this issue is the same as https://issues.apache.org/jira/browse/SPARK-3773.

commit: 0e8203f4fb721158fb27897680da476174d24c4b

output:

```
$ cd ./python/docs
$ make clean html
rm -rf _build/*
sphinx-build -b html -d _build/doctrees . _build/html
Making output directory...
Running Sphinx v1.2.3
loading pickled environment... not yet created
building [html]: targets for 4 source files that are out of date
updating environment: 4 added, 0 changed, 0 removed
reading sources... [100%] pyspark.sql
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/feature.py:docstring of pyspark.mllib.feature.Word2VecModel.findSynonyms:4: WARNING: Field list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/feature.py:docstring of pyspark.mllib.feature.Word2VecModel.transform:3: WARNING: Field list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/sql.py:docstring of pyspark.sql:4: WARNING: Bullet list ends without a blank line; unexpected unindent.
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] pyspark.sql
writing additional files... (12 module code pages) _modules/index search
copying static files... WARNING: html_static_path entry u'/Users/<user>/MyRepos/Scala/spark/python/docs/_static' does not exist
done
copying extra files... done
dumping search index... done
dumping object inventory... done
build succeeded, 4 warnings.

Build finished. The HTML pages are in _build/html.
```

Author: cocoatomo <cocoatomo77@gmail.com>

Closes #2766 from cocoatomo/issues/3909-sphinx-build-warnings and squashes the following commits:

2c7faa8 [cocoatomo] [SPARK-3909][PySpark][Doc] A corrupted format in Sphinx documents and building warnings
 python/docs/conf.py             |  2 +-
 python/pyspark/mllib/feature.py |  2 ++
 python/pyspark/rdd.py           |  2 +-
 python/pyspark/sql.py           | 10 +++++-----
 4 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/python/docs/conf.py b/python/docs/conf.py
index 8e6324f058..e58d97ae6a 100644
--- a/python/docs/conf.py
+++ b/python/docs/conf.py
@@ -131,7 +131,7 @@ html_logo = "../../docs/img/spark-logo-hd.png"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+#html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
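Commenting out `html_static_path` silences the "does not exist" warning because the `_static` directory was never created for this project. A hedged alternative sketch (not what this patch does) would be to register the path only when the directory exists:

```python
import os

# Path is relative to conf.py, matching Sphinx's default project layout.
# Registering it conditionally avoids the build warning while keeping the
# setting for projects that later add static files.
html_static_path = ['_static'] if os.path.isdir('_static') else []
```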
diff --git a/python/pyspark/mllib/feature.py b/python/pyspark/mllib/feature.py
index a44a27fd3b..f4cbf31b94 100644
--- a/python/pyspark/mllib/feature.py
+++ b/python/pyspark/mllib/feature.py
@@ -44,6 +44,7 @@ class Word2VecModel(object):
"""
:param word: a word
:return: vector representation of word
+
Transforms a word to its vector representation
Note: local use only
@@ -57,6 +58,7 @@ class Word2VecModel(object):
:param x: a word or a vector representation of word
:param num: number of synonyms to find
:return: array of (word, cosineSimilarity)
+
Find synonyms of a word
Note: local use only
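The two `+` lines above insert the blank line that reST requires between a field list and the body text that follows it; without it, docutils emits "Field list ends without a blank line; unexpected unindent." A minimal standalone sketch (the function name mirrors the patched `Word2VecModel.transform`; the body is a stub):

```python
import inspect

def transform(word):
    """
    :param word: a word
    :return: vector representation of word

    Transforms a word to its vector representation
    Note: local use only
    """

# inspect.getdoc() dedents the docstring the same way Sphinx sees it.
# The line right after the last ":field:" entry must be blank for the
# field list to parse cleanly.
lines = inspect.getdoc(transform).splitlines()
field_end = max(i for i, l in enumerate(lines) if l.startswith(":"))
print(lines[field_end + 1] == "")  # → True
```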
diff --git a/python/pyspark/rdd.py b/python/pyspark/rdd.py
index 6797d50659..e13bab946c 100644
--- a/python/pyspark/rdd.py
+++ b/python/pyspark/rdd.py
@@ -2009,7 +2009,7 @@ class RDD(object):
of The Art Cardinality Estimation Algorithm", available
<a href="http://dx.doi.org/10.1145/2452376.2452456">here</a>.
- :param relativeSD Relative accuracy. Smaller values create
+ :param relativeSD: Relative accuracy. Smaller values create
counters that require more space.
It must be greater than 0.000017.
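The one-character fix above matters because docutils only recognizes `:param name:` with a closing colon as a field; without it, the whole line renders as plain body text instead of a parameter entry. A rough sketch with a simplified pattern (this regex is an approximation for illustration, not docutils' actual field-list grammar):

```python
import re

# A reST field line has the shape ":name words:" followed by whitespace.
# ":param relativeSD Relative ..." lacks the closing colon, so it is not
# a field at all.
FIELD = re.compile(r"^:(\w+(?:\s+\w+)*):\s")

broken = ":param relativeSD Relative accuracy."
fixed = ":param relativeSD: Relative accuracy."

print(bool(FIELD.match(broken)), bool(FIELD.match(fixed)))  # → False True
```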
diff --git a/python/pyspark/sql.py b/python/pyspark/sql.py
index d3d36eb995..b31a82f9b1 100644
--- a/python/pyspark/sql.py
+++ b/python/pyspark/sql.py
@@ -19,14 +19,14 @@
public classes of Spark SQL:
- L{SQLContext}
- Main entry point for SQL functionality.
+ Main entry point for SQL functionality.
- L{SchemaRDD}
- A Resilient Distributed Dataset (RDD) with Schema information for the data contained. In
- addition to normal RDD operations, SchemaRDDs also support SQL.
+ A Resilient Distributed Dataset (RDD) with Schema information for the data contained. In
+ addition to normal RDD operations, SchemaRDDs also support SQL.
- L{Row}
- A Row of data returned by a Spark SQL query.
+ A Row of data returned by a Spark SQL query.
- L{HiveContext}
- Main entry point for accessing data stored in Apache Hive..
+ Main entry point for accessing data stored in Apache Hive..
"""
import itertools
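The sql.py hunk realigns the description lines under each `- L{...}` bullet. In reST, continuation lines of a list item must start in the same column as the item's first character after the marker; an inconsistent indent triggers "Bullet list ends without a blank line; unexpected unindent." A minimal sketch of a well-formed list (class names taken from the docstring above; the check itself is only illustrative):

```python
DOC = """public classes of Spark SQL (sketch):

- L{SQLContext}
  Main entry point for SQL functionality.
- L{Row}
  A Row of data returned by a Spark SQL query.
"""

# "- " is two columns wide, so every continuation line must begin at
# exactly column 2; a deeper or shallower indent makes docutils close
# the list early with an "unexpected unindent" warning.
continuations = [l for l in DOC.splitlines()
                 if l.startswith("  ") and not l.startswith("- ")]
print(all(l[2] != " " for l in continuations))  # → True
```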