author     Davies Liu <davies.liu@gmail.com>        2014-10-07 18:09:27 -0700
committer  Josh Rosen <joshrosen@apache.org>        2014-10-07 18:09:27 -0700
commit     798ed22c289cf65f2249bf2f4250285685ca69e7 (patch)
tree       137d93c32454aaf39e6416823a8604f816f73926 /python/pyspark/__init__.py
parent     b69c9fb6fb048509bbd8430fb697dc3a5ca4fe59 (diff)
download   spark-798ed22c289cf65f2249bf2f4250285685ca69e7.tar.gz
           spark-798ed22c289cf65f2249bf2f4250285685ca69e7.tar.bz2
           spark-798ed22c289cf65f2249bf2f4250285685ca69e7.zip
[SPARK-3412] [PySpark] Replace Epydoc with Sphinx to generate Python API docs
Retire Epydoc and use Sphinx to generate the Python API docs.
Refine the Sphinx docs and convert some docstrings to Sphinx style.
It looks like:
![api doc](https://cloud.githubusercontent.com/assets/40902/4538272/9e2d4f10-4dec-11e4-8d96-6e45a8fe51f9.png)
Author: Davies Liu <davies.liu@gmail.com>
Closes #2689 from davies/docs and squashes the following commits:
bf4a0a5 [Davies Liu] fix links
3fb1572 [Davies Liu] fix _static in jekyll
65a287e [Davies Liu] fix scripts and logo
8524042 [Davies Liu] Merge branch 'master' of github.com:apache/spark into docs
d5b874a [Davies Liu] Merge branch 'master' of github.com:apache/spark into docs
4bc1c3c [Davies Liu] refactor
746d0b6 [Davies Liu] @param -> :param
240b393 [Davies Liu] replace epydoc with sphinx doc
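The `@param -> :param` commit above is the mechanical heart of the conversion. As an illustrative sketch (hypothetical functions, not taken from the PySpark source), here is the same docstring written in the retired Epydoc markup and in the Sphinx reST markup:

```python
# Illustrative sketch of the Epydoc -> Sphinx docstring conversion.
# These functions are hypothetical, not from the PySpark codebase.

def take_epydoc(iterable, num):
    """Return the first num elements.

    @param num: number of elements to return
    @return: a list of at most num elements
    """
    return list(iterable)[:num]

def take_sphinx(iterable, num):
    """Return the first num elements.

    :param num: number of elements to return
    :return: a list of at most num elements
    """
    return list(iterable)[:num]
```

Epydoc's `L{...}` cross-reference syntax is similarly replaced by the reST `:class:` role, as the diff to python/pyspark/__init__.py shows.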
Diffstat (limited to 'python/pyspark/__init__.py')
-rw-r--r--  python/pyspark/__init__.py  26
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/python/pyspark/__init__.py b/python/pyspark/__init__.py
index 1a2e774738..e39e6514d7 100644
--- a/python/pyspark/__init__.py
+++ b/python/pyspark/__init__.py
@@ -20,33 +20,21 @@ PySpark is the Python API for Spark.
 
 Public classes:
 
-    - L{SparkContext<pyspark.context.SparkContext>}
+    - :class:`SparkContext`:
         Main entry point for Spark functionality.
-    - L{RDD<pyspark.rdd.RDD>}
+    - :class:`RDD`:
         A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
-    - L{Broadcast<pyspark.broadcast.Broadcast>}
+    - :class:`Broadcast`:
         A broadcast variable that gets reused across tasks.
-    - L{Accumulator<pyspark.accumulators.Accumulator>}
+    - :class:`Accumulator`:
         An "add-only" shared variable that tasks can only add values to.
-    - L{SparkConf<pyspark.conf.SparkConf>}
+    - :class:`SparkConf`:
         For configuring Spark.
-    - L{SparkFiles<pyspark.files.SparkFiles>}
+    - :class:`SparkFiles`:
         Access files shipped with jobs.
-    - L{StorageLevel<pyspark.storagelevel.StorageLevel>}
+    - :class:`StorageLevel`:
         Finer-grained cache persistence levels.
 
-Spark SQL:
-    - L{SQLContext<pyspark.sql.SQLContext>}
-        Main entry point for SQL functionality.
-    - L{SchemaRDD<pyspark.sql.SchemaRDD>}
-        A Resilient Distributed Dataset (RDD) with Schema information for the data contained. In
-        addition to normal RDD operations, SchemaRDDs also support SQL.
-    - L{Row<pyspark.sql.Row>}
-        A Row of data returned by a Spark SQL query.
-
-Hive:
-    - L{HiveContext<pyspark.context.HiveContext>}
-        Main entry point for accessing data stored in Apache Hive..
 """
 
 # The following block allows us to import python's random instead of mllib.random for scripts in
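Building the docs with Sphinx instead of Epydoc also requires a Sphinx configuration file. A minimal, hypothetical conf.py sketch follows (the actual python/docs/conf.py from this commit has many more settings, e.g. the logo and _static fixes mentioned in the squashed commits):

```python
# Minimal Sphinx conf.py sketch -- illustrative only, not Spark's actual
# python/docs/conf.py. autodoc generates API pages from docstrings, so the
# reST :param:/:class: markup in those docstrings renders natively.
extensions = [
    'sphinx.ext.autodoc',    # pull API documentation from docstrings
    'sphinx.ext.viewcode',   # link documented objects to highlighted source
]
project = 'PySpark'
master_doc = 'index'
html_static_path = ['_static']  # site assets such as the logo and CSS
```

With such a configuration in place, `sphinx-build` (rather than the epydoc CLI) produces the HTML API docs shown in the screenshot above.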