author     Nicholas Chammas <nicholas.chammas@gmail.com>  2016-08-06 05:02:59 +0100
committer  Sean Owen <sowen@cloudera.com>                 2016-08-06 05:02:59 +0100
commit     2dd03886173f2f3b5c20fe14e9cdbd33480c1f36 (patch)
tree       b339b17495d5fe0b214255d01fcab451f51002b6 /python/pyspark/sql
parent     14dba45208d8a5511be2cf8ddf22e688ef141e88 (diff)
[SPARK-16772][PYTHON][DOCS] Fix API doc references to UDFRegistration + Update "important classes"
## Proposed Changes

* Update the list of "important classes" in `pyspark.sql` to match 2.0.
* Fix references to `UDFRegistration` so that the class shows up in the docs. It currently [doesn't](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html).
* Remove some unnecessary whitespace in the Python RST doc files.

I reused the [existing JIRA](https://issues.apache.org/jira/browse/SPARK-16772) I created last week for similar API doc fixes.

## How was this patch tested?

* I ran `lint-python` successfully.
* I ran `make clean build` on the Python docs and confirmed the results are as expected locally in my browser.

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #14496 from nchammas/SPARK-16772-UDFRegistration.
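For context, `UDFRegistration` is the object exposed as `SparkSession.udf`, which is why it belongs in the documented API surface. A minimal sketch of the usage its docs describe, assuming a local PySpark 2.x installation (the session, UDF, and query names here are illustrative, not part of this patch):

```python
# A minimal sketch, assuming PySpark 2.x; the app/UDF names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.master("local[1]").appName("udf-docs-example").getOrCreate()

# spark.udf is an instance of pyspark.sql.UDFRegistration, the class this patch
# makes visible in the generated API docs.
spark.udf.register("plus_one", lambda x: x + 1, IntegerType())

# The registered UDF is callable from SQL.
spark.sql("SELECT plus_one(41) AS answer").show()

spark.stop()
```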
Diffstat (limited to 'python/pyspark/sql')
-rw-r--r--  python/pyspark/sql/__init__.py | 11
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/python/pyspark/sql/__init__.py b/python/pyspark/sql/__init__.py
index cff73ff192..22ec416f6c 100644
--- a/python/pyspark/sql/__init__.py
+++ b/python/pyspark/sql/__init__.py
@@ -18,7 +18,7 @@
"""
Important classes of Spark SQL and DataFrames:
- - :class:`pyspark.sql.SQLContext`
+ - :class:`pyspark.sql.SparkSession`
Main entry point for :class:`DataFrame` and SQL functionality.
- :class:`pyspark.sql.DataFrame`
A distributed collection of data grouped into named columns.
@@ -26,8 +26,6 @@ Important classes of Spark SQL and DataFrames:
A column expression in a :class:`DataFrame`.
- :class:`pyspark.sql.Row`
A row of data in a :class:`DataFrame`.
- - :class:`pyspark.sql.HiveContext`
- Main entry point for accessing data stored in Apache Hive.
- :class:`pyspark.sql.GroupedData`
Aggregation methods, returned by :func:`DataFrame.groupBy`.
- :class:`pyspark.sql.DataFrameNaFunctions`
@@ -45,7 +43,7 @@ from __future__ import absolute_import
from pyspark.sql.types import Row
-from pyspark.sql.context import SQLContext, HiveContext
+from pyspark.sql.context import SQLContext, HiveContext, UDFRegistration
from pyspark.sql.session import SparkSession
from pyspark.sql.column import Column
from pyspark.sql.dataframe import DataFrame, DataFrameNaFunctions, DataFrameStatFunctions
@@ -55,7 +53,8 @@ from pyspark.sql.window import Window, WindowSpec
__all__ = [
- 'SparkSession', 'SQLContext', 'HiveContext', 'DataFrame', 'GroupedData', 'Column',
- 'Row', 'DataFrameNaFunctions', 'DataFrameStatFunctions', 'Window', 'WindowSpec',
+ 'SparkSession', 'SQLContext', 'HiveContext', 'UDFRegistration',
+ 'DataFrame', 'GroupedData', 'Column', 'Row',
+ 'DataFrameNaFunctions', 'DataFrameStatFunctions', 'Window', 'WindowSpec',
'DataFrameReader', 'DataFrameWriter'
]
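
To see the effect of the `__init__.py` change: once `UDFRegistration` is imported at package level and listed in `__all__`, it can be imported from `pyspark.sql` directly, and the Sphinx build that generates pyspark.sql.html picks it up as a documented member. A minimal check, assuming a Spark build that contains this commit:

```python
# A minimal sketch, assuming a PySpark build that includes this commit on sys.path.
from pyspark.sql import UDFRegistration  # resolves because __init__.py now re-exports it

# The class is still defined in pyspark.sql.context; the package merely re-exports it
# so the API doc build can reference it as pyspark.sql.UDFRegistration.
print(UDFRegistration.__module__, UDFRegistration.__name__)
```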