From 2182e4322da6ba732f99ae75dce00f76f1cdc4d9 Mon Sep 17 00:00:00 2001
From: Nicholas Chammas
Date: Fri, 29 Jul 2016 14:07:03 -0700
Subject: [SPARK-16772][PYTHON][DOCS] Restore "datatype string" to Python API
 docstrings

## What changes were proposed in this pull request?

This PR corrects [an error made in an earlier PR](https://github.com/apache/spark/pull/14393/files#r72843069).

## How was this patch tested?

```sh
$ ./dev/lint-python
PEP8 checks passed.
rm -rf _build/*
pydoc checks passed.
```

I also built the docs and confirmed that they looked good in my browser.

Author: Nicholas Chammas

Closes #14408 from nchammas/SPARK-16772.
---
 python/pyspark/sql/context.py | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/python/pyspark/sql/context.py b/python/pyspark/sql/context.py
index f7009fe589..4085f165f4 100644
--- a/python/pyspark/sql/context.py
+++ b/python/pyspark/sql/context.py
@@ -226,9 +226,8 @@ class SQLContext(object):
         from ``data``, which should be an RDD of :class:`Row`,
         or :class:`namedtuple`, or :class:`dict`.
 
-        When ``schema`` is :class:`pyspark.sql.types.DataType` or
-        :class:`pyspark.sql.types.StringType`, it must match the
-        real data, or an exception will be thrown at runtime. If the given schema is not
+        When ``schema`` is :class:`pyspark.sql.types.DataType` or a datatype string it must match
+        the real data, or an exception will be thrown at runtime. If the given schema is not
         :class:`pyspark.sql.types.StructType`, it will be wrapped into a
         :class:`pyspark.sql.types.StructType` as its only field, and the field name will be
         "value", each record will also be wrapped into a tuple, which can be converted to row later.
@@ -239,8 +238,7 @@ class SQLContext(object):
         :param data: an RDD of any kind of SQL data representation(e.g. :class:`Row`,
             :class:`tuple`, ``int``, ``boolean``, etc.), or :class:`list`, or
             :class:`pandas.DataFrame`.
-        :param schema: a :class:`pyspark.sql.types.DataType` or a
-            :class:`pyspark.sql.types.StringType` or a list of
+        :param schema: a :class:`pyspark.sql.types.DataType` or a datatype string or a list of
             column names, default is None. The data type string format equals to
             :class:`pyspark.sql.types.DataType.simpleString`, except that top level struct type can
             omit the ``struct<>`` and atomic types use ``typeName()`` as their format, e.g. use
@@ -251,7 +249,7 @@ class SQLContext(object):
 
         .. versionchanged:: 2.0
            The ``schema`` parameter can be a :class:`pyspark.sql.types.DataType` or a
-           :class:`pyspark.sql.types.StringType` after 2.0.
+           datatype string after 2.0.
            If it's not a :class:`pyspark.sql.types.StructType`, it will be wrapped into a
            :class:`pyspark.sql.types.StructType` and each record will also be wrapped into a tuple.
 
-- 
cgit v1.2.3
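For readers unfamiliar with the "datatype string" schema form this patch's wording describes, here is a minimal sketch of what the restored docstring means in practice. This example is not part of the patch; the RDD contents and column names are made up for illustration, and it assumes a PySpark 2.0 environment with a running ``SparkContext``:

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

# A datatype string follows DataType.simpleString(), except that the
# top-level struct type may omit the ``struct<>`` wrapper:
rdd = sc.parallelize([("Alice", 1), ("Bob", 2)])
df = sqlContext.createDataFrame(rdd, "name: string, age: int")
df.show()

# A non-struct datatype string is wrapped into a StructType with a single
# field named "value", and each record is wrapped into a tuple:
df2 = sqlContext.createDataFrame(sc.parallelize([1, 2, 3]), "int")
df2.printSchema()  # value: integer
```

If the string does not match the real data (e.g. passing ``"name: int, age: string"`` to the RDD above), an exception is thrown at runtime, as the docstring states.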