| author | Reynold Xin <rxin@databricks.com> | 2015-01-15 16:15:12 -0800 |
|---|---|---|
| committer | Reynold Xin <rxin@databricks.com> | 2015-01-15 16:15:12 -0800 |
| commit | 1881431dd50e93a6948e4966d33742727f27e917 (patch) | |
| tree | 012aa377cb3d891ce563f5225407176b55936081 /python/pyspark/sql.py | |
| parent | 3c8650c12ad7a97852e7bd76153210493fd83e92 (diff) | |
[SPARK-5274][SQL] Reconcile Java and Scala UDFRegistration.
As part of SPARK-5193:
1. Removed UDFRegistration as a mixin in SQLContext and made it a field ("udf").
2. For Java UDFs, renamed dataType to returnType.
3. For Scala UDFs, added type tags.
4. Added all Java UDF registration methods to Scala's UDFRegistration.
5. Documentation
Author: Reynold Xin <rxin@databricks.com>
Closes #4056 from rxin/udf-registration and squashes the following commits:
ae9c556 [Reynold Xin] Updated example.
675a3c9 [Reynold Xin] Style fix
47c24ff [Reynold Xin] Python fix.
5f00c45 [Reynold Xin] Restore data type position in java udf and added typetags.
032f006 [Reynold Xin] [SPARK-5193][SQL] Reconcile Java and Scala UDFRegistration.
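Point 1 of the commit message, turning UDFRegistration from a mixin of SQLContext into a field named "udf", can be sketched in plain Python. This is an illustrative analogy, not Spark's actual Scala code; the class bodies and the `_funcs` dict are invented for the sketch.

```python
class UDFRegistration:
    """Holds registered functions; methods that were previously
    mixed directly into the context class."""
    def __init__(self):
        self._funcs = {}

    def register(self, name, func, return_type):
        # Record the function under its SQL-visible name.
        self._funcs[name] = (func, return_type)
        return func


class SQLContext:
    def __init__(self):
        # Composition replaces inheritance: registration methods
        # now live behind a "udf" field instead of on the context.
        self.udf = UDFRegistration()


ctx = SQLContext()
# Callers write ctx.udf.register(...) rather than ctx.register(...).
ctx.udf.register("strLen", lambda s: len(s), "int")
```

The upside of the field approach is that the registration API is namespaced under `udf` and can grow (e.g. the many typed overloads mentioned in point 4) without crowding the context's own method surface.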
Diffstat (limited to 'python/pyspark/sql.py')
-rw-r--r-- | python/pyspark/sql.py | 16 |
1 file changed, 8 insertions, 8 deletions
diff --git a/python/pyspark/sql.py b/python/pyspark/sql.py
index 014ac1791c..dcd3b60a60 100644
--- a/python/pyspark/sql.py
+++ b/python/pyspark/sql.py
@@ -1281,14 +1281,14 @@ class SQLContext(object):
                                           self._sc._gateway._gateway_client)
         includes = ListConverter().convert(self._sc._python_includes,
                                            self._sc._gateway._gateway_client)
-        self._ssql_ctx.registerPython(name,
-                                      bytearray(pickled_command),
-                                      env,
-                                      includes,
-                                      self._sc.pythonExec,
-                                      broadcast_vars,
-                                      self._sc._javaAccumulator,
-                                      returnType.json())
+        self._ssql_ctx.udf().registerPython(name,
+                                            bytearray(pickled_command),
+                                            env,
+                                            includes,
+                                            self._sc.pythonExec,
+                                            broadcast_vars,
+                                            self._sc._javaAccumulator,
+                                            returnType.json())

     def inferSchema(self, rdd, samplingRatio=None):
         """Infer and apply a schema to an RDD of L{Row}.
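The one-line change in this patch is the call path: the Python side now reaches `registerPython` through the JVM context's `udf()` accessor instead of calling it on the context directly. The following stub mimics that delegation without a JVM; the class names, the `registered` list, and the abbreviated argument list are all invented for illustration and are not the real Py4J objects.

```python
class JavaUDFRegistration:
    """Stand-in for the Scala UDFRegistration exposed over Py4J."""
    def __init__(self):
        self.registered = []

    def registerPython(self, name, returnType):
        # The real method also takes the pickled command, env, includes,
        # python executable, broadcast vars, and accumulator; abbreviated here.
        self.registered.append((name, returnType))


class JavaSQLContext:
    """Stand-in for the JVM SQLContext reached via self._ssql_ctx."""
    def __init__(self):
        self._udf = JavaUDFRegistration()

    def udf(self):
        # Scala's SQLContext.udf field appears as a no-arg method
        # when invoked through a Py4J gateway.
        return self._udf


ssql_ctx = JavaSQLContext()
# New call path, as in the patch: context -> udf() -> registerPython.
ssql_ctx.udf().registerPython("strLen", '"integer"')
```

Routing through `udf()` keeps the Python bridge consistent with the Scala refactoring above: once registration moves off SQLContext into the `udf` field, every language binding has to address the same object.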