path: root/python/pyspark/sql/streaming.py
author    Xiangrui Meng <meng@databricks.com>    2016-06-14 18:57:45 -0700
committer Yanbo Liang <ybliang8@gmail.com>    2016-06-14 18:57:45 -0700
commit    63e0aebe22ba41c636ecaddd8647721d7690a1ec (patch)
tree      666ea76b8d347441d7a0f5116db56304f81ef16a /python/pyspark/sql/streaming.py
parent    42a28caf1001244d617b9256de196129348f2fef (diff)
[SPARK-15945][MLLIB] Conversion between old/new vector columns in a DataFrame (Scala/Java)
## What changes were proposed in this pull request?

This PR provides conversion utilities between old and new vector columns in a DataFrame, so users can migrate their datasets and pipelines manually. The methods are implemented under `MLUtils` and are called `convertVectorColumnsToML` and `convertVectorColumnsFromML`. Both take a DataFrame and a list of vector columns to be converted. Conversion is a no-op on vector columns that are already in the target format, and a warning message is logged when an actual conversion happens. This is the first sub-task under SPARK-15944 to make it easier to migrate existing pipelines to Spark 2.0.

## How was this patch tested?

Unit tests in Scala and Java.

cc: yanboliang

Author: Xiangrui Meng <meng@databricks.com>

Closes #13662 from mengxr/SPARK-15945.
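For context, a minimal Scala sketch (not part of this commit's diff) of how the new utilities could be invoked, assuming a `SparkSession` named `spark` and a hypothetical vector column named `features`:

```scala
// Assumed usage sketch: converting an old mllib.linalg vector column to the
// new ml.linalg type and back. Column and app names are illustrative only.
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("VectorColumnConversion").getOrCreate()
import spark.implicits._

// Hypothetical DataFrame with an old-style (mllib.linalg) vector column "features".
val df = Seq(
  (0, Vectors.dense(1.0, 2.0)),
  (1, Vectors.sparse(2, Array(1), Array(3.0)))
).toDF("id", "features")

// Old -> new: a no-op for columns already using the new type.
val newDF = MLUtils.convertVectorColumnsToML(df, "features")
// New -> old: the reverse direction, for code paths still on mllib.linalg.
val oldDF = MLUtils.convertVectorColumnsFromML(newDF, "features")
```

Because the conversion is a no-op on columns already in the target format, both calls are safe to apply defensively before handing data to old or new APIs.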
Diffstat (limited to 'python/pyspark/sql/streaming.py')
0 files changed, 0 insertions, 0 deletions