| author | Burak Yavuz <brkyvz@gmail.com> | 2016-04-20 10:32:01 -0700 |
|---|---|---|
| committer | Michael Armbrust <michael@databricks.com> | 2016-04-20 10:32:01 -0700 |
| commit | 80bf48f437939ddc3bb82c8c7530c8ae419f8427 (patch) | |
| tree | e9be7bd9acac75d677eaad8ba69c84890d4913d3 /python/pyspark/ml/feature.py | |
| parent | 834277884fcdaab4758604272881ffb2369e25f0 (diff) | |
[SPARK-14555] First cut of Python API for Structured Streaming
## What changes were proposed in this pull request?
This patch provides a first cut of Python APIs for Structured Streaming. This PR provides the new classes:
- ContinuousQuery
- Trigger
- ProcessingTime
in pyspark under `pyspark.sql.streaming`.
In addition, it adds the following new methods:
- `DataFrameWriter`
  - `startStream`
  - `trigger`
  - `queryName`
- `DataFrameReader`
  - `stream`
- `DataFrame`
  - `isStreaming`
This PR doesn't contain all methods exposed for `ContinuousQuery`, for example:
- `exception`
- `sourceStatuses`
- `sinkStatus`
They may be added in a follow-up.
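To make the intended call chain concrete, here is a minimal pure-Python sketch of the API shape this PR introduces. These are stand-in classes, not the real PySpark implementations: the class and method names come from this PR, while the bodies, parameters, and paths are illustrative assumptions.

```python
class ProcessingTime:
    """Stand-in for the new trigger class: fires on a fixed interval."""
    def __init__(self, interval):
        if not isinstance(interval, str) or not interval.strip():
            raise ValueError("interval must be a non-empty string, e.g. '5 seconds'")
        self.interval = interval


class ContinuousQuery:
    """Stand-in handle to a continuously running query."""
    def __init__(self, name):
        self.name = name
        self._active = True

    def stop(self):
        self._active = False


class DataFrameWriter:
    """Sketch of the three new builder methods: queryName and trigger
    return self so calls chain, and startStream returns a ContinuousQuery."""
    def __init__(self):
        self._query_name = None
        self._trigger = None

    def queryName(self, name):
        self._query_name = name
        return self

    def trigger(self, processingTime=None):
        self._trigger = ProcessingTime(processingTime)
        return self

    def startStream(self, path=None):
        return ContinuousQuery(self._query_name)


# The intended usage shape (path is a made-up example):
query = (DataFrameWriter()
         .queryName("my_stream")
         .trigger(processingTime="5 seconds")
         .startStream("/tmp/out"))
query.stop()
```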
This PR also contains some very minor doc fixes in the Scala side.
## How was this patch tested?
Python doc tests
TODO:
- [ ] verify Python docs look good
Author: Burak Yavuz <brkyvz@gmail.com>
Author: Burak Yavuz <burak@databricks.com>
Closes #12320 from brkyvz/stream-python.
Diffstat (limited to 'python/pyspark/ml/feature.py')
-rw-r--r--  python/pyspark/ml/feature.py | 4 ±
1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/python/pyspark/ml/feature.py b/python/pyspark/ml/feature.py
index 4310f154b5..a1911cebe3 100644
--- a/python/pyspark/ml/feature.py
+++ b/python/pyspark/ml/feature.py
@@ -21,10 +21,10 @@ if sys.version > '3':
 
 from py4j.java_collections import JavaArray
 
-from pyspark import since
+from pyspark import since, keyword_only
 from pyspark.rdd import ignore_unicode_prefix
 from pyspark.ml.param.shared import *
-from pyspark.ml.util import keyword_only, JavaMLReadable, JavaMLWritable
+from pyspark.ml.util import JavaMLReadable, JavaMLWritable
 from pyspark.ml.wrapper import JavaEstimator, JavaModel, JavaTransformer, _jvm
 from pyspark.mllib.common import inherit_doc
 from pyspark.mllib.linalg import _convert_to_vector
```
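The only change to `python/pyspark/ml/feature.py` is relocating the `keyword_only` import from `pyspark.ml.util` to the top-level `pyspark` package. For context, a rough sketch of what such a decorator does (an illustrative re-implementation, not Spark's exact code; the `Binarizer` class here is a hypothetical minimal example):

```python
import functools


def keyword_only(func):
    """Illustrative decorator: rejects positional arguments and records
    the keyword arguments on the instance for later Param handling."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if args:
            raise TypeError("Method %s only takes keyword arguments." % func.__name__)
        self._input_kwargs = kwargs  # stashed for setParams-style plumbing
        return func(self, **kwargs)
    return wrapper


class Binarizer:
    """Hypothetical minimal transformer using the decorator."""
    @keyword_only
    def __init__(self, threshold=0.0, inputCol=None):
        self.threshold = threshold
        self.inputCol = inputCol


b = Binarizer(threshold=0.5, inputCol="features")
print(b._input_kwargs)
```

Moving the decorator up to `pyspark` lets both the ML and SQL sides share it without importing from `pyspark.ml.util`.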