author:    Davies Liu <davies@databricks.com>  2015-02-17 13:36:43 -0800
committer: Josh Rosen <joshrosen@databricks.com>  2015-02-17 13:36:54 -0800
commit:    35e23ff140cd65a4121e769ee0a4e22a3490be37 (patch)
tree:      c94322d38cca0cfdad47256ec5b343fb86255539 /python/pyspark/context.py
parent:    e65dc1fd58e4f9445247c9fd4e94b34550e992fb (diff)
[SPARK-4172] [PySpark] Progress API in Python
This patch brings the pull-based progress API into Python, along with an example in Python.
Author: Davies Liu <davies@databricks.com>
Closes #3027 from davies/progress_api and squashes the following commits:
b1ba984 [Davies Liu] fix style
d3b9253 [Davies Liu] add tests, mute the exception after stop
4297327 [Davies Liu] Merge branch 'master' of github.com:apache/spark into progress_api
969fa9d [Davies Liu] Merge branch 'master' of github.com:apache/spark into progress_api
25590c9 [Davies Liu] update with Java API
360de2d [Davies Liu] Merge branch 'master' of github.com:apache/spark into progress_api
c0f1021 [Davies Liu] Merge branch 'master' of github.com:apache/spark into progress_api
023afb3 [Davies Liu] add Python API and example for progress API
(cherry picked from commit 445a755b884885b88c1778fd56a3151045b0b0ed)
Signed-off-by: Josh Rosen <joshrosen@databricks.com>
Diffstat (limited to 'python/pyspark/context.py')
-rw-r--r--  python/pyspark/context.py | 7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/python/pyspark/context.py b/python/pyspark/context.py
index 40b3152b23..6011caf9f1 100644
--- a/python/pyspark/context.py
+++ b/python/pyspark/context.py
@@ -32,6 +32,7 @@ from pyspark.serializers import PickleSerializer, BatchedSerializer, UTF8Deseria
 from pyspark.storagelevel import StorageLevel
 from pyspark.rdd import RDD
 from pyspark.traceback_utils import CallSite, first_spark_call
+from pyspark.status import StatusTracker
 from pyspark.profiler import ProfilerCollector, BasicProfiler
 from py4j.java_collections import ListConverter
 
@@ -810,6 +811,12 @@ class SparkContext(object):
         """
         self._jsc.sc().cancelAllJobs()
 
+    def statusTracker(self):
+        """
+        Return :class:`StatusTracker` object
+        """
+        return StatusTracker(self._jsc.statusTracker())
+
     def runJob(self, rdd, partitionFunc, partitions=None, allowLocal=False):
         """
         Executes the given partitionFunc on the specified set of partitions,
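For context, a minimal usage sketch of the new pull-based progress API (not part of this commit; the application name and job below are made up, and the StatusTracker methods shown, such as getActiveJobsIds(), getJobInfo() and getStageInfo(), are assumed to mirror the Java status API that this patch wraps in pyspark.status):

# Sketch only: poll job/stage progress from the driver while a job runs in a background thread.
import threading
import time

from pyspark import SparkContext


def run_job(sc):
    # A deliberately slow job so there is progress to observe.
    sc.parallelize(range(100), 10).map(lambda x: time.sleep(0.5) or x).count()


if __name__ == "__main__":
    sc = SparkContext(appName="StatusTrackerDemo")
    tracker = sc.statusTracker()  # the method added in this patch

    t = threading.Thread(target=run_job, args=(sc,))
    t.start()
    while t.is_alive():
        for job_id in tracker.getActiveJobsIds():
            job = tracker.getJobInfo(job_id)
            for stage_id in job.stageIds:
                stage = tracker.getStageInfo(stage_id)
                if stage is not None:
                    print("stage %d: %d/%d tasks completed"
                          % (stage_id, stage.numCompletedTasks, stage.numTasks))
        time.sleep(1)
    t.join()
    sc.stop()

Because the API is pull-based, the driver polls the tracker on its own schedule rather than registering listeners, which keeps the Python-side wrapper a thin layer over the JVM StatusTracker.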