author | Matei Zaharia <matei@eecs.berkeley.edu> | 2013-10-18 22:49:00 -0700 |
---|---|---|
committer | Matei Zaharia <matei@eecs.berkeley.edu> | 2013-10-18 22:49:00 -0700 |
commit | 599dcb0ddf740e028cc8faac163303be8f9400a6 (patch) | |
tree | 1c2be699552c17bf3860298570952e4048f00ed9 /pyspark | |
parent | 8de9706b86f41a37464f55e1ffe5a246adc712d1 (diff) | |
parent | 806f3a3adb19dab2ffe864226b6e5438015222eb (diff) | |
Merge pull request #74 from rxin/kill
Job cancellation via job group id.
This PR adds a simple API to group together a set of jobs belonging to a thread and threads spawned from it. It also allows the cancellation of all jobs in this group.
An example:
sc.setJobGroup("this_is_the_group_id", "some job description")
sc.parallelize(1 to 10000, 2).map { i => Thread.sleep(10); i }.count()
In a separate thread:
sc.cancelJobGroup("this_is_the_group_id")
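The snippet above assumes a live SparkContext. The cooperative-cancellation pattern behind it can be sketched without Spark: a worker thread runs tasks under a group id and checks a shared flag, while another thread flips the flag to cancel the group. The `JobGroup` and `runJob` names below are hypothetical stand-ins, not Spark APIs.

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical stand-in for a cancellable job group: the scheduler side
// sets a flag; the worker side checks it between tasks.
class JobGroup(val id: String) {
  private val cancelled = new AtomicBoolean(false)
  def cancel(): Unit = cancelled.set(true)     // analogous to sc.cancelJobGroup(id)
  def isCancelled: Boolean = cancelled.get
}

// Run tasks until done or the group is cancelled; return tasks completed.
def runJob(group: JobGroup, tasks: Range): Int = {
  var completed = 0
  for (_ <- tasks if !group.isCancelled) {
    Thread.sleep(1)   // simulate per-task work
    completed += 1
  }
  completed
}

val group = new JobGroup("this_is_the_group_id")
val worker = new Thread(() => {
  val done = runJob(group, 1 to 10000)
  println(s"completed $done of 10000 tasks before cancellation")
})
worker.start()

// In a separate thread (here, the main thread): cancel the whole group.
Thread.sleep(50)
group.cancel()
worker.join()
```

The key design point, as in Spark, is that cancellation is cooperative: the running job observes the group's cancelled state at task boundaries rather than being killed forcibly.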
Diffstat (limited to 'pyspark')
0 files changed, 0 insertions, 0 deletions