path: root/docs/ec2-scripts.md
author: CodingCat <zhunansjtu@gmail.com> 2015-03-18 23:48:45 -0700
committer: Aaron Davidson <aaron@databricks.com> 2015-03-18 23:48:45 -0700
commit: 2c3f83c34bb8d2c1bf13b33633d8c5a8089545d1 (patch)
tree: b6f5e312b1b3b4c71c9016dc807272d784721c0a /docs/ec2-scripts.md
parent: 645cf3fcc21987417b2946bdeeeb60af3edf667e (diff)
[SPARK-4012] stop SparkContext when the exception is thrown from an infinite loop
https://issues.apache.org/jira/browse/SPARK-4012

This patch is a resubmission of https://github.com/apache/spark/pull/2864.

What I am proposing in this patch is that ***when an exception is thrown from an infinite loop, we should stop the SparkContext instead of letting the JVM throw exceptions forever***. So, in the infinite loops that were originally wrapped with `logUncaughtExceptions`, I changed the wrapper to `tryOrStopSparkContext`, so that the Spark component is stopped.

Stopping the JVM process early is helpful for HA scheme design. For example, a user may have a script that checks for the existence of the pid of the Spark Streaming driver to monitor availability; with the code before this patch, the JVM process is still alive but not functional when such exceptions are thrown.

andrewor14, srowen, mind taking further consideration of the change?

Author: CodingCat <zhunansjtu@gmail.com>

Closes #5004 from CodingCat/SPARK-4012-1 and squashes the following commits:

589276a [CodingCat] throw fatal error again
3c72cd8 [CodingCat] address the comments
6087864 [CodingCat] revise comments
6ad3eb0 [CodingCat] stop SparkContext instead of quit the JVM process
6322959 [CodingCat] exit JVM process when the exception is thrown from an infinite loop
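The pattern described in the commit message can be sketched as below. `tryOrStopSparkContext` is the helper the patch introduces, but this is only an illustration of the idea, not Spark's actual implementation: `StoppableContext` is a hypothetical stand-in for `SparkContext` used here to keep the sketch self-contained, and the fatal/non-fatal split mirrors the "throw fatal error again" commit via Scala's `NonFatal` extractor.

```scala
import java.util.concurrent.atomic.AtomicBoolean
import scala.util.control.NonFatal

// Hypothetical stand-in for SparkContext: only records whether stop()
// was called, which is all this sketch needs.
class StoppableContext {
  private val stopped = new AtomicBoolean(false)
  def stop(): Unit = stopped.set(true)
  def isStopped: Boolean = stopped.get()
}

object TryOrStop {
  // Sketch of the tryOrStopSparkContext idea: run the body, and on a
  // non-fatal exception stop the context instead of letting the JVM
  // linger in a live-but-broken state. Fatal errors (OutOfMemoryError
  // and friends) fall through NonFatal and propagate unchanged.
  def tryOrStopContext(ctx: StoppableContext)(body: => Unit): Unit = {
    try {
      body
    } catch {
      case NonFatal(e) =>
        // Stopping the context lets a pid-watching HA script notice
        // the failure, instead of seeing a functional-looking process.
        ctx.stop()
    }
  }
}
```

With this wrapper, a monitoring loop that throws no longer leaves the process in a zombie state: the context is stopped and availability checks fail as intended.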
Diffstat (limited to 'docs/ec2-scripts.md')
0 files changed, 0 insertions, 0 deletions