author     huangzhaowei <carlmartinmax@gmail.com>  2015-01-11 16:32:47 -0800
committer  Andrew Or <andrew@databricks.com>       2015-01-16 09:24:36 -0800
commit     89a0990c1647f83b5479c3f61bb1ed72adc0bd40 (patch)
tree       4f2f64aebc9e629048d272c81f3c89ae2d71dab2 /examples
parent     b3fe6df67fe4c2f71d8424a50aac7e56f9032606 (diff)
[SPARK-4033][Examples] Too-large input to SparkPi causes an empty-collection exception
If the slices argument passed to SparkPi is larger than 21474, the Int multiplication 100000 * slices exceeds Int.MaxValue, so 'n' overflows and may become negative. A negative n makes the (1 to n) range an empty Seq, and the subsequent 'reduce' action then throws UnsupportedOperationException("empty collection"). Note also that the maximum size of the input to sc.parallelize is Int.MaxValue - 1, not Int.MaxValue.

Author: huangzhaowei <carlmartinmax@gmail.com>

Closes #2874 from SaintBacchus/SparkPi and squashes the following commits:

62d7cd7 [huangzhaowei] Add a commit to explain the modify
4cdc388 [huangzhaowei] Update SparkPi.scala
9a2fb7b [huangzhaowei] Input of the SparkPi is too big
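To make the failure concrete, here is a minimal sketch of the overflow in plain Scala (a hypothetical REPL session, not part of the patch; slices = 25000 is just an example value above the overflow threshold):

    val slices = 25000
    val n = 100000 * slices   // Int arithmetic: 2500000000 wraps to -1794967296
    (1 to n).isEmpty          // true: a range with a negative upper bound is empty
    // Calling reduce on the RDD built from this empty range is what raises
    // java.lang.UnsupportedOperationException: empty collection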
Diffstat (limited to 'examples')
-rw-r--r--  examples/src/main/scala/org/apache/spark/examples/SparkPi.scala | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala b/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala
index 9fbb0a800d..35b8dd6c29 100644
--- a/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala
@@ -27,8 +27,8 @@ object SparkPi {
     val conf = new SparkConf().setAppName("Spark Pi")
     val spark = new SparkContext(conf)
     val slices = if (args.length > 0) args(0).toInt else 2
-    val n = 100000 * slices
-    val count = spark.parallelize(1 to n, slices).map { i =>
+    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
+    val count = spark.parallelize(1 until n, slices).map { i =>
       val x = random * 2 - 1
       val y = random * 2 - 1
       if (x*x + y*y < 1) 1 else 0
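For reference, a quick check of the patched arithmetic (a hedged sketch using the same example value as above, not part of the commit):

    val slices = 25000
    val n = math.min(100000L * slices, Int.MaxValue).toInt
    // 100000L forces Long arithmetic, so the product cannot wrap around;
    // math.min caps it before the narrowing .toInt, giving n == Int.MaxValue here.
    (1 until n).length  // Int.MaxValue - 1 elements: per the commit message, the
                        // largest input size sc.parallelize accepts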