authortedyu <yuzhihong@gmail.com>2015-09-11 21:45:45 +0100
committerSean Owen <sowen@cloudera.com>2015-09-11 21:45:45 +0100
commitb231ab8938ae3c4fc2089cfc69c0d8164807d533 (patch)
tree1e72138bc2946e27094da14fbd3269d7602aa368 /core
parent5f46444765a377696af76af6e2c77ab14bfdab8e (diff)
downloadspark-b231ab8938ae3c4fc2089cfc69c0d8164807d533.tar.gz
spark-b231ab8938ae3c4fc2089cfc69c0d8164807d533.tar.bz2
spark-b231ab8938ae3c4fc2089cfc69c0d8164807d533.zip
[SPARK-10546] Check partitionId's range in ExternalSorter#spill()
See this thread for background: http://search-hadoop.com/m/q3RTt0rWvIkHAE81

We should check the range of the partition Id and provide a meaningful message via an exception. Alternatively, we could use abs() and modulo to force the partition Id into a legitimate range; however, the expectation is that the user should correct the logic error in their own code.

Author: tedyu <yuzhihong@gmail.com>

Closes #8703 from tedyu/master.
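A minimal sketch of the two approaches discussed above, for context. The names partitionId and numPartitions mirror the ExternalSorter code; checkPartitionId and normalizePartitionId are hypothetical helpers used only for illustration and are not part of the patch.

    // Approach taken by the patch: fail fast with a descriptive message
    // when a partitioner produces an out-of-range id.
    def checkPartitionId(partitionId: Int, numPartitions: Int): Unit = {
      require(partitionId >= 0 && partitionId < numPartitions,
        s"partition Id: ${partitionId} should be in the range [0, ${numPartitions})")
    }

    // Rejected alternative: coerce the id into a legal range with abs()/modulo.
    // This silently masks the logic error in the user's partitioner.
    def normalizePartitionId(partitionId: Int, numPartitions: Int): Int =
      ((partitionId % numPartitions) + numPartitions) % numPartitions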
Diffstat (limited to 'core')
-rw-r--r-- core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala b/core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala
index 138c05dff1..31230d5978 100644
--- a/core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala
+++ b/core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala
@@ -297,6 +297,8 @@ private[spark] class ExternalSorter[K, V, C](
val it = collection.destructiveSortedWritablePartitionedIterator(comparator)
while (it.hasNext) {
val partitionId = it.nextPartition()
+ require(partitionId >= 0 && partitionId < numPartitions,
+ s"partition Id: ${partitionId} should be in the range [0, ${numPartitions})")
it.writeNext(writer)
elementsPerPartition(partitionId) += 1
objectsWritten += 1
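
For illustration only: a hypothetical, deliberately buggy Partitioner that returns ids in [1, numPartitions] instead of [0, numPartitions) would now trip the new require with an IllegalArgumentException naming the offending id, rather than failing less descriptively later in the spill path.

    import org.apache.spark.Partitioner

    // Hypothetical partitioner with an off-by-one bug (not from the patch).
    class OffByOnePartitioner(partitions: Int) extends Partitioner {
      override def numPartitions: Int = partitions
      // Bug: the result can equal `partitions`, which is out of range.
      override def getPartition(key: Any): Int =
        (key.hashCode % partitions).abs + 1
    }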