author    Sean Owen <sowen@cloudera.com>  2017-02-20 09:02:09 -0800
committer Sean Owen <sowen@cloudera.com>  2017-02-20 09:02:09 -0800
commit d0ecca6075d86bedebf8bc2278085a2cd6cb0a43 (patch)
tree   4582f88e40df02916659800e8fa4068d585da63d /core/src/main/scala/org
parent 776b8f17cfc687a57c005a421a81e591c8d44a3f (diff)
[SPARK-19646][CORE][STREAMING] binaryRecords replicates records in scala API
## What changes were proposed in this pull request?

Use `BytesWritable.copyBytes`, not `getBytes`, because `getBytes` returns the underlying array, which the record reader may reuse across reads when successive records have the same size, as is the case with the binaryRecords APIs.

## How was this patch tested?

Existing tests

Author: Sean Owen <sowen@cloudera.com>

Closes #16974 from srowen/SPARK-19646.
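The aliasing bug described above can be reproduced without Hadoop. Below is a minimal sketch in which `ReusableBuffer` is a hypothetical stand-in for a reused `BytesWritable`: `alias` models `getBytes` (returns the backing array) and `copy` models `copyBytes()` (returns a fresh array).

```scala
// Simplified model of a record reader that reuses one buffer across reads,
// as Hadoop does with a single BytesWritable when record sizes are fixed.
class ReusableBuffer(size: Int) {
  private val backing = new Array[Byte](size)
  def read(record: Array[Byte]): ReusableBuffer = {
    Array.copy(record, 0, backing, 0, size) // overwrite the buffer in place
    this
  }
  def alias: Array[Byte] = backing         // like BytesWritable.getBytes
  def copy: Array[Byte]  = backing.clone() // like BytesWritable.copyBytes()
}

val buf = new ReusableBuffer(2)
val records = Seq(Array[Byte](1, 2), Array[Byte](3, 4))

// Buggy: every element aliases the same reused backing array, so after the
// last read all elements hold the bytes of the final record.
val aliased = records.map(r => buf.read(r).alias)

// Fixed: copying materializes each record before the buffer is overwritten.
val copied = records.map(r => buf.read(r).copy)
```

This is why the diff below replaces `v.getBytes` with `v.copyBytes()`: mapping over the iterator captures a reference per record, and only a copy survives the buffer reuse.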
Diffstat (limited to 'core/src/main/scala/org')
-rw-r--r-- core/src/main/scala/org/apache/spark/SparkContext.scala | 5
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index e4d83893e7..17194b9f06 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -961,12 +961,11 @@ class SparkContext(config: SparkConf) extends Logging {
classOf[LongWritable],
classOf[BytesWritable],
conf = conf)
- val data = br.map { case (k, v) =>
- val bytes = v.getBytes
+ br.map { case (k, v) =>
+ val bytes = v.copyBytes()
assert(bytes.length == recordLength, "Byte array does not have correct length")
bytes
}
- data
}
/**