path: root/core
author  wm624@hotmail.com <wm624@hotmail.com>  2016-09-14 09:49:15 +0100
committer  Sean Owen <sowen@cloudera.com>  2016-09-14 09:49:15 +0100
commit  18b4f035f40359b3164456d0dab52dbc762ea3b4 (patch)
tree  a1c070886692f895701a6503de77c2b43b7db301 /core
parent  b5bfcddbfbc2e79d3d0fbd43942716946e6c4ba3 (diff)
[CORE][DOC] remove redundant comment
## What changes were proposed in this pull request?

In the doc comment, `the estimated` appears twice in a row. This PR simply removes the redundant phrase and adjusts the formatting.

Author: wm624@hotmail.com <wm624@hotmail.com>

Closes #15091 from wangmiao1981/comment.
Diffstat (limited to 'core')
-rw-r--r--  core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala | 18
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala b/core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala
index 1a3bf2bb67..baa3fde2d0 100644
--- a/core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala
+++ b/core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala
@@ -169,12 +169,12 @@ private[spark] class MemoryStore(
* temporary unroll memory used during the materialization is "transferred" to storage memory,
* so we won't acquire more memory than is actually needed to store the block.
*
- * @return in case of success, the estimated the estimated size of the stored data. In case of
- * failure, return an iterator containing the values of the block. The returned iterator
- * will be backed by the combination of the partially-unrolled block and the remaining
- * elements of the original input iterator. The caller must either fully consume this
- * iterator or call `close()` on it in order to free the storage memory consumed by the
- * partially-unrolled block.
+ * @return in case of success, the estimated size of the stored data. In case of failure, return
+ * an iterator containing the values of the block. The returned iterator will be backed
+ * by the combination of the partially-unrolled block and the remaining elements of the
+ * original input iterator. The caller must either fully consume this iterator or call
+ * `close()` on it in order to free the storage memory consumed by the partially-unrolled
+ * block.
*/
private[storage] def putIteratorAsValues[T](
blockId: BlockId,
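
For context only (not part of the patch): a minimal caller-side sketch of the contract described in the doc comment above, modelled on how Spark 2.x's `BlockManager` consumes this result. The `Either[PartiallyUnrolledIterator[T], Long]` return type is taken from the surrounding `MemoryStore` code (the hunk only shows the doc comment), and the helper name `storeOrRelease` is hypothetical.

```scala
import scala.reflect.ClassTag

import org.apache.spark.storage.BlockId
import org.apache.spark.storage.memory.MemoryStore

// Sketch only: MemoryStore and putIteratorAsValues are private to Spark, so this
// illustrates the calling convention rather than compiling as external user code.
def storeOrRelease[T: ClassTag](
    store: MemoryStore,
    blockId: BlockId,
    values: Iterator[T]): Option[Long] = {
  store.putIteratorAsValues(blockId, values, implicitly[ClassTag[T]]) match {
    case Right(estimatedSize) =>
      // Success: the block is fully stored; report the estimated size.
      Some(estimatedSize)
    case Left(partiallyUnrolled) =>
      // Failure: the returned iterator is backed by the partially-unrolled block
      // plus the remaining input. It must be fully consumed or close()d, otherwise
      // the storage memory it holds is never released.
      partiallyUnrolled.close()
      None
  }
}
```
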
@@ -298,9 +298,9 @@ private[spark] class MemoryStore(
* temporary unroll memory used during the materialization is "transferred" to storage memory,
* so we won't acquire more memory than is actually needed to store the block.
*
- * @return in case of success, the estimated the estimated size of the stored data. In case of
- * failure, return a handle which allows the caller to either finish the serialization
- * by spilling to disk or to deserialize the partially-serialized block and reconstruct
+ * @return in case of success, the estimated size of the stored data. In case of failure,
+ * return a handle which allows the caller to either finish the serialization by
+ * spilling to disk or to deserialize the partially-serialized block and reconstruct
* the original input iterator. The caller must either fully consume this result
* iterator or call `discard()` on it in order to free the storage memory consumed by the
* partially-unrolled block.
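
Likewise for the serialized path (again, not part of the patch): this second hunk documents what is, in the surrounding Spark 2.x code, `MemoryStore.putIteratorAsBytes`, which returns `Either[PartiallySerializedBlock[T], Long]`. Below is a hedged sketch of the two failure options the doc comment describes; the method and handle names come from that code, while `storeSerializedOrSpill` and `diskOut` are hypothetical.

```scala
import java.io.OutputStream

import scala.reflect.ClassTag

import org.apache.spark.memory.MemoryMode
import org.apache.spark.storage.BlockId
import org.apache.spark.storage.memory.MemoryStore

// Sketch only: illustrates the failure handle's two options (spill vs. discard);
// not compilable outside Spark because these classes are package-private.
def storeSerializedOrSpill[T: ClassTag](
    store: MemoryStore,
    blockId: BlockId,
    values: Iterator[T],
    diskOut: Option[OutputStream]): Option[Long] = {
  store.putIteratorAsBytes(blockId, values, implicitly[ClassTag[T]], MemoryMode.ON_HEAP) match {
    case Right(estimatedSize) =>
      // Success: the serialized block fits in memory.
      Some(estimatedSize)
    case Left(handle) =>
      diskOut match {
        case Some(out) =>
          // Finish the serialization by spilling the remaining values to disk.
          handle.finishWritingToStream(out)
        case None =>
          // No disk fallback: free the memory held by the partially-serialized block.
          handle.discard()
      }
      None
  }
}
```
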