path: root/docs/streaming-programming-guide.md
author     Liwei Lin <lwlin7@gmail.com>        2016-04-02 17:55:46 -0700
committer  Reynold Xin <rxin@databricks.com>   2016-04-02 17:55:46 -0700
commit     03d130f9734be66e8aefc4ffaa207ee13e837629 (patch)
tree       f2d836f861f2f24d813c60023bb1efe6d5cfcb5b /docs/streaming-programming-guide.md
parent     4a6e78abd9d5edc4a5092738dff0006bbe202a89 (diff)
[SPARK-14342][CORE][DOCS][TESTS] Remove straggler references to Tachyon
## What changes were proposed in this pull request?

Straggler references to Tachyon were removed:
- for docs, `tachyon` has been generalized as `off-heap memory`;
- for Mesos test suites, the key-value `tachyon:true`/`tachyon:false` has been changed to `os:centos`/`os:ubuntu`, since `os` is an example constraint used by the [Mesos official docs](http://mesos.apache.org/documentation/attributes-resources/).

## How was this patch tested?

Existing test suites.

Author: Liwei Lin <lwlin7@gmail.com>

Closes #12129 from lw-lin/tachyon-cleanup.
Diffstat (limited to 'docs/streaming-programming-guide.md')
-rw-r--r--  docs/streaming-programming-guide.md  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 8d21917a7d..7f6c0ed699 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -2178,7 +2178,7 @@ overall processing throughput of the system, its use is still recommended to ach
consistent batch processing times. Make sure you set the CMS GC on both the driver (using `--driver-java-options` in `spark-submit`) and the executors (using [Spark configuration](configuration.html#runtime-environment) `spark.executor.extraJavaOptions`).
* **Other tips**: To further reduce GC overheads, here are some more tips to try.
- - Use Tachyon for off-heap storage of persisted RDDs. See more detail in the [Spark Programming Guide](programming-guide.html#rdd-persistence).
+ - Persist RDDs using the `OFF_HEAP` storage level. See more detail in the [Spark Programming Guide](programming-guide.html#rdd-persistence).
- Use more executors with smaller heap sizes. This will reduce the GC pressure within each JVM heap.
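For context (not part of the patch), here is a minimal Scala sketch of the tuning advice in the hunk above: persisting a stream's RDDs with the `OFF_HEAP` storage level and setting the CMS GC on executors via `spark.executor.extraJavaOptions`. The application name, socket source, and off-heap sizing are hypothetical, and the off-heap configuration keys are assumptions for recent Spark versions; driver-side CMS options would still be passed with `--driver-java-options` at submit time.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object OffHeapPersistExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("OffHeapPersistExample")
      // Use the CMS collector on executors, as recommended in the tuning section.
      .set("spark.executor.extraJavaOptions", "-XX:+UseConcMarkSweepGC")
      // Assumed configuration keys: off-heap storage must be enabled and sized explicitly.
      .set("spark.memory.offHeap.enabled", "true")
      .set("spark.memory.offHeap.size", "1g")

    val ssc = new StreamingContext(conf, Seconds(10))
    val lines = ssc.socketTextStream("localhost", 9999)

    // Persist the stream's RDDs with the OFF_HEAP storage level to reduce GC pressure.
    lines.persist(StorageLevel.OFF_HEAP)

    lines.count().print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```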