author    Matei Alexandru Zaharia <matei@apache.org>  2014-07-18 20:55:11 +0000
committer Matei Alexandru Zaharia <matei@apache.org>  2014-07-18 20:55:11 +0000
commit    7427588a2716434f4c6478dc6513c097896a2bf0 (patch)
tree      d5e5c8bf56b8b6ccd9badd706cac622e638d7eff
parent    d6593a8afc64c77416ed6cbe6540b4b755f4d0e9 (diff)
tweak
-rw-r--r--  faq.md        | 2 +-
-rw-r--r--  site/faq.html | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/faq.md b/faq.md
index 4aa96dcf7..b9131b4e5 100644
--- a/faq.md
+++ b/faq.md
@@ -23,7 +23,7 @@ streaming, interactive queries, and machine learning.
<p class="answer">Spark supports Scala, Java and Python.</p>
<p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">We are aware of multiple deployments on over 1000 nodes.</p>
+<p class="answer">We have seen multiple deployments on over 1000 nodes.</p>
<p class="question">What happens when a cached dataset does not fit in memory?</p>
<p class="answer">Spark can either spill it to disk or recompute the partitions that don't fit in RAM each time they are requested. By default, it uses recomputation, but you can set a dataset's <a href="{{site.url}}docs/latest/scala-programming-guide.html#rdd-persistence">storage level</a> to <code>MEMORY_AND_DISK</code> to avoid this. </p>
diff --git a/site/faq.html b/site/faq.html
index 749260635..e5c76be08 100644
--- a/site/faq.html
+++ b/site/faq.html
@@ -174,7 +174,7 @@ streaming, interactive queries, and machine learning.
<p class="answer">Spark supports Scala, Java and Python.</p>
<p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">We are aware of multiple deployments on over 1000 nodes.</p>
+<p class="answer">We have seen multiple deployments on over 1000 nodes.</p>
<p class="question">What happens when a cached dataset does not fit in memory?</p>
<p class="answer">Spark can either spill it to disk or recompute the partitions that don't fit in RAM each time they are requested. By default, it uses recomputation, but you can set a dataset's <a href="/docs/latest/scala-programming-guide.html#rdd-persistence">storage level</a> to <code>MEMORY_AND_DISK</code> to avoid this. </p>