author     Reynold Xin <rxin@apache.org>    2014-12-06 00:36:33 +0000
committer  Reynold Xin <rxin@apache.org>    2014-12-06 00:36:33 +0000
commit     0c7644bbab0d91e5cdb6e5a810dd22346118b750
tree       48368665740274f9637a579e588646aef490e4fd
parent     d09571a8e28b670a4924f4f73743464ec6f81f52
Updated FAQ html page
 site/faq.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/site/faq.html b/site/faq.html
index ba32b0965..845aa5990 100644
--- a/site/faq.html
+++ b/site/faq.html
@@ -178,7 +178,7 @@ Spark is a fast and general processing engine compatible with Hadoop data. It ca
 </p>

 <p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">Many organizations run Spark on clusters with thousands of nodes.</p>
+<p class="answer">Many organizations run Spark on clusters with thousands of nodes. The largest cluster we know has over 8000 nodes.</p>

 <p class="question">What happens if my dataset does not fit in memory?</p>
 <p class="answer">Often each partition of data is small and does fit in memory, and these partitions are processed a few at a time. For very large partitions that do not fit in memory, Spark's built-in operators perform external operations on datasets.</p>
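The unchanged FAQ answer above describes the general "external operation" technique: sort or process one bounded partition in memory at a time, spill each result to disk, then stream a merge over the spilled runs. The following is a minimal stand-alone sketch of that idea in Python — it is not Spark's actual implementation, and the function names (`external_sort`, `spill`) and the `partition_size` parameter are illustrative assumptions, not Spark APIs.

```python
import heapq
import tempfile

def spill(sorted_partition):
    """Write one sorted partition ("run") to a temp file; return its path.
    (Illustrative helper, not a Spark API.)"""
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    for rec in sorted_partition:
        f.write(f"{rec}\n")
    f.close()
    return f.name

def external_sort(records, partition_size=3):
    """Sort integers that may not all fit in memory: sort fixed-size
    partitions in memory, spill each to disk, then lazily k-way merge
    the sorted runs so only one record per run is resident at a time."""
    runs, partition = [], []
    for rec in records:
        partition.append(rec)
        if len(partition) == partition_size:
            runs.append(spill(sorted(partition)))
            partition = []
    if partition:  # spill the final, possibly short, partition
        runs.append(spill(sorted(partition)))
    files = [open(path) for path in runs]
    try:
        # heapq.merge streams the runs; memory stays O(number of runs)
        for line in heapq.merge(*files, key=int):
            yield int(line)
    finally:
        for f in files:
            f.close()
```

A small partition size is used only to force several spills on a tiny input; a real system would size partitions to available memory:

```python
list(external_sort([5, 2, 9, 1, 7, 3, 8]))  # → [1, 2, 3, 5, 7, 8, 9]
```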