author    Prabeesh K <prabsmails@gmail.com>  2014-04-04 21:32:00 -0700
committer Reynold Xin <rxin@apache.org>  2014-04-04 21:32:00 -0700
commit    0acc7a02b4323f4e0b7736bc1999bdcedab41f39 (patch)
tree      c2ec84344a9aa1ea0f412983612e675c08ea88ef /docs/quick-start.md
parent    8de038eb366ded2ac74f72517e40545dbbab8cdd (diff)
download  spark-0acc7a02b4323f4e0b7736bc1999bdcedab41f39.tar.gz
          spark-0acc7a02b4323f4e0b7736bc1999bdcedab41f39.tar.bz2
          spark-0acc7a02b4323f4e0b7736bc1999bdcedab41f39.zip
small fix ( proogram -> program )
Author: Prabeesh K <prabsmails@gmail.com>

Closes #331 from prabeesh/patch-3 and squashes the following commits:

9399eb5 [Prabeesh K] small fix(proogram -> program)
Diffstat (limited to 'docs/quick-start.md')
-rw-r--r--  docs/quick-start.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 13df6beea1..60e8b1ba0e 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -124,7 +124,7 @@ object SimpleApp {
}
{% endhighlight %}
-This program just counts the number of lines containing 'a' and the number containing 'b' in the Spark README. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext, we initialize a SparkContext as part of the proogram. We pass the SparkContext constructor four arguments, the type of scheduler we want to use (in this case, a local scheduler), a name for the application, the directory where Spark is installed, and a name for the jar file containing the application's code. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes.
+This program just counts the number of lines containing 'a' and the number containing 'b' in the Spark README. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext, we initialize a SparkContext as part of the program. We pass the SparkContext constructor four arguments, the type of scheduler we want to use (in this case, a local scheduler), a name for the application, the directory where Spark is installed, and a name for the jar file containing the application's code. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes.
This file depends on the Spark API, so we'll also include an sbt configuration file, `simple.sbt` which explains that Spark is a dependency. This file also adds a repository that Spark depends on:
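For reference, the paragraph changed above describes the four-argument SparkContext constructor used by the quick-start's standalone app (only the tail of that example is visible in this hunk). A minimal sketch of the call, assuming a Scala 2.10 build and sbt's default jar output path (both assumptions, not taken from this diff), would look roughly like:

{% highlight scala %}
/*** SimpleApp.scala ***/
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    // Replace $YOUR_SPARK_HOME with the location where Spark is installed
    val logFile = "$YOUR_SPARK_HOME/README.md"
    // The four constructor arguments the docs describe: the scheduler
    // ("local" for a local scheduler), an application name, the Spark
    // installation directory, and the jar(s) containing the app's code.
    // The jar path below is an assumed sbt default, not from this commit.
    val sc = new SparkContext("local", "Simple App", "$YOUR_SPARK_HOME",
      List("target/scala-2.10/simple-project_2.10-1.0.jar"))
    val logData = sc.textFile(logFile, 2).cache()
    // Count lines containing 'a' and lines containing 'b' in the README
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
{% endhighlight %}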
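The closing context line of the hunk mentions the `simple.sbt` build file that declares Spark as a dependency and adds an extra repository. A plausible sketch of that file, with the Scala and Spark version strings as assumptions for this era of the docs, is:

{% highlight scala %}
name := "Simple Project"

version := "1.0"

// Assumed Scala version; it must match the Scala version Spark was built against
scalaVersion := "2.10.3"

// Depend on Spark core; the version string here is an assumption
libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.0-incubating"

// The repository the docs mention: Spark's Akka dependency was published
// here rather than on Maven Central at the time
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
{% endhighlight %}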