author	Patrick Wendell <pwendell@gmail.com>	2012-10-09 21:52:49 -0700
committer	Patrick Wendell <pwendell@gmail.com>	2012-10-09 22:39:29 -0700
commit	4de5cc1ad43cc50b8610913f60916899a7fd75ad (patch)
tree	0a811a23764707abea636f0f1fffd13c2eed4018 /docs/quick-start.md
parent	8321e7f0c2d95f7b382293a4208dbf8cd2fe7809 (diff)
Removing reference to publish-local in the quickstart
Diffstat (limited to 'docs/quick-start.md')
-rw-r--r--  docs/quick-start.md  15
1 file changed, 4 insertions, 11 deletions
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 7d35fb01bb..5625fc2ddf 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -101,13 +101,9 @@ res9: Long = 15
It may seem silly to use Spark to explore and cache a 30-line text file. The interesting part is that these same functions can be used on very large data sets, even when they are striped across tens or hundreds of nodes. You can also do this interactively by connecting `spark-shell` to a cluster, as described in the [programming guide](scala-programming-guide.html#initializing-spark).
# A Standalone Job in Scala
-Now say we wanted to write a standalone job using the Spark API. We will walk through a simple job in both Scala (with sbt) and Java (with maven). If you using other build systems, please reference the Spark assembly JAR in the developer guide. The first step is to publish Spark to our local Ivy/Maven repositories. From the Spark directory:
+Now say we wanted to write a standalone job using the Spark API. We will walk through a simple job in both Scala (with sbt) and Java (with Maven). If you are using another build system, consider using the Spark assembly JAR described in the developer guide.
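As a rough sketch, an sbt build definition for such a job might declare Spark as a published dependency along these lines; the group ID, artifact name, and version shown here are assumptions for illustration, so check the published Spark artifacts (or the developer guide) for the exact coordinates:

{% highlight scala %}
// simple.sbt (illustrative sketch -- the Spark coordinates below are assumed, not authoritative)

name := "Simple Project"

version := "1.0"

scalaVersion := "2.9.2"

// Depend on the published Spark core artifact instead of a locally published copy.
libraryDependencies += "org.spark-project" %% "spark-core" % "0.6.0"
{% endhighlight %}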
-{% highlight bash %}
-$ sbt/sbt publish-local
-{% endhighlight %}
-
-Next, we'll create a very simple Spark job in Scala. So simple, in fact, that it's named `SimpleJob.scala`:
+We'll create a very simple Spark job in Scala. So simple, in fact, that it's named `SimpleJob.scala`:
{% highlight scala %}
/*** SimpleJob.scala ***/
@@ -159,12 +155,9 @@ Lines with a: 8422, Lines with b: 1836
This example only runs the job locally; for a tutorial on running jobs across several machines, see the [Standalone Mode](spark-standalone.html) documentation, and consider using a distributed input source, such as HDFS.
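To make that concrete, the sketch below (not part of this guide) shows how the same job might be pointed at a standalone cluster and an HDFS input instead of the local machine and filesystem; the object name, master URL, HDFS path, and jar path are placeholders for a real deployment:

{% highlight scala %}
// Hypothetical variant of SimpleJob that runs on a standalone cluster
// and reads a distributed input source. All paths and URLs are placeholders.
import spark.SparkContext
import SparkContext._

object DistributedSimpleJob {
  def main(args: Array[String]) {
    val sc = new SparkContext(
      "spark://master:7077",                // URL of the standalone cluster's master, instead of "local"
      "Simple Job",
      System.getenv("SPARK_HOME"),          // lets workers locate the Spark installation
      List("target/scala-2.9.2/simple-project_2.9.2-1.0.jar"))  // ship the job's jar to the workers
    val logData = sc.textFile("hdfs://namenode:9000/user/me/logs.txt")  // HDFS instead of a local file
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
{% endhighlight %}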
# A Standalone Job in Java
-Now say we wanted to write a standalone job using the Java API. We will walk through doing this with Maven. If you using other build systems, please reference the Spark assembly JAR in the developer guide. The first step is to publish Spark to our local Ivy/Maven repositories. From the Spark directory:
+Now say we wanted to write a standalone job using the Java API. We will walk through doing this with Maven. If you are using another build system, consider using the Spark assembly JAR described in the developer guide.
-{% highlight bash %}
-$ sbt/sbt publish-local
-{% endhighlight %}
-Next, we'll create a very simple Spark job, `SimpleJob.java`:
+We'll create a very simple Spark job, `SimpleJob.java`:
{% highlight java %}
/*** SimpleJob.java ***/