author    Andy Konwinski <andyk@berkeley.edu>    2012-10-08 17:14:53 -0700
committer Andy Konwinski <andyk@berkeley.edu>    2012-10-08 17:17:17 -0700
commit    e1a724f39ca986fb3aee9619ca80c10878520f56 (patch)
tree      2233308989fb40592f211e13026999e0dc178e46 /docs/quick-start.md
parent    1231eb12e675fec47bc2d3139041b1c178a08c37 (diff)
Updating lots of docs to use the new special version number variables, and adding the version to the navbar so it is easy to tell which version of Spark these docs were compiled for.
Diffstat (limited to 'docs/quick-start.md')
-rw-r--r--  docs/quick-start.md | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/quick-start.md b/docs/quick-start.md
index d28e788239..51e60426b5 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -119,7 +119,7 @@ import SparkContext._
object SimpleJob extends Application {
val logFile = "/var/log/syslog" // Should be some log file on your system
val sc = new SparkContext("local", "Simple Job", "$YOUR_SPARK_HOME",
- "target/scala-2.9.2/simple-project_2.9.2-1.0.jar")
+ "target/scala-{{site.SCALA_VERSION}}/simple-project_{{site.SCALA_VERSION}}-1.0.jar")
val logData = sc.textFile(logFile, 2).cache()
val numAs = logData.filter(line => line.contains("a")).count()
val numBs = logData.filter(line => line.contains("b")).count()
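
For reference, if `site.SCALA_VERSION` expands to `2.9.2` (the value that was hard-coded before this change), the templated line renders to exactly what the removed line said:

{% highlight scala %}
// Rendered output, assuming site.SCALA_VERSION = 2.9.2
val sc = new SparkContext("local", "Simple Job", "$YOUR_SPARK_HOME",
  "target/scala-2.9.2/simple-project_2.9.2-1.0.jar")
{% endhighlight %}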
@@ -136,9 +136,9 @@ name := "Simple Project"
version := "1.0"
-scalaVersion := "2.9.2"
+scalaVersion := "{{site.SCALA_VERSION}}"
-libraryDependencies += "org.spark-project" %% "spark-core" % "0.6.0-SNAPSHOT"
+libraryDependencies += "org.spark-project" %% "spark-core" % "{{site.SPARK_VERSION}}"
{% endhighlight %}
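
As a sanity check on the substitution, here is how the full `simple.sbt` would render, assuming `site.SCALA_VERSION` is `2.9.2` and `site.SPARK_VERSION` is `0.6.0-SNAPSHOT` (the values the diff replaces; the actual values come from the docs' Jekyll configuration):

{% highlight scala %}
// Rendered simple.sbt, assuming SCALA_VERSION = 2.9.2 and SPARK_VERSION = 0.6.0-SNAPSHOT
name := "Simple Project"

version := "1.0"

scalaVersion := "2.9.2"

libraryDependencies += "org.spark-project" %% "spark-core" % "0.6.0-SNAPSHOT"
{% endhighlight %}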
Of course, for sbt to work correctly, we'll need to lay out `SimpleJob.scala` and `simple.sbt` according to the typical directory structure (sketched below). Once that is in place, we can create a jar package containing the job's code, then use `sbt run` to execute our example job.
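
A minimal sketch of that layout, following standard sbt conventions, with the Scala source under `src/main/scala`:

{% highlight bash %}
$ find .
.
./simple.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleJob.scala
{% endhighlight %}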
@@ -209,8 +209,8 @@ Our Maven `pom.xml` file will list Spark as a dependency. Note that Spark artifa
<dependencies>
<dependency> <!-- Spark dependency -->
<groupId>org.spark-project</groupId>
- <artifactId>spark-core_2.9.2</artifactId>
- <version>0.6.0-SNAPSHOT</version>
+ <artifactId>spark-core_{{site.SCALA_VERSION}}</artifactId>
+ <version>{{site.SPARK_VERSION}}</version>
</dependency>
</dependencies>
</project>
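
Rendered with the same assumed values (`site.SCALA_VERSION` = `2.9.2`, `site.SPARK_VERSION` = `0.6.0-SNAPSHOT`), the templated Maven dependency comes out identical to the pre-template version:

{% highlight xml %}
<!-- Rendered output, assuming SCALA_VERSION = 2.9.2 and SPARK_VERSION = 0.6.0-SNAPSHOT -->
<dependency> <!-- Spark dependency -->
  <groupId>org.spark-project</groupId>
  <artifactId>spark-core_2.9.2</artifactId>
  <version>0.6.0-SNAPSHOT</version>
</dependency>
{% endhighlight %}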