author     Matei Zaharia <matei@eecs.berkeley.edu>  2012-10-09 09:46:20 -0700
committer  Matei Zaharia <matei@eecs.berkeley.edu>  2012-10-09 09:46:20 -0700
commit     4780fee887b98e1bf07a21f6152c83bd03378f66 (patch)
tree       904b90dc62cb541d77a5baf161d8d57e4a04a1c9 /docs/quick-start.md
parent     d3b8252050f3293b1a9d292fd9d37ddb68a01124 (diff)
parent     e1a724f39ca986fb3aee9619ca80c10878520f56 (diff)
Merge pull request #260 from andyk/update-docs-to-use-version-vars
Updates docs to use the new version num vars and adds Spark version in nav bar
Diffstat (limited to 'docs/quick-start.md')
-rw-r--r--  docs/quick-start.md | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/quick-start.md b/docs/quick-start.md
index d28e788239..51e60426b5 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -119,7 +119,7 @@ import SparkContext._
 object SimpleJob extends Application {
   val logFile = "/var/log/syslog" // Should be some log file on your system
   val sc = new SparkContext("local", "Simple Job", "$YOUR_SPARK_HOME",
-    "target/scala-2.9.2/simple-project_2.9.2-1.0.jar")
+    "target/scala-{{site.SCALA_VERSION}}/simple-project_{{site.SCALA_VERSION}}-1.0.jar")
   val logData = sc.textFile(logFile, 2).cache()
   val numAs = logData.filter(line => line.contains("a")).count()
   val numBs = logData.filter(line => line.contains("b")).count()
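For reference, the full Scala example this hunk edits reads roughly as the sketch below once Jekyll renders the template variable. The concrete `2.9.2` value, and the final `println`, are assumptions for illustration, not part of this diff.

```scala
// SimpleJob.scala -- sketch of the rendered quick-start example,
// assuming {{site.SCALA_VERSION}} expands to 2.9.2
import spark.SparkContext
import SparkContext._

object SimpleJob extends Application {
  val logFile = "/var/log/syslog" // Should be some log file on your system
  val sc = new SparkContext("local", "Simple Job", "$YOUR_SPARK_HOME",
    "target/scala-2.9.2/simple-project_2.9.2-1.0.jar")
  val logData = sc.textFile(logFile, 2).cache()
  // Count the lines containing "a" and "b", respectively
  val numAs = logData.filter(line => line.contains("a")).count()
  val numBs = logData.filter(line => line.contains("b")).count()
  println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
}
```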
@@ -136,9 +136,9 @@ name := "Simple Project"
 version := "1.0"

-scalaVersion := "2.9.2"
+scalaVersion := "{{site.SCALA_VERSION}}"

-libraryDependencies += "org.spark-project" %% "spark-core" % "0.6.0-SNAPSHOT"
+libraryDependencies += "org.spark-project" %% "spark-core" % "{{site.SPARK_VERSION}}"

 {% endhighlight %}

 Of course, for sbt to work correctly, we'll need to lay out `SimpleJob.scala` and `simple.sbt` according to the typical directory structure. Once that is in place, we can create a JAR package containing the job's code, then use `sbt run` to execute our example job.
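To make the sbt step concrete, here is a sketch of the rendered `simple.sbt` together with the expected layout; the `2.9.2` and `0.6.0-SNAPSHOT` values are assumptions about what the site variables expand to at this revision.

```scala
// simple.sbt -- sketch of the rendered build file, assuming
// {{site.SCALA_VERSION}} -> 2.9.2 and {{site.SPARK_VERSION}} -> 0.6.0-SNAPSHOT
//
// Expected layout (sbt's standard structure):
//   ./simple.sbt
//   ./src/main/scala/SimpleJob.scala

name := "Simple Project"

version := "1.0"

scalaVersion := "2.9.2"

libraryDependencies += "org.spark-project" %% "spark-core" % "0.6.0-SNAPSHOT"
```

Note that `%%` makes sbt append the Scala version suffix to the artifact name, yielding `spark-core_2.9.2`; this is why the Maven hunk below spells the suffix out explicitly.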
@@ -209,8 +209,8 @@ Our Maven `pom.xml` file will list Spark as a dependency. Note that Spark artifa
   <dependencies>
     <dependency> <!-- Spark dependency -->
       <groupId>org.spark-project</groupId>
-      <artifactId>spark-core_2.9.2</artifactId>
-      <version>0.6.0-SNAPSHOT</version>
+      <artifactId>spark-core_{{site.SCALA_VERSION}}</artifactId>
+      <version>{{site.SPARK_VERSION}}</version>
     </dependency>
   </dependencies>
 </project>
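For comparison, the rendered Maven dependency would look roughly like this sketch; again, `2.9.2` and `0.6.0-SNAPSHOT` are assumed expansions of the site variables, not values confirmed by this diff.

```xml
<!-- Sketch of the rendered Maven dependency, assuming the site variables
     expand to Scala 2.9.2 and Spark 0.6.0-SNAPSHOT -->
<dependencies>
  <dependency> <!-- Spark dependency -->
    <groupId>org.spark-project</groupId>
    <artifactId>spark-core_2.9.2</artifactId>
    <version>0.6.0-SNAPSHOT</version>
  </dependency>
</dependencies>
```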