path: root/docs/scala-programming-guide.md
author	Holden Karau <holden@pigscanfly.ca>	2014-01-05 22:05:30 -0800
committer	Holden Karau <holden@pigscanfly.ca>	2014-01-05 22:05:30 -0800
commit	d86dc74d796121b61ff43c632791c52dd49ff8ad (patch)
tree	b04601ff15a651093d3a00e54c8f0e4630c72505 /docs/scala-programming-guide.md
parent	df92f1c0254dc9073c18bc7b76f8b9523ecd7cec (diff)
Code review feedback
Diffstat (limited to 'docs/scala-programming-guide.md')
-rw-r--r--  docs/scala-programming-guide.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index 3d0e8923d5..c1ef46a1cd 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -31,7 +31,7 @@ In addition, if you wish to access an HDFS cluster, you need to add a dependency
artifactId = hadoop-client
version = <your-hdfs-version>
-For other build systems, you can run `sbt assembly` to pack Spark and its dependencies into one JAR (`assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop*.jar`), then add this to your CLASSPATH. Set the HDFS version as described [here](index.html#a-note-about-hadoop-versions).
+For other build systems, you can run `sbt/sbt assembly` to pack Spark and its dependencies into one JAR (`assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop*.jar`), then add this to your CLASSPATH. Set the HDFS version as described [here](index.html#a-note-about-hadoop-versions).
Finally, you need to import some Spark classes and implicit conversions into your program. Add the following lines:
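The hunk above shows the Maven-style coordinates for `hadoop-client` that the guide lists (the `groupId` line sits just above this hunk and is not shown here). As a hedged illustration only, not part of this diff, the same dependencies in an sbt build definition might look like the sketch below; the Spark version placeholder is illustrative.

```scala
// Hypothetical build.sbt fragment; names and version placeholders are illustrative.
// Spark core itself, as the guide describes above this hunk.
libraryDependencies += "org.apache.spark" %% "spark-core" % "<your-spark-version>"
// HDFS access: hadoop-client matching the version of your HDFS cluster.
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "<your-hdfs-version>"
```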
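The trailing context line ends before the import snippet it introduces, which lives just below this hunk in the guide. A minimal sketch of what it refers to, assuming the `org.apache.spark` package layout of this era:

```scala
// Likely imports the guide refers to: SparkContext plus the implicit
// conversions from its companion object (e.g. for pair-RDD operations).
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
```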