| author | Prashant Sharma <scrapcodes@gmail.com> | 2014-01-02 14:09:37 +0530 |
|---|---|---|
| committer | Prashant Sharma <scrapcodes@gmail.com> | 2014-01-02 14:09:37 +0530 |
| commit | 6be4c1119493dea2af9734ad8b59fcded31f2676 (patch) | |
| tree | 5005141392dfacd0f4afb8cb9f463668a3900287 /docs/scala-programming-guide.md | |
| parent | 8821c3a5262d6893d2a1fd6ed86afd1213114b4d (diff) | |
Removed sbt folder and changed docs accordingly
Diffstat (limited to 'docs/scala-programming-guide.md')
| -rw-r--r-- | docs/scala-programming-guide.md | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index 56d2a3a4a0..3e7075c382 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -31,7 +31,7 @@ In addition, if you wish to access an HDFS cluster, you need to add a dependency
     artifactId = hadoop-client
     version = <your-hdfs-version>
 
-For other build systems, you can run `sbt/sbt assembly` to pack Spark and its dependencies into one JAR (`assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop*.jar`), then add this to your CLASSPATH. Set the HDFS version as described [here](index.html#a-note-about-hadoop-versions).
+For other build systems, you can run `sbt assembly` to pack Spark and its dependencies into one JAR (`assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop*.jar`), then add this to your CLASSPATH. Set the HDFS version as described [here](index.html#a-note-about-hadoop-versions).
 
 Finally, you need to import some Spark classes and implicit conversions into your program. Add the following lines:
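The workflow the changed line describes can be sketched as a short shell snippet. This is a minimal sketch, not the guide's own text: the `sbt assembly` invocation is taken from the diff, while the Scala/Spark/Hadoop version numbers and the resulting JAR path are illustrative assumptions standing in for the `{{site.SCALA_VERSION}}`/`{{site.SPARK_VERSION}}` template placeholders.

```shell
#!/bin/sh
# Sketch of the build step from the diff above.
# Assumptions (not from the source): version numbers below are illustrative,
# and the script is run from the root of a Spark source checkout.
SCALA_VERSION="2.10"   # assumed Scala version
SPARK_VERSION="0.9.0"  # assumed Spark version
HADOOP_VERSION="2.2.0" # assumed Hadoop version

# Build the assembly JAR (uncomment inside a real Spark checkout;
# after the commit above, the launcher is plain `sbt`, not `sbt/sbt`):
# sbt assembly

# The assembly step produces one JAR under assembly/target/; add it to CLASSPATH.
JAR="assembly/target/scala-${SCALA_VERSION}/spark-assembly-${SPARK_VERSION}-hadoop${HADOOP_VERSION}.jar"
export CLASSPATH="${CLASSPATH}:${JAR}"
echo "$JAR"
```

The actual `sbt assembly` run is left commented out since it only makes sense inside a Spark checkout; the snippet just shows where the resulting JAR lands and how it is appended to `CLASSPATH`.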