author     Matei Zaharia <matei@databricks.com>        2014-05-19 15:02:35 -0700
committer  Tathagata Das <tathagata.das1565@gmail.com> 2014-05-19 15:02:35 -0700
commit     7b70a7071894dd90ea1d0091542b3e13e7ef8d3a (patch)
tree       e24b0a208b0c2290e6f1b6a6beda520f36ed1fa3 /README.md
parent     df0aa8353ab6d3b19d838c6fa95a93a64948309f (diff)
[SPARK-1876] Windows fixes to deal with latest distribution layout changes
- Look for JARs in the right place
- Launch examples the same way as on Unix
- Load datanucleus JARs if they exist
- Don't attempt to parse local paths as URIs in SparkSubmit, since paths
  with C:\ are not valid URIs (see the sketch below)
- Also fixed POM exclusion rules for datanucleus (it wasn't properly
  excluding it, whereas SBT was)

Author: Matei Zaharia <matei@databricks.com>

Closes #819 from mateiz/win-fixes and squashes the following commits:

d558f96 [Matei Zaharia] Fix comment
228577b [Matei Zaharia] Review comments
d3b71c7 [Matei Zaharia] Properly exclude datanucleus files in Maven assembly
144af84 [Matei Zaharia] Update Windows scripts to match latest binary package layout
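The SparkSubmit bullet is the subtle one: `java.net.URI` rejects backslashes
outright, and even a forward-slash form such as `C:/jars/app.jar` parses with
the drive letter as a one-character scheme. A minimal sketch of that kind of
guard, as a hypothetical helper rather than the code from this patch:

    import java.io.File
    import java.net.{URI, URISyntaxException}

    object PathGuard {
      /** Hypothetical helper: resolve a user-supplied string to a URI
        * without mis-parsing Windows paths. Not the patch itself. */
      def resolveURI(path: String): URI = {
        try {
          val uri = new URI(path)
          // A null scheme means a bare relative path; a one-letter scheme
          // is almost certainly a drive letter ("C:/..."), not a protocol.
          if (uri.getScheme == null || uri.getScheme.length == 1) {
            new File(path).getAbsoluteFile.toURI
          } else {
            uri
          }
        } catch {
          // Backslashes, as in "C:\jars\app.jar", are illegal in URIs.
          case _: URISyntaxException => new File(path).getAbsoluteFile.toURI
        }
      }
    }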
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 7
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 9c2e32b90f..6211a5889a 100644
--- a/README.md
+++ b/README.md
@@ -9,13 +9,14 @@ You can find the latest Spark documentation, including a programming
 guide, on the project webpage at <http://spark.apache.org/documentation.html>.
 This README file only contains basic setup instructions.
 
-
 ## Building Spark
 
 Spark is built on Scala 2.10. To build Spark and its example programs, run:
 
     ./sbt/sbt assembly
 
+(You do not need to do this if you downloaded a pre-built package.)
+
 ## Interactive Scala Shell
 
 The easiest way to start using Spark is through the Scala shell:
@@ -41,9 +42,9 @@ And run the following command, which should also return 1000:
 Spark also comes with several sample programs in the `examples` directory.
 To run one of them, use `./bin/run-example <class> [params]`. For example:
 
-    ./bin/run-example org.apache.spark.examples.SparkLR
+    ./bin/run-example SparkPi
 
-will run the Logistic Regression example locally.
+will run the Pi example locally.
 
 You can set the MASTER environment variable when running examples to submit
 examples to a cluster. This can be a mesos:// or spark:// URL,
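As a rough illustration of what the MASTER variable amounts to (a hypothetical
standalone program, not one of the bundled examples): the URL simply ends up
as the master setting of the SparkContext.

    import org.apache.spark.{SparkConf, SparkContext}

    object MiniExample {
      def main(args: Array[String]): Unit = {
        // Assumed fallback of "local" when MASTER is unset; the README
        // does not prescribe a default.
        val master = sys.env.getOrElse("MASTER", "local")
        val conf = new SparkConf().setAppName("MiniExample").setMaster(master)
        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 1000).count()) // prints 1000
        sc.stop()
      }
    }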