author     Matei Zaharia <matei@databricks.com>          2014-05-19 15:02:35 -0700
committer  Tathagata Das <tathagata.das1565@gmail.com>   2014-05-19 15:02:52 -0700
commit     111c121ae97730fa8d87db7f0d17e10879fa76ab
tree       8bfdcace34ab9d4884f9477be654982c5a94a2ba /README.md
parent     ecab8a239dcbb889181c572317581d1c8b627201
[SPARK-1876] Windows fixes to deal with latest distribution layout changes
- Look for JARs in the right place
- Launch examples the same way as on Unix
- Load datanucleus JARs if they exist
- Don't attempt to parse local paths as URIs in SparkSubmit, since paths with C:\ are not valid URIs
- Also fixed POM exclusion rules for datanucleus (it wasn't properly excluding it, whereas SBT was)

Author: Matei Zaharia <matei@databricks.com>

Closes #819 from mateiz/win-fixes and squashes the following commits:

d558f96 [Matei Zaharia] Fix comment
228577b [Matei Zaharia] Review comments
d3b71c7 [Matei Zaharia] Properly exclude datanucleus files in Maven assembly
144af84 [Matei Zaharia] Update Windows scripts to match latest binary package layout

(cherry picked from commit 7b70a7071894dd90ea1d0091542b3e13e7ef8d3a)
Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
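The URI point is worth illustrating. A minimal Scala sketch of the kind of check this implies (not Spark's actual SparkSubmit code; the helper name `isLocalPath` is made up here):

    import java.net.{URI, URISyntaxException}

    // Treat a string as a local path unless it parses as a URI with a real
    // multi-character scheme. A one-letter "scheme" such as the "C" in
    // C:\spark\lib\app.jar is a Windows drive letter, not a URI scheme.
    def isLocalPath(s: String): Boolean =
      try {
        val scheme = new URI(s).getScheme
        scheme == null || scheme.length == 1
      } catch {
        // Backslashes in Windows paths make URI parsing fail outright.
        case _: URISyntaxException => true
      }

    isLocalPath("C:\\spark\\lib\\app.jar")   // true  (drive letter / parse failure)
    isLocalPath("/opt/spark/lib/app.jar")    // true  (no scheme)
    isLocalPath("hdfs://nn:8020/app.jar")    // false (genuine URI)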
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 9c2e32b90f..6211a5889a 100644
--- a/README.md
+++ b/README.md
@@ -9,13 +9,14 @@ You can find the latest Spark documentation, including a programming
 guide, on the project webpage at <http://spark.apache.org/documentation.html>.
 This README file only contains basic setup instructions.
 
-
 ## Building Spark
 
 Spark is built on Scala 2.10. To build Spark and its example programs, run:
 
     ./sbt/sbt assembly
 
+(You do not need to do this if you downloaded a pre-built package.)
+
 ## Interactive Scala Shell
 
 The easiest way to start using Spark is through the Scala shell:
@@ -41,9 +42,9 @@ And run the following command, which should also return 1000:
 Spark also comes with several sample programs in the `examples` directory.
 To run one of them, use `./bin/run-example <class> [params]`. For example:
 
-    ./bin/run-example org.apache.spark.examples.SparkLR
+    ./bin/run-example SparkPi
 
-will run the Logistic Regression example locally.
+will run the Pi example locally.
 
 You can set the MASTER environment variable when running examples to submit
 examples to a cluster. This can be a mesos:// or spark:// URL,