author     Prashant Sharma <prashant.s@imaginea.com>  2014-01-02 18:41:21 +0530
committer  Prashant Sharma <prashant.s@imaginea.com>  2014-01-02 18:41:21 +0530
commit     94b7a7fe37a4b1459bfdbece2a4162451d6a8ac2 (patch)
tree       bebd8917d475fdc08e1e3e583be435562e9c4415 /README.md
parent     b810a85cdddb247e1a104f4daad905b97222ad85 (diff)
run-example -> bin/run-example
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 170e964851..7154165ab1 100644
--- a/README.md
+++ b/README.md
@@ -24,9 +24,9 @@ Once you've built Spark, the easiest way to start using it is the shell:
 Or, for the Python API, the Python shell (`./pyspark`).
 
 Spark also comes with several sample programs in the `examples` directory.
-To run one of them, use `./run-example <class> <params>`. For example:
+To run one of them, use `./bin/run-example <class> <params>`. For example:
 
-    ./run-example org.apache.spark.examples.SparkLR local[2]
+    ./bin/run-example org.apache.spark.examples.SparkLR local[2]
 
 will run the Logistic Regression example locally on 2 CPUs.
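
A minimal usage sketch of the renamed invocation, run from the top of a Spark checkout (the class name and `local[2]` argument are taken from the diff above; that Spark and its examples have already been built is my assumption):

    # Logistic Regression example on 2 local CPUs, using the relocated script
    ./bin/run-example org.apache.spark.examples.SparkLR local[2]

As the README text above notes, this runs the example locally on 2 CPUs.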