author     Josh Rosen <joshrosen@eecs.berkeley.edu>    2013-01-01 21:25:49 -0800
committer  Josh Rosen <joshrosen@eecs.berkeley.edu>    2013-01-01 21:25:49 -0800
commit     ce9f1bbe20eff794cd1d588dc88f109d32588cfe (patch)
tree       ff840eea62e8314dc4cefcaa08534c4b21e544ba /docs/quick-start.md
parent     b58340dbd9a741331fc4c3829b08c093560056c2 (diff)
Add `pyspark` script to replace the other scripts.
Expand the PySpark programming guide.
Diffstat (limited to 'docs/quick-start.md')
-rw-r--r--  docs/quick-start.md  |  4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 8c25df5486..2c7cfbed25 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -258,11 +258,11 @@ We can pass Python functions to Spark, which are automatically serialized along
For jobs that use custom classes or third-party libraries, we can add those code dependencies to SparkContext to ensure that they will be available on remote machines; this is described in more detail in the [Python programming guide](python-programming-guide).
`SimpleJob` is simple enough that we do not need to specify any code dependencies.
-We can run this job using the `run-pyspark` script in `$SPARK_HOME/pyspark`:
+We can run this job using the `pyspark` script:
{% highlight python %}
$ cd $SPARK_HOME
-$ ./pyspark/run-pyspark SimpleJob.py
+$ ./pyspark SimpleJob.py
...
Lines with a: 8422, Lines with b: 1836
{% endhighlight %}
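
For context, the `SimpleJob.py` referenced in this diff is the quick-start example that counts lines containing "a" and "b", producing the sample output shown above. A minimal sketch along those lines, using the PySpark API of this era (the input path and application name here are illustrative assumptions, not taken from this commit):

{% highlight python %}
"""SimpleJob.py: a minimal PySpark job sketch (illustrative, not part of this diff).

Counts lines containing 'a' and 'b' in a text file, matching the
sample output shown above.
"""
from pyspark import SparkContext

logFile = "README.md"  # assumption: any text file reachable from the driver
sc = SparkContext("local", "Simple job")  # run locally

logData = sc.textFile(logFile).cache()  # cache, since the data is scanned twice

numAs = logData.filter(lambda s: 'a' in s).count()
numBs = logData.filter(lambda s: 'b' in s).count()

print("Lines with a: %i, Lines with b: %i" % (numAs, numBs))
{% endhighlight %}

Invoked through the new `pyspark` script as shown in the diff, this would print a line-count summary in the same form as the example output.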