author    Andy Konwinski <andyk@berkeley.edu>  2013-03-13 02:23:44 -0700
committer Andy Konwinski <andyk@berkeley.edu>  2013-03-13 02:23:44 -0700
commit    cf73fbd3054737d9f82fc0af9dc7f2667b37a4a0 (patch)
tree      3cce49199738e5c0192695f514a61efcfe774350 /docs/quick-start.md
parent    b63109763ba695725f8fd2d4078c2ff6e2134d19 (diff)
Fix another broken link in quick start.
Diffstat (limited to 'docs/quick-start.md')
 docs/quick-start.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/quick-start.md b/docs/quick-start.md
index de304cdaff..216f7c9cc5 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -265,7 +265,7 @@ print "Lines with a: %i, lines with b: %i" % (numAs, numBs)
This job simply counts the number of lines containing 'a' and the number containing 'b' in a system log file.
Like in the Scala and Java examples, we use a SparkContext to create RDDs.
We can pass Python functions to Spark, which are automatically serialized along with any variables that they reference.
-For jobs that use custom classes or third-party libraries, we can add those code dependencies to SparkContext to ensure that they will be available on remote machines; this is described in more detail in the [Python programming guide](python-programming-guide).
+For jobs that use custom classes or third-party libraries, we can add those code dependencies to SparkContext to ensure that they will be available on remote machines; this is described in more detail in the [Python programming guide](python-programming-guide.html).
`SimpleJob` is simple enough that we do not need to specify any code dependencies.
We can run this job using the `pyspark` script:
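The diff context above describes a PySpark job that counts lines containing 'a' and lines containing 'b'. In the actual quick start this is done with `SparkContext.textFile` and `RDD.filter(...).count()`; the sketch below mimics only the counting logic in plain Python (the sample `log_lines` list is invented for illustration), so it runs without a Spark installation:

```python
# Plain-Python stand-in for the line-counting logic described in the diff context.
# The real quick-start job would build an RDD with SparkContext.textFile(path)
# and then call .filter(lambda line: "a" in line).count(); here we use an
# in-memory list of lines instead.
log_lines = [
    "a is the first letter",
    "b follows a",
    "no vowels of interest here... well, almost",
    "bbb",
]

num_as = sum(1 for line in log_lines if "a" in line)
num_bs = sum(1 for line in log_lines if "b" in line)

print("Lines with a: %i, lines with b: %i" % (num_as, num_bs))
```

The `filter`-then-`count` pattern translates directly: each `sum(1 for ...)` above corresponds to one filtered RDD's `count()` in the Spark version.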