author    Josh Rosen <joshrosen@eecs.berkeley.edu>  2013-01-01 13:52:14 -0800
committer Josh Rosen <joshrosen@eecs.berkeley.edu>  2013-01-01 13:52:14 -0800
commit    170e451fbdd308ae77065bd9c0f2bd278abf0cb7 (patch)
tree      da3df59e2262dac4b381227d5bc712502249d746 /docs/index.md
parent    6f6a6b79c4c3f3555f8ff427c91e714d02afe8fa (diff)
Minor documentation and style fixes for PySpark.
Diffstat (limited to 'docs/index.md')
-rw-r--r--  docs/index.md | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/docs/index.md b/docs/index.md
index 33ab58a962..848b585333 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -8,7 +8,7 @@ TODO(andyk): Rewrite to make the Java API a first class part of the story.
{% endcomment %}

Spark is a MapReduce-like cluster computing framework designed for low-latency iterative jobs and interactive use from an interpreter.
-It provides clean, language-integrated APIs in Scala, Java, and Python, with a rich array of parallel operators.
+It provides clean, language-integrated APIs in [Scala](scala-programming-guide.html), [Java](java-programming-guide.html), and [Python](python-programming-guide.html), with a rich array of parallel operators.
Spark can run on top of the [Apache Mesos](http://incubator.apache.org/mesos/) cluster manager,
[Hadoop YARN](http://hadoop.apache.org/docs/r2.0.1-alpha/hadoop-yarn/hadoop-yarn-site/YARN.html),
Amazon EC2, or without an independent resource manager ("standalone mode").
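As context for the paragraph above (and for the PySpark docs this commit touches), here is a minimal sketch of what the language-integrated Python API looks like. The app name `WordLengths` and the sample data are invented for illustration; `SparkContext`, `parallelize`, `map`, and `reduce` are the core PySpark calls.

```python
# Minimal PySpark sketch: distribute a Python list and aggregate it
# with parallel operators. Assumes the `pyspark` package is importable
# (e.g. launched via Spark's pyspark shell script or with PYTHONPATH set).
from pyspark import SparkContext

# "local" runs in-process, without any external resource manager.
sc = SparkContext("local", "WordLengths")

words = sc.parallelize(["spark", "runs", "on", "mesos", "yarn", "ec2"])
total_chars = words.map(lambda w: len(w)).reduce(lambda a, b: a + b)
print(total_chars)  # 23
```

Swapping `"local"` for a `spark://...` (standalone) or `mesos://...` master URL is how the same script would target the cluster managers listed above.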
@@ -61,6 +61,11 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
* [Java Programming Guide](java-programming-guide.html): using Spark from Java
* [Python Programming Guide](python-programming-guide.html): using Spark from Python

+**API Docs:**
+
+* [Java/Scala (Scaladoc)](api/core/index.html)
+* [Python (Epydoc)](api/pyspark/index.html)
+
**Deployment guides:**

* [Running Spark on Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
@@ -73,7 +78,6 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).

* [Configuration](configuration.html): customize Spark via its configuration system
* [Tuning Guide](tuning.html): best practices to optimize performance and memory use
-* API Docs: [Java/Scala (Scaladoc)](api/core/index.html) and [Python (Epydoc)](api/pyspark/index.html)
* [Bagel](bagel-programming-guide.html): an implementation of Google's Pregel on Spark
* [Contributing to Spark](contributing-to-spark.html)