author    | Tathagata Das <tathagata.das1565@gmail.com> | 2013-01-15 12:08:51 -0800
committer | Tathagata Das <tathagata.das1565@gmail.com> | 2013-01-15 12:08:51 -0800
commit    | cd1521cfdb3c9dd2bf8ced8907afbbbf33893804 (patch)
tree      | 76fce28a2fca3fcfbbc3a7f4c7b0fe82cfc695c7 /docs/index.md
parent    | 1638fcb0dce296da22ffc90127d5148a8fab745e (diff)
parent    | cb867e9ffb2c5e3d65d50c222fcce3631b94e4dd (diff)
Merge branch 'master' into streaming
Conflicts:
core/src/main/scala/spark/rdd/CoGroupedRDD.scala
core/src/main/scala/spark/rdd/FilteredRDD.scala
docs/_layouts/global.html
docs/index.md
run
Diffstat (limited to 'docs/index.md')
-rw-r--r-- | docs/index.md | 18
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/docs/index.md b/docs/index.md
index 560811ade8..c6ef507cb0 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -7,11 +7,11 @@ title: Spark Overview
 TODO(andyk): Rewrite to make the Java API a first class part of the story.
 {% endcomment %}

-Spark is a MapReduce-like cluster computing framework designed for low-latency iterative jobs and interactive use from an
-interpreter. It provides clean, language-integrated APIs in Scala and Java, with a rich array of parallel operators. Spark can
-run on top of the [Apache Mesos](http://incubator.apache.org/mesos/) cluster manager,
+Spark is a MapReduce-like cluster computing framework designed for low-latency iterative jobs and interactive use from an interpreter.
+It provides clean, language-integrated APIs in [Scala](scala-programming-guide.html), [Java](java-programming-guide.html), and [Python](python-programming-guide.html), with a rich array of parallel operators.
+Spark can run on top of the [Apache Mesos](http://incubator.apache.org/mesos/) cluster manager,
 [Hadoop YARN](http://hadoop.apache.org/docs/r2.0.1-alpha/hadoop-yarn/hadoop-yarn-site/YARN.html),
-Amazon EC2, or without an independent resource manager ("standalone mode").
+Amazon EC2, or without an independent resource manager ("standalone mode").

 # Downloading
@@ -58,8 +58,15 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
 * [Quick Start](quick-start.html): a quick introduction to the Spark API; start here!
 * [Spark Programming Guide](scala-programming-guide.html): an overview of Spark concepts, and details on the Scala API
+* [Streaming Programming Guide](streaming-programming-guide.html): an API preview of Spark Streaming
 * [Java Programming Guide](java-programming-guide.html): using Spark from Java
-* [Streaming Guide](streaming-programming-guide.html): an API preview of Spark Streaming
+* [Python Programming Guide](python-programming-guide.html): using Spark from Python
+
+**API Docs:**
+
+* [Spark Java/Scala (Scaladoc)](api/core/index.html)
+* [Spark Python (Epydoc)](api/pyspark/index.html)
+* [Spark Streaming Java/Scala (Scaladoc)](api/streaming/index.html)

 **Deployment guides:**
@@ -73,7 +80,6 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
 * [Configuration](configuration.html): customize Spark via its configuration system
 * [Tuning Guide](tuning.html): best practices to optimize performance and memory use
-* [API Docs (Scaladoc)](api/core/index.html)
 * [Bagel](bagel-programming-guide.html): an implementation of Google's Pregel on Spark
 * [Contributing to Spark](contributing-to-spark.html)