author     Patrick Wendell <pwendell@apache.org>    2013-09-28 23:44:45 +0000
committer  Patrick Wendell <pwendell@apache.org>    2013-09-28 23:44:45 +0000
commit     1084e2c734bbe813457b9288b99880eb3e0a0e1a (patch)
tree       d7b88982918aa98dd19044f45c72ca2a6b51a2b8 /examples.md
parent     08d4f7700212b606b6e01bfb97ed25fa74e831b3 (diff)
download   spark-website-1084e2c734bbe813457b9288b99880eb3e0a0e1a.tar.gz
           spark-website-1084e2c734bbe813457b9288b99880eb3e0a0e1a.tar.bz2
           spark-website-1084e2c734bbe813457b9288b99880eb3e0a0e1a.zip
Make examples from GitHub more prominent
Diffstat (limited to 'examples.md')
-rw-r--r--  examples.md  5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/examples.md b/examples.md
index 36c8415e6..783643354 100644
--- a/examples.md
+++ b/examples.md
@@ -8,7 +8,10 @@ navigation:
---
<h2>Spark Examples</h2>
-Spark is built around <em>distributed datasets</em> that support two types of parallel operations: transformations, which are lazy and yield another distributed dataset (e.g., <code>map</code>, <code>filter</code>, and <code>join</code>), and actions, which force the computation of a dataset and return a result (e.g., <code>count</code>). The following examples show off some of the available operations and features. Several additional examples are distributed with Spark, both for core Spark ([Scala examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/scala/org/apache/spark/examples), [Java examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/java/org/apache/spark/examples), [Python examples](https://github.com/apache/incubator-spark/tree/master/python/examples)) and streaming Spark ([Scala examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/scala/org/apache/spark/streaming/examples), [Java examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/java/org/apache/spark/streaming/examples)).
+Spark is built around <em>distributed datasets</em> that support two types of parallel operations: transformations, which are lazy and yield another distributed dataset (e.g., <code>map</code>, <code>filter</code>, and <code>join</code>), and actions, which force the computation of a dataset and return a result (e.g., <code>count</code>). The following examples show off some of the available operations and features. Several additional examples are distributed with Spark:
+
+ * Core Spark: [Scala examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/scala/org/apache/spark/examples), [Java examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/java/org/apache/spark/examples), [Python examples](https://github.com/apache/incubator-spark/tree/master/python/examples)
+ * Streaming Spark: [Scala examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/scala/org/apache/spark/streaming/examples), [Java examples](https://github.com/apache/incubator-spark/tree/master/examples/src/main/java/org/apache/spark/streaming/examples)
<h3>Text Search</h3>
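
For context, a minimal Scala sketch (not part of the commit above) of the lazy-transformation versus eager-action distinction described in the changed paragraph. It assumes an existing `SparkContext` named `sc`, and the input path is a placeholder:

```scala
// Build a distributed dataset (RDD) from a text file.
val lines  = sc.textFile("logs.txt")

// Transformations are lazy: each returns a new RDD, and nothing is computed yet.
val errors = lines.filter(line => line.contains("ERROR"))
val pairs  = errors.map(line => (line, 1))

// An action forces the computation and returns a result to the driver.
val total  = errors.count()
```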