path: root/docs/job-scheduling.md
author    Pierre Borckmans <pierre.borckmans@realimpactanalytics.com>    2015-03-19 08:02:06 -0400
committer Sean Owen <sowen@cloudera.com>    2015-03-19 08:02:06 -0400
commit    797f8a000773d848fa52c7fe2eb1b5e5e7f6c55a (patch)
tree      31a2853f71d135d1b54ccc9d5f2a8e373273c926 /docs/job-scheduling.md
parent    2c3f83c34bb8d2c1bf13b33633d8c5a8089545d1 (diff)
[SPARK-6402][DOC] - Remove some references to shark in docs and ec2
EC2 script and job scheduling documentation still referred to Shark. I removed these references. I also removed a remaining `SHARK_VERSION` variable from `ec2-variables.sh`.

Author: Pierre Borckmans <pierre.borckmans@realimpactanalytics.com>

Closes #5083 from pierre-borckmans/remove_refererences_to_shark_in_docs and squashes the following commits:

4e90ffc [Pierre Borckmans] Removed deprecated SHARK_VERSION
caea407 [Pierre Borckmans] Remove shark reference from ec2 script doc
196c744 [Pierre Borckmans] Removed references to Shark
Diffstat (limited to 'docs/job-scheduling.md')
-rw-r--r--  docs/job-scheduling.md  6
1 file changed, 2 insertions, 4 deletions
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 5295e351dd..963e88a3e1 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -14,8 +14,7 @@ runs an independent set of executor processes. The cluster managers that Spark r
facilities for [scheduling across applications](#scheduling-across-applications). Second,
_within_ each Spark application, multiple "jobs" (Spark actions) may be running concurrently
if they were submitted by different threads. This is common if your application is serving requests
-over the network; for example, the [Shark](http://shark.cs.berkeley.edu) server works this way. Spark
-includes a [fair scheduler](#scheduling-within-an-application) to schedule resources within each SparkContext.
+over the network. Spark includes a [fair scheduler](#scheduling-within-an-application) to schedule resources within each SparkContext.
# Scheduling Across Applications
@@ -52,8 +51,7 @@ an application to gain back cores on one node when it has work to do. To use thi
Note that none of the modes currently provide memory sharing across applications. If you would like to share
data this way, we recommend running a single server application that can serve multiple requests by querying
-the same RDDs. For example, the [Shark](http://shark.cs.berkeley.edu) JDBC server works this way for SQL
-queries. In future releases, in-memory storage systems such as [Tachyon](http://tachyon-project.org) will
+the same RDDs. In future releases, in-memory storage systems such as [Tachyon](http://tachyon-project.org) will
provide another approach to share RDDs.
## Dynamic Resource Allocation
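
The paragraph edited in the first hunk describes how multiple Spark actions submitted from different threads run as concurrent jobs inside one SparkContext, arbitrated by the fair scheduler. A minimal sketch of that pattern (not part of this patch; the app name, data set, and thread setup are purely illustrative) might look like:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object FairSchedulingSketch {
  def main(args: Array[String]): Unit = {
    // FIFO is the default scheduling mode; FAIR lets concurrent jobs share
    // executor resources instead of queueing strictly behind one another.
    val conf = new SparkConf()
      .setAppName("fair-scheduling-sketch")
      .set("spark.scheduler.mode", "FAIR")
    val sc = new SparkContext(conf)

    val data = sc.parallelize(1 to 1000000)

    // Two actions submitted from different threads become two concurrent
    // jobs within the same SparkContext; the fair scheduler arbitrates them.
    val sumThread = new Thread(new Runnable {
      def run(): Unit = println("sum = " + data.map(_.toLong).reduce(_ + _))
    })
    val countThread = new Thread(new Runnable {
      def run(): Unit = println("even count = " + data.filter(_ % 2 == 0).count())
    })
    sumThread.start(); countThread.start()
    sumThread.join(); countThread.join()

    sc.stop()
  }
}
```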
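
The second hunk keeps the recommendation that, absent memory sharing across applications, data be shared by running a single server application that answers many requests against the same cached RDDs. The following rough sketch illustrates that shape only; the HDFS path, the keywords, and the `handleRequest` helper are invented for illustration and are not part of the patch or of any Spark API:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.spark.{SparkConf, SparkContext}

object SharedRddServerSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("shared-rdd-server"))

    // Load and cache the data once; every request queries this same RDD,
    // so the in-memory copy is shared by all requests to this one application.
    val records = sc.textFile("hdfs:///data/events").cache()

    val pool = Executors.newFixedThreadPool(8)

    // Hypothetical request handler: each incoming query runs as its own
    // Spark job, submitted from a pool thread against the shared RDD.
    def handleRequest(keyword: String): Unit = {
      pool.submit(new Runnable {
        def run(): Unit = {
          val hits = records.filter(_.contains(keyword)).count()
          println(s"$keyword -> $hits matching records")
        }
      })
    }

    handleRequest("error")
    handleRequest("timeout")

    // A real server would block on a network listener here instead.
    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.MINUTES)
    sc.stop()
  }
}
```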