author    Andy Konwinski <andyk@berkeley.edu>    2012-09-12 16:05:19 -0700
committer Andy Konwinski <andyk@berkeley.edu>    2012-09-12 16:06:18 -0700
commit    4d3a17c8d768a4e76bfb895ce53715434447cb62 (patch)
tree      35d92aab36165b3ec68209622c260ebb9e3e9147 /docs/running-on-mesos.md
parent    49e98500a9b1f93ab3224c4358dbc56f1e37ff35 (diff)
Fixing lots of broken links.
Diffstat (limited to 'docs/running-on-mesos.md')
-rw-r--r--  docs/running-on-mesos.md  14
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index b6bfff9da3..9807228121 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -4,12 +4,12 @@ title: Running Spark on Mesos
---
# Running Spark on Mesos
-To run on a cluster, Spark uses the [[Apache Mesos|http://incubator.apache.org/mesos/]] resource manager. Follow the steps below to install Mesos and Spark:
+To run on a cluster, Spark uses the [Apache Mesos](http://incubator.apache.org/mesos/) resource manager. Follow the steps below to install Mesos and Spark:
### For Spark 0.5:
-1. Download and build Spark using the instructions [[here|Home]].
-2. Download Mesos 0.9.0 from a [[mirror|http://www.apache.org/dyn/closer.cgi/incubator/mesos/mesos-0.9.0-incubating/]].
+1. Download and build Spark using the instructions [here]({{ HOME_DIR }}Home).
+2. Download Mesos 0.9.0 from a [mirror](http://www.apache.org/dyn/closer.cgi/incubator/mesos/mesos-0.9.0-incubating/).
3. Configure Mesos using the `configure` script, passing the location of your `JAVA_HOME` using `--with-java-home`. Mesos comes with "template" configure scripts for different platforms, such as `configure.macosx`, that you can run. See the README file in Mesos for other options. **Note:** If you want to run Mesos without installing it into the default paths on your system (e.g. if you don't have administrative privileges to install it), you should also pass the `--prefix` option to `configure` to tell it where to install. For example, pass `--prefix=/home/user/mesos`. By default the prefix is `/usr/local`.
4. Build Mesos using `make`, and then install it using `make install`.
5. Create a file called `spark-env.sh` in Spark's `conf` directory, by copying `conf/spark-env.sh.template`, and add the following lines in it:
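As a rough sketch of steps 3 and 4 above (the `spark-env.sh` lines themselves fall outside this hunk's context, so they are not reproduced here), a typical configure-and-build sequence might look like the following; the `--prefix` path is a placeholder, not a value from this diff:
<pre>
# Hypothetical build sequence for steps 3-4; adjust the paths for your system.
./configure --with-java-home=$JAVA_HOME --prefix=/home/user/mesos
make
make install   # installs under the --prefix path given above
</pre>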
@@ -26,7 +26,7 @@ To run on a cluster, Spark uses the [[Apache Mesos|http://incubator.apache.org/m
### For Spark versions before 0.5:
-1. Download and build Spark using the instructions [[here|Home]].
+1. Download and build Spark using the instructions [here]({{ HOME_DIR }}Home).
2. Download either revision 1205738 of Mesos if you're using the master branch of Spark, or the pre-protobuf branch of Mesos if you're using Spark 0.3 or earlier (note that for new users, _we recommend the master branch instead of 0.3_). For revision 1205738 of Mesos, use:
<pre>
svn checkout -r 1205738 http://svn.apache.org/repos/asf/incubator/mesos/trunk mesos
@@ -35,20 +35,20 @@ For the pre-protobuf branch (for Spark 0.3 and earlier), use:
<pre>git clone git://github.com/mesos/mesos
cd mesos
git checkout --track origin/pre-protobuf</pre>
-3. Configure Mesos using the `configure` script, passing the location of your `JAVA_HOME` using `--with-java-home`. Mesos comes with "template" configure scripts for different platforms, such as `configure.template.macosx`, so you can just run the one on your platform if it exists. See the [[Mesos wiki|https://github.com/mesos/mesos/wiki]] for other configuration options.
+3. Configure Mesos using the `configure` script, passing the location of your `JAVA_HOME` using `--with-java-home`. Mesos comes with "template" configure scripts for different platforms, such as `configure.template.macosx`, so you can just run the one on your platform if it exists. See the [Mesos wiki](https://github.com/mesos/mesos/wiki) for other configuration options.
4. Build Mesos using `make`.
5. In Spark's `conf/spark-env.sh` file, add `export MESOS_HOME=<path to Mesos directory>`. If you don't have a `spark-env.sh`, copy `conf/spark-env.sh.template`. You should also set `SCALA_HOME` there if it's not on your system's default path.
6. Copy Spark and Mesos to the _same_ path on all the nodes in the cluster.
7. Configure Mesos for deployment:
* On your master node, edit `MESOS_HOME/conf/masters` to list your master and `MESOS_HOME/conf/slaves` to list the slaves. Also, edit `MESOS_HOME/conf/mesos.conf` and add the line `failover_timeout=1` to change a timeout parameter that is too high by default.
* Run `MESOS_HOME/deploy/start-mesos` to start it up. If all goes well, you should see Mesos's web UI on port 8080 of the master machine.
- * See Mesos's [[deploy instructions|https://github.com/mesos/mesos/wiki/Deploy-Scripts]] for more information on deploying it.
+ * See Mesos's [deploy instructions](https://github.com/mesos/mesos/wiki/Deploy-Scripts) for more information on deploying it.
8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `master@HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your job's code as the fourth (these will automatically get copied to the workers). For example:
<pre>new SparkContext("master@HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))</pre>
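To make step 8 concrete, here is a minimal, self-contained driver sketch assuming the Spark 0.5-era `spark` package; `HOST`, the Spark path, and `my-job.jar` are placeholders, not values taken from this document:
<pre>
// Hypothetical driver for step 8; HOST, /home/user/spark, and my-job.jar
// are placeholders to be replaced with your own values.
import spark.SparkContext

object MyJob {
  def main(args: Array[String]) {
    val sc = new SparkContext("master@HOST:5050", "My Job Name",
                              "/home/user/spark", List("my-job.jar"))
    // Trivial sanity check: distribute a range and count its elements.
    println("Count: " + sc.parallelize(1 to 1000).count())
  }
}
</pre>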
## Running on Amazon EC2
-If you want to run Spark on Amazon EC2, there's an easy way to launch a cluster with Mesos, Spark, and HDFS pre-configured: the [[EC2 launch scripts|Running-Spark-on-Amazon-EC2]]. This will get you a cluster in about five minutes without any configuration on your part.
+If you want to run Spark on Amazon EC2, there's an easy way to launch a cluster with Mesos, Spark, and HDFS pre-configured: the [EC2 launch scripts]({{HOME_PATH}}running-on-amazon-ec2.html). This will get you a cluster in about five minutes without any configuration on your part.
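For reference, launching such a cluster from the `ec2` directory of a Spark checkout looked roughly like the command below in this era; the keypair name, key file, slave count, and cluster name are all placeholders, and the authoritative flags are the ones described on the linked page:
<pre>
# Hypothetical invocation; substitute your own keypair, key file, and names.
./spark-ec2 -k my-keypair -i my-keypair.pem -s 2 launch my-spark-cluster
</pre>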
## Running Alongside Hadoop