path: root/docs/running-on-mesos.md
author	Andy Konwinski <andyk@berkeley.edu>	2012-09-12 23:05:47 -0700
committer	Andy Konwinski <andyk@berkeley.edu>	2012-09-12 23:25:07 -0700
commit	ca2c999e0fd97a29b20bd3990b6e57d9e0db5d0a (patch)
tree	f90eb6c5cd2bc2a342490d305677f90f7e936c0f /docs/running-on-mesos.md
parent	c4db09ea76802df22f52826e228f9d15c0cf13d9 (diff)
Making the link to api scaladocs work and migrating other code snippets
to use pygments syntax highlighting.
Diffstat (limited to 'docs/running-on-mesos.md')
-rw-r--r--	docs/running-on-mesos.md | 23
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 947de13855..063ffe1e1d 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -21,19 +21,27 @@ To run on a cluster, Spark uses the [Apache Mesos](http://incubator.apache.org/m
* Run `<prefix>/sbin/mesos-start-cluster.sh` on your master to start Mesos. If all goes well, you should see Mesos's web UI on port 8080 of the master machine.
* See Mesos's README file for more information on deploying it.
8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your job's code as the fourth (these will automatically get copied to the workers). For example:
-<pre>new SparkContext("HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))</pre>
+
+{% highlight scala %}
+new SparkContext("HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))
+{% endhighlight %}
### For Spark versions before 0.5:
1. Download and build Spark using the instructions [here]({{ HOME_DIR }}Home).
2. Download either revision 1205738 of Mesos if you're using the master branch of Spark, or the pre-protobuf branch of Mesos if you're using Spark 0.3 or earlier (note that for new users, _we recommend the master branch instead of 0.3_). For revision 1205738 of Mesos, use:
-<pre>
+
+{% highlight bash %}
svn checkout -r 1205738 http://svn.apache.org/repos/asf/incubator/mesos/trunk mesos
-</pre>
+{% endhighlight %}
+
For the pre-protobuf branch (for Spark 0.3 and earlier), use:
-<pre>git clone git://github.com/mesos/mesos
+{% highlight bash %}
+git clone git://github.com/mesos/mesos
cd mesos
-git checkout --track origin/pre-protobuf</pre>
+git checkout --track origin/pre-protobuf
+{% endhighlight %}
+
3. Configure Mesos using the `configure` script, passing the location of your `JAVA_HOME` using `--with-java-home`. Mesos comes with "template" configure scripts for different platforms, such as `configure.template.macosx`, so you can just run the one on your platform if it exists. See the [Mesos wiki](https://github.com/mesos/mesos/wiki) for other configuration options.
4. Build Mesos using `make`.
5. In Spark's `conf/spark-env.sh` file, add `export MESOS_HOME=<path to Mesos directory>`. If you don't have a `spark-env.sh`, copy `conf/spark-env.sh.template`. You should also set `SCALA_HOME` there if it's not on your system's default path.
@@ -43,7 +51,10 @@ git checkout --track origin/pre-protobuf</pre>
* Run `MESOS_HOME/deploy/start-mesos` to start it up. If all goes well, you should see Mesos's web UI on port 8080 of the master machine.
* See Mesos's [deploy instructions](https://github.com/mesos/mesos/wiki/Deploy-Scripts) for more information on deploying it.
8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `master@HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your job's code as the fourth (these will automatically get copied to the workers). For example:
-<pre>new SparkContext("master@HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))</pre>
+
+{% highlight scala %}
+new SparkContext("master@HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))
+{% endhighlight %}
## Running on Amazon EC2