From ca2c999e0fd97a29b20bd3990b6e57d9e0db5d0a Mon Sep 17 00:00:00 2001
From: Andy Konwinski
Date: Wed, 12 Sep 2012 23:05:47 -0700
Subject: Making the link to api scaladocs work and migrating other code
 snippets to use pygments syntax highlighting.

---
 docs/running-on-mesos.md | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

(limited to 'docs/running-on-mesos.md')

diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 947de13855..063ffe1e1d 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -21,19 +21,27 @@ To run on a cluster, Spark uses the [Apache Mesos](http://incubator.apache.org/m
 * Run `/sbin/mesos-start-cluster.sh` on your master to start Mesos. If all goes well, you should see Mesos's web UI on port 8080 of the master machine.
 * See Mesos's README file for more information on deploying it.
 8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your JAR's code as the fourth (these will automatically get copied to the workers). For example:
-<pre>new SparkContext("HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))</pre>
+
+{% highlight scala %}
+new SparkContext("HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))
+{% endhighlight %}
 
 ### For Spark versions before 0.5:
 
 1. Download and build Spark using the instructions [here]({{ HOME_DIR }}Home).
 2. Download either revision 1205738 of Mesos if you're using the master branch of Spark, or the pre-protobuf branch of Mesos if you're using Spark 0.3 or earlier (note that for new users, _we recommend the master branch instead of 0.3_). For revision 1205738 of Mesos, use:
-<pre>
+
+{% highlight bash %}
 svn checkout -r 1205738 http://svn.apache.org/repos/asf/incubator/mesos/trunk mesos
-
+{% endhighlight %}
+
 For the pre-protobuf branch (for Spark 0.3 and earlier), use:
-<pre>git clone git://github.com/mesos/mesos
+{% highlight bash %}
+git clone git://github.com/mesos/mesos
 cd mesos
-git checkout --track origin/pre-protobuf</pre>
+git checkout --track origin/pre-protobuf
+{% endhighlight %}
+
 3. Configure Mesos using the `configure` script, passing the location of your `JAVA_HOME` using `--with-java-home`. Mesos comes with "template" configure scripts for different platforms, such as `configure.template.macosx`, so you can just run the one on your platform if it exists. See the [Mesos wiki](https://github.com/mesos/mesos/wiki) for other configuration options.
 4. Build Mesos using `make`.
 5. In Spark's `conf/spark-env.sh` file, add `export MESOS_HOME=<path to Mesos directory>`. If you don't have a `spark-env.sh`, copy `conf/spark-env.sh.template`. You should also set `SCALA_HOME` there if it's not on your system's default path.
@@ -43,7 +51,10 @@ git checkout --track origin/pre-protobuf
 * Run `MESOS_HOME/deploy/start-mesos` to start it up. If all goes well, you should see Mesos's web UI on port 8080 of the master machine.
 * See Mesos's [deploy instructions](https://github.com/mesos/mesos/wiki/Deploy-Scripts) for more information on deploying it.
 8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `master@HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your JAR's code as the fourth (these will automatically get copied to the workers). For example:
-<pre>new SparkContext("master@HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))</pre>
+
+{% highlight scala %}
+new SparkContext("master@HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar"))
+{% endhighlight %}
 
 ## Running on Amazon EC2
 
-- 
cgit v1.2.3
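
For reference, the constructor call that both migrated snippets document can be used from a standalone program along these lines. This is a minimal sketch assuming the Spark 0.5-era `spark.SparkContext` API (master URL, job name, Spark home, JAR list); the object name, paths, and JAR name are placeholders:

{% highlight scala %}
// Minimal sketch of a job run against a Mesos cluster (Spark 0.5-era API
// assumed; object name, paths, and jar name are illustrative placeholders).
import spark.SparkContext

object MesosExample {
  def main(args: Array[String]) {
    // Spark 0.5+ expects "HOST:5050"; earlier releases expect "master@HOST:5050".
    val sc = new SparkContext(
      "HOST:5050",          // Mesos master
      "My Job Name",        // job name shown in the web UI
      "/home/user/spark",   // location of Spark on the worker nodes
      List("my-job.jar"))   // jars copied to the workers

    // Any RDD operation works from here on; count even numbers as a smoke test.
    val evens = sc.parallelize(1 to 1000, 2).filter(_ % 2 == 0).count()
    println("Counted " + evens + " even numbers")
  }
}
{% endhighlight %}

As the two hunks above show, the only difference between the pre-0.5 and 0.5+ versions of this call is the format of the master URL passed as the first parameter.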