From 6adc7c965f35ede8fb09452e278b2f17981ff600 Mon Sep 17 00:00:00 2001
From: Matei Zaharia
Date: Fri, 16 Nov 2012 20:48:35 -0800
Subject: Doc fix

---
 docs/running-on-mesos.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 97564d7426..f4a3eb667c 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -15,7 +15,7 @@ Spark can run on private clusters managed by the [Apache Mesos](http://incubator
 6. Copy Spark and Mesos to the _same_ paths on all the nodes in the cluster (or, for Mesos, `make install` on every node).
 7. Configure Mesos for deployment:
    * On your master node, edit `<prefix>/var/mesos/deploy/masters` to list your master and `<prefix>/var/mesos/deploy/slaves` to list the slaves, where `<prefix>` is the prefix where you installed Mesos (`/usr/local` by default).
-   * On all nodes, edit `<prefix>/var/mesos/deploy/mesos.conf` and add the line `master=HOST:5050`, where HOST is your master node.
+   * On all nodes, edit `<prefix>/var/mesos/conf/mesos.conf` and add the line `master=HOST:5050`, where HOST is your master node.
    * Run `<prefix>/sbin/mesos-start-cluster.sh` on your master to start Mesos. If all goes well, you should see Mesos's web UI on port 8080 of the master machine.
    * See Mesos's README file for more information on deploying it.
 8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `mesos://HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your JAR's code as the fourth (these will automatically get copied to the workers). For example:
--
cgit v1.2.3
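
The code example that step 8 introduces ("For example:") falls outside this hunk's context, so it does not appear in the patch. As a rough sketch only, assuming the `SparkContext` constructor of this Spark release takes a master URL, a job name, a Spark home path, and a list of JARs, and lives in the `spark` package (the host, paths, and JAR name below are placeholders, not values from the patch), the call would look something like:

```scala
import spark.SparkContext

// Placeholder values: substitute your Mesos master host, Spark install path, and job JAR.
val sc = new SparkContext(
  "mesos://HOST:5050",        // 1st arg: the Mesos master URL
  "My Job",                   // 2nd arg: a name for the job
  "/home/user/spark",         // 3rd arg: where Spark is installed on the worker nodes
  List("target/my-job.jar"))  // 4th arg: JARs with the job's code, copied to the workers
```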