-rw-r--r--  docs/running-on-mesos.md |  2 +-
-rwxr-xr-x  ec2/spark_ec2.py         | 14 ++++++++++++++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 97564d7426..f4a3eb667c 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -15,7 +15,7 @@ Spark can run on private clusters managed by the [Apache Mesos](http://incubator
 6. Copy Spark and Mesos to the _same_ paths on all the nodes in the cluster (or, for Mesos, `make install` on every node).
 7. Configure Mesos for deployment:
    * On your master node, edit `<prefix>/var/mesos/deploy/masters` to list your master and `<prefix>/var/mesos/deploy/slaves` to list the slaves, where `<prefix>` is the prefix where you installed Mesos (`/usr/local` by default).
-   * On all nodes, edit `<prefix>/var/mesos/deploy/mesos.conf` and add the line `master=HOST:5050`, where HOST is your master node.
+   * On all nodes, edit `<prefix>/var/mesos/conf/mesos.conf` and add the line `master=HOST:5050`, where HOST is your master node.
    * Run `<prefix>/sbin/mesos-start-cluster.sh` on your master to start Mesos. If all goes well, you should see Mesos's web UI on port 8080 of the master machine.
    * See Mesos's README file for more information on deploying it.
 8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `mesos://HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your JAR's code as the fourth (these will automatically get copied to the workers). For example:
diff --git a/ec2/spark_ec2.py b/ec2/spark_ec2.py
index 2ca4d8020d..17276db6e5 100755
--- a/ec2/spark_ec2.py
+++ b/ec2/spark_ec2.py
@@ -509,6 +509,20 @@ def main():
         print "Terminating zoo..."
         for inst in zoo_nodes:
           inst.terminate()
+      # Delete security groups as well
+      group_names = [cluster_name + "-master", cluster_name + "-slaves", cluster_name + "-zoo"]
+      groups = conn.get_all_security_groups()
+      for group in groups:
+        if group.name in group_names:
+          print "Deleting security group " + group.name
+          # Delete individual rules before deleting group to remove dependencies
+          for rule in group.rules:
+            for grant in rule.grants:
+              group.revoke(ip_protocol=rule.ip_protocol,
+                           from_port=rule.from_port,
+                           to_port=rule.to_port,
+                           src_group=grant)
+          conn.delete_security_group(group.name)
   elif action == "login":
     (master_nodes, slave_nodes, zoo_nodes) = get_existing_cluster(
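
The reason the patch revokes rules before deleting: EC2 refuses to delete a security group that is still referenced by another group's rules, and the `-master`, `-slaves`, and `-zoo` groups grant access to one another. Below is a minimal standalone sketch of the same revoke-then-delete pattern, using the boto calls that appear in the patch (`get_all_security_groups`, `revoke`, `delete_security_group`); the `delete_cluster_groups` helper name, the region, and the example cluster name are illustrative and not part of the change.

    from boto import ec2

    def delete_cluster_groups(conn, cluster_name):
        """Revoke all rules, then delete the cluster's security groups.

        Sketch of the pattern added to spark_ec2.py; the helper name and the
        standalone connection below are illustrative only.
        """
        group_names = [cluster_name + suffix for suffix in ("-master", "-slaves", "-zoo")]
        for group in conn.get_all_security_groups():
            if group.name not in group_names:
                continue
            # Rules granting access to the other cluster groups create
            # dependencies, so revoke every rule before deleting the group.
            for rule in group.rules:
                for grant in rule.grants:
                    group.revoke(ip_protocol=rule.ip_protocol,
                                 from_port=rule.from_port,
                                 to_port=rule.to_port,
                                 src_group=grant)
            print("Deleting security group " + group.name)
            conn.delete_security_group(group.name)

    if __name__ == "__main__":
        # Illustrative connection; credentials come from the usual boto config/env.
        conn = ec2.connect_to_region("us-east-1")
        delete_cluster_groups(conn, "my-spark-cluster")

In the patch itself this logic runs inline in `main()` immediately after the instances are terminated, rather than in a separate helper.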