author     Andy Konwinski <andyk@berkeley.edu>  2012-09-12 16:05:19 -0700
committer  Andy Konwinski <andyk@berkeley.edu>  2012-09-12 16:06:18 -0700
commit     4d3a17c8d768a4e76bfb895ce53715434447cb62 (patch)
tree       35d92aab36165b3ec68209622c260ebb9e3e9147 /docs/ec2-scripts.md
parent     49e98500a9b1f93ab3224c4358dbc56f1e37ff35 (diff)
Fixing lots of broken links.
Diffstat (limited to 'docs/ec2-scripts.md')

 -rw-r--r--  docs/ec2-scripts.md | 14

1 file changed, 7 insertions, 7 deletions
diff --git a/docs/ec2-scripts.md b/docs/ec2-scripts.md
index 35d28c47d0..6e058ac19b 100644
--- a/docs/ec2-scripts.md
+++ b/docs/ec2-scripts.md
@@ -122,11 +122,11 @@
 root partitions and their `persistent-hdfs`. Stopped machines will not cost
 you any EC2 cycles, but ***will*** continue to cost money for EBS storage.

-- To stop one of your clusters, go into the `ec2` directory and run
+- To stop one of your clusters, go into the `ec2` directory and run
 `./spark-ec2 stop <cluster-name>`.
-- To restart it later, run
+- To restart it later, run
 `./spark-ec2 -i <key-file> start <cluster-name>`.
-- To ultimately destroy the cluster and stop consuming EBS space, run
+- To ultimately destroy the cluster and stop consuming EBS space, run
 `./spark-ec2 destroy <cluster-name>` as described in the previous
 section.
@@ -137,10 +137,10 @@ Limitations
 It should not be hard to make it launch VMs in other zones, but you will
 need to create your own AMIs in them.
 - Support for "cluster compute" nodes is limited -- there's no way to specify a
-  locality group. However, you can launch slave nodes in your `<clusterName>-slaves`
-  group manually and then use `spark-ec2 launch --resume` to start a cluster with
-  them.
+  locality group. However, you can launch slave nodes in your
+  `<clusterName>-slaves` group manually and then use `spark-ec2 launch
+  --resume` to start a cluster with them.
 - Support for spot instances is limited.

 If you have a patch or suggestion for one of these limitations, feel free to
-[[contribute|Contributing to Spark]] it!
+[
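The commands touched by this diff form the stop/start/destroy lifecycle of a spark-ec2 cluster. A minimal sketch of a session, assuming a hypothetical cluster named `my-cluster` and a hypothetical key file `my-key.pem`:

```shell
# Run from the ec2/ directory of a Spark checkout.

# Stop the cluster: instances halt, but EBS volumes keep accruing charges.
./spark-ec2 stop my-cluster

# Restart it later; the key file is needed to reconnect to the nodes.
./spark-ec2 -i my-key.pem start my-cluster

# Destroy the cluster entirely to stop paying for EBS storage.
./spark-ec2 destroy my-cluster
```

Note the asymmetry the diff documents: `stop` only halts the instances, so storage costs continue until `destroy` is run.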