path: root/docs/ec2-scripts.md
author    Matei Zaharia <matei@eecs.berkeley.edu>  2012-09-12 19:38:15 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>  2012-09-12 19:38:15 -0700
commit    35e17be8408d126e8daa2ba6a42508074917e681 (patch)
tree      8813df080f1c04e276c0134c97f24c55d4d43cb7 /docs/ec2-scripts.md
parent    b4dfa25c8a6dc242cf36b5558ed19672f0ea99c3 (diff)
parent    c92e6169cf83d0fb87220999db993869912e6438 (diff)
Merge branch 'dev' of github.com:mesos/spark into dev
Diffstat (limited to 'docs/ec2-scripts.md')
-rw-r--r--  docs/ec2-scripts.md  14
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/docs/ec2-scripts.md b/docs/ec2-scripts.md
index 35d28c47d0..73578c8457 100644
--- a/docs/ec2-scripts.md
+++ b/docs/ec2-scripts.md
@@ -122,11 +122,11 @@ root partitions and their `persistent-hdfs`. Stopped machines will not
cost you any EC2 cycles, but ***will*** continue to cost money for EBS
storage.
-- To stop one of your clusters, go into the `ec2` directory and run
+- To stop one of your clusters, go into the `ec2` directory and run
`./spark-ec2 stop <cluster-name>`.
-- To restart it later, run
+- To restart it later, run
`./spark-ec2 -i <key-file> start <cluster-name>`.
-- To ultimately destroy the cluster and stop consuming EBS space, run
+- To ultimately destroy the cluster and stop consuming EBS space, run
`./spark-ec2 destroy <cluster-name>` as described in the previous
section.
@@ -137,10 +137,10 @@ Limitations
It should not be hard to make it launch VMs in other zones, but you will need
to create your own AMIs in them.
- Support for "cluster compute" nodes is limited -- there's no way to specify a
- locality group. However, you can launch slave nodes in your `<clusterName>-slaves`
- group manually and then use `spark-ec2 launch --resume` to start a cluster with
- them.
+ locality group. However, you can launch slave nodes in your
+ `<clusterName>-slaves` group manually and then use `spark-ec2 launch
+ --resume` to start a cluster with them.
- Support for spot instances is limited.
If you have a patch or suggestion for one of these limitations, feel free to
-[[contribute|Contributing to Spark]] it!
+[contribute]({{HOME_PATH}}contributing-to-spark.html) it!
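For reference, the stop/start/destroy lifecycle that the patched documentation describes can be sketched as a shell session. The cluster name and key file below are placeholders, and the commands are printed rather than executed, since running them for real requires AWS credentials and an existing cluster launched from the `ec2` directory:

```shell
#!/bin/bash
# Hypothetical values -- substitute your own cluster name and EC2 key file.
CLUSTER="my-spark-cluster"
KEY_FILE="$HOME/.ssh/mykey.pem"

# The three lifecycle commands from docs/ec2-scripts.md, run from the ec2/ directory:
STOP_CMD="./spark-ec2 stop $CLUSTER"                 # stop instances; EBS storage still billed
START_CMD="./spark-ec2 -i $KEY_FILE start $CLUSTER"  # restart a stopped cluster
DESTROY_CMD="./spark-ec2 destroy $CLUSTER"           # terminate and stop consuming EBS space

# Print the commands instead of running them.
printf '%s\n' "$STOP_CMD" "$START_CMD" "$DESTROY_CMD"
```

Note that `start` takes the `-i <key-file>` flag because it must SSH back into the restarted machines, whereas `stop` and `destroy` only make EC2 API calls.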