path: root/docs/spark-standalone.md
author      Andy Konwinski <andyk@berkeley.edu>  2012-10-08 10:13:26 -0700
committer   Andy Konwinski <andyk@berkeley.edu>  2012-10-08 10:30:38 -0700
commit      45d03231d0961677ea0372d36977cecf21ab62d0 (patch)
tree        0928e51cf925b7b9baeda863e99dd936476a28d5 /docs/spark-standalone.md
parent      efc5423210d1aadeaea78273a4a8f10425753079 (diff)
Adds liquid variables to the docs templating system so that they can be used
throughout the docs: SPARK_VERSION, SCALA_VERSION, and MESOS_VERSION. To use one, write e.g. {{site.SPARK_VERSION}}. Also removes uses of {{HOME_PATH}}, which were being resolved to "" by the templating system anyway.
Diffstat (limited to 'docs/spark-standalone.md')
-rw-r--r--  docs/spark-standalone.md | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 7bad006a23..ae630a0371 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -68,7 +68,7 @@ Finally, the following configuration options can be passed to the master and wor
To launch a Spark standalone cluster with the deploy scripts, you need to set up two files, `conf/spark-env.sh` and `conf/slaves`. The `conf/spark-env.sh` file lets you specify global settings for the master and slave instances, such as memory, or port numbers to bind to, while `conf/slaves` is a list of slave nodes. The system requires that all the slave machines have the same configuration files, so *copy these files to each machine*.
-In `conf/spark-env.sh`, you can set the following parameters, in addition to the [standard Spark configuration settings]({{HOME_PATH}}configuration.html):
+In `conf/spark-env.sh`, you can set the following parameters, in addition to the [standard Spark configuration settings](configuration.html):
<table class="table">
<tr><th style="width:21%">Environment Variable</th><th>Meaning</th></tr>
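For illustration only (the table of variables is truncated by the diff context above, and the hostnames and values below are assumptions rather than part of this commit), a minimal pair of deploy files might look like this:

```sh
# conf/slaves -- one worker hostname per line (hypothetical hosts):
#   worker1.example.com
#   worker2.example.com

# conf/spark-env.sh -- read on the master and on every worker;
# variable names follow the standalone docs, the values are made up.
export SPARK_MASTER_PORT=7077     # port for the master to bind to
export SPARK_WORKER_MEMORY=4g     # memory each worker may allocate to jobs
export SPARK_WORKER_PORT=9999     # port for each worker to bind to
```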
@@ -123,7 +123,7 @@ Note that the scripts must be executed on the machine you want to run the Spark
# Connecting a Job to the Cluster
To run a job on the Spark cluster, simply pass the `spark://IP:PORT` URL of the master to the [`SparkContext`
-constructor]({{HOME_PATH}}scala-programming-guide.html#initializing-spark).
+constructor](scala-programming-guide.html#initializing-spark).
To run an interactive Spark shell against the cluster, run the following command:
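The command itself falls outside the diff context shown above; as a rough sketch (the master address is a placeholder, and 7077 is the default standalone master port), pointing the interactive shell at the cluster looks like:

```sh
# Run the Spark shell against a standalone master (sketch, not part of this diff).
# Replace IP with the master's address, printed when the master starts up.
MASTER=spark://IP:7077 ./spark-shell
```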