path: root/docs
authorMridul Muralidharan <mridul@gmail.com>2013-05-16 17:50:22 +0530
committerMridul Muralidharan <mridul@gmail.com>2013-05-16 17:50:22 +0530
commitf16c781709f9e108d9fe8ac052fb55146ce8a14f (patch)
tree913a2cf2f4e49a680dc26d33247effc849208d5c /docs
parentfeddd2530ddfac7a01b03c9113b29945ec0e9a82 (diff)
Fix documentation to use yarn-standalone as master
Diffstat (limited to 'docs')
-rw-r--r-- docs/running-on-yarn.md | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 41c0b235dd..2e46ff0ed1 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -69,7 +69,7 @@ For example:
SPARK_JAR=./core/target/spark-core-assembly-{{site.SPARK_VERSION}}.jar ./run spark.deploy.yarn.Client \
--jar examples/target/scala-{{site.SCALA_VERSION}}/spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}.jar \
--class spark.examples.SparkPi \
- --args standalone \
+ --args yarn-standalone \
--num-workers 3 \
--master-memory 4g \
--worker-memory 2g \
@@ -79,7 +79,7 @@ The above starts a YARN Client program which periodically polls the Application
# Important Notes
-- When your application instantiates a Spark context it must use a special "standalone" master url. This starts the scheduler without forcing it to connect to a cluster. A good way to handle this is to pass "standalone" as an argument to your program, as shown in the example above.
+- When your application instantiates a Spark context it must use a special "yarn-standalone" master url. This starts the scheduler without forcing it to connect to a cluster. A good way to handle this is to pass "yarn-standalone" as an argument to your program, as shown in the example above.
- We do not request container resources based on the number of cores. Thus the number of cores given via command line arguments cannot be guaranteed.
- Currently, we have not yet integrated with Hadoop security. If --user is present, the specified hadoop_user will be used to run the tasks on the cluster. If unspecified, the current user will be used (which should be valid in the cluster).
Once Hadoop security support is added, and if the Hadoop cluster has security enabled, additional restrictions will apply via the delegation tokens passed.
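The pattern the patched docs describe — passing "yarn-standalone" through --args and using it as the master URL — can be sketched in Scala. This is a hypothetical launcher for illustration; `SparkPiLauncher` and its default value are assumptions, not part of the patch:

```scala
// Hypothetical sketch of an application entry point that takes the
// master URL from its first program argument, as the docs recommend
// passing "yarn-standalone" via --args on the Client command line.
object SparkPiLauncher {
  // Fall back to "yarn-standalone" if no argument is supplied
  // (illustrative default, not mandated by the docs).
  def masterFrom(args: Array[String]): String =
    if (args.nonEmpty) args(0) else "yarn-standalone"

  def main(args: Array[String]): Unit = {
    val master = masterFrom(args)
    // In the Spark version this patch targets, the context would then be
    // created with that master, e.g.:
    //   val sc = new SparkContext(master, "SparkPi")
    println(s"Would start SparkContext with master: $master")
  }
}
```

This keeps the master URL out of the application code, so the same jar runs unmodified whether the scheduler is started under YARN or elsewhere.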