path: root/docs/running-on-yarn.md
author    Y.CORP.YAHOO.COM\tgraves <tgraves@thatenemy-lm.champ.corp.yahoo.com>    2013-08-26 14:29:24 -0500
committer Y.CORP.YAHOO.COM\tgraves <tgraves@thatenemy-lm.champ.corp.yahoo.com>    2013-08-26 14:29:24 -0500
commit    6dd64e8bb2256b56e0908c628ebdb3b533adf432 (patch)
tree      9eb1429738282f867e321b556047f41ea3e55897 /docs/running-on-yarn.md
parent    dfb4c697bcfcbbe7e0959894244e71f38edd79f9 (diff)
download  spark-6dd64e8bb2256b56e0908c628ebdb3b533adf432.tar.gz
          spark-6dd64e8bb2256b56e0908c628ebdb3b533adf432.tar.bz2
          spark-6dd64e8bb2256b56e0908c628ebdb3b533adf432.zip
Update docs and remove old reference to --user option
Diffstat (limited to 'docs/running-on-yarn.md')
-rw-r--r--  docs/running-on-yarn.md  4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 6bada9bdd7..7a344b3ce2 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -57,7 +57,6 @@ The command to launch the YARN Client is as follows:
--master-memory <MEMORY_FOR_MASTER> \
--worker-memory <MEMORY_PER_WORKER> \
--worker-cores <CORES_PER_WORKER> \
- --user <hadoop_user> \
--queue <queue_name>
For example:
@@ -77,5 +76,4 @@ The above starts a YARN Client program which periodically polls the Application
- When your application instantiates a Spark context, it must use a special "yarn-standalone" master URL. This starts the scheduler without forcing it to connect to a cluster. A good way to handle this is to pass "yarn-standalone" as an argument to your program, as shown in the example above.
- We do not request container resources based on the number of cores, so the number of cores given via command-line arguments cannot be guaranteed.
-- Currently, we have not yet integrated with hadoop security. If --user is present, the hadoop_user specified will be used to run the tasks on the cluster. If unspecified, current user will be used (which should be valid in cluster).
- Once hadoop security support is added, and if hadoop cluster is enabled with security, additional restrictions would apply via delegation tokens passed.
+- The local directories used by Spark will be the local directories configured for YARN (the Hadoop YARN config yarn.nodemanager.local-dirs). If the user specifies spark.local.dir, it will be ignored.
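
As background for the "yarn-standalone" note in the hunk above, here is a minimal Scala sketch of a driver that takes its master URL from its first program argument instead of hard-coding one. The object name and argument handling are illustrative only, and the org.apache.spark import path matches later releases (older builds used spark.SparkContext).

    import org.apache.spark.SparkContext

    // Hypothetical driver: the YARN Client passes "yarn-standalone" as the
    // first program argument, so the application never hard-codes a cluster URL.
    object YarnExampleApp {
      def main(args: Array[String]) {
        val master = args(0)   // "yarn-standalone" when launched through the YARN Client
        val sc = new SparkContext(master, "YarnExampleApp")
        // ... application logic using sc ...
        sc.stop()
      }
    }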
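
To illustrate the new note about local directories, the sketch below sets spark.local.dir with the system-property mechanism Spark used at the time; the paths are made-up examples. On YARN the setting is ignored, and executors instead use the directories listed in yarn.nodemanager.local-dirs.

    // Hypothetical snippet, placed before the SparkContext is created.
    // Honoured by other deploy modes, but ignored when running on YARN,
    // where yarn.nodemanager.local-dirs takes precedence.
    System.setProperty("spark.local.dir", "/data/1/spark,/data/2/spark")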