authorWeiqing Yang <yangweiqing001@gmail.com>2016-11-17 11:13:22 +0000
committerSean Owen <sowen@cloudera.com>2016-11-17 11:13:22 +0000
commita3cac7bd86a6fe8e9b42da1bf580aaeb59378304 (patch)
treea42022fbf0d01726f33aa40355cdec13e06e850d
parent07b3f045cd6f79b92bc86b3b1b51d3d5e6bd37ce (diff)
[YARN][DOC] Remove non-Yarn specific configurations from running-on-yarn.md
## What changes were proposed in this pull request?

Remove `spark.driver.memory`, `spark.executor.memory`, `spark.driver.cores`, and `spark.executor.cores` from `running-on-yarn.md`, as they are not YARN-specific and are already defined in `configuration.md`.

## How was this patch tested?

Build passed and manually checked.

Author: Weiqing Yang <yangweiqing001@gmail.com>

Closes #15869 from weiqingy/yarnDoc.
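The four properties removed below are still settable through the generic mechanisms covered in `configuration.md`. As a sketch (the values and the application jar name `my_app.jar` are illustrative, not recommendations from this patch):

```shell
# Setting the de-duplicated configurations via spark-submit; these are
# generic Spark options rather than YARN-specific ones.
# Note, per the removed doc text: in client mode spark.driver.memory cannot
# be set through SparkConf in the application, because the driver JVM has
# already started; use --driver-memory or the default properties file.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2g \
  --driver-cores 1 \
  --executor-memory 4g \
  --executor-cores 2 \
  my_app.jar
```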
-rw-r--r-- docs/running-on-yarn.md | 36
1 file changed, 0 insertions(+), 36 deletions(-)
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index fe0221ce7c..4d1fafc07b 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -118,28 +118,6 @@ To use a custom metrics.properties for the application master and executors, upd
</td>
</tr>
<tr>
- <td><code>spark.driver.memory</code></td>
- <td>1g</td>
- <td>
- Amount of memory to use for the driver process, i.e. where SparkContext is initialized.
- (e.g. <code>1g</code>, <code>2g</code>).
-
- <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
- directly in your application, because the driver JVM has already started at that point.
- Instead, please set this through the <code>--driver-memory</code> command line option
- or in your default properties file.
- </td>
-</tr>
-<tr>
- <td><code>spark.driver.cores</code></td>
- <td><code>1</code></td>
- <td>
- Number of cores used by the driver in YARN cluster mode.
- Since the driver is run in the same JVM as the YARN Application Master in cluster mode, this also controls the cores used by the YARN Application Master.
- In client mode, use <code>spark.yarn.am.cores</code> to control the number of cores used by the YARN Application Master instead.
- </td>
-</tr>
-<tr>
<td><code>spark.yarn.am.cores</code></td>
<td><code>1</code></td>
<td>
@@ -234,13 +212,6 @@ To use a custom metrics.properties for the application master and executors, upd
</td>
</tr>
<tr>
- <td><code>spark.executor.cores</code></td>
- <td>1 in YARN mode, all the available cores on the worker in standalone mode.</td>
- <td>
- The number of cores to use on each executor. For YARN and standalone mode only.
- </td>
-</tr>
-<tr>
<td><code>spark.executor.instances</code></td>
<td><code>2</code></td>
<td>
@@ -248,13 +219,6 @@ To use a custom metrics.properties for the application master and executors, upd
</td>
</tr>
<tr>
- <td><code>spark.executor.memory</code></td>
- <td>1g</td>
- <td>
- Amount of memory to use per executor process (e.g. <code>2g</code>, <code>8g</code>).
- </td>
-</tr>
-<tr>
<td><code>spark.yarn.executor.memoryOverhead</code></td>
<td>executorMemory * 0.10, with minimum of 384 </td>
<td>