author    Alex Bozarth <ajbozart@us.ibm.com>    2016-08-30 16:33:54 -0500
committer Tom Graves <tgraves@yahoo-inc.com>    2016-08-30 16:33:54 -0500
commit    f7beae6da02e6b70a34c714e93136becbde7679b (patch)
tree      18c0c1d197da796e636a5479e6347cf5d3480357 /docs
parent    02ac379e8645ce5d32e033f6683136da16fbe584 (diff)
[SPARK-17243][WEB UI] Spark 2.0 History Server won't load with very large application history
## What changes were proposed in this pull request?

With the new History Server, the summary page loads the application list via the REST API, which makes it very slow or even impossible to load with a large (10K+) application history. This PR fixes this by adding the `spark.history.ui.maxApplications` conf to limit the number of applications the History Server displays. This is accomplished using a new optional `limit` param for the `applications` API. (Note this only applies to what the summary page displays; all the Application UIs are still accessible if the user knows the App ID and goes to the Application UI directly.)

I've also added a new test for the `limit` param in `HistoryServerSuite.scala`.

## How was this patch tested?

Manual testing and dev/run-tests

Author: Alex Bozarth <ajbozart@us.ibm.com>

Closes #14835 from ajbozarth/spark17243.
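As a rough sketch of how the new conf might be used (the property name comes from this patch; the 5000 value is just an illustrative choice):

```
# In spark-defaults.conf (or the history server's properties file):
# cap the history summary page at 5000 applications (default: Int.MaxValue)
spark.history.ui.maxApplications  5000
```

The new `limit` parameter can likewise be exercised directly against the REST API, assuming the default history server port 18080 and the `/api/v1` base path described in monitoring.md:

```
# Ask the history server's REST API for at most 10 applications
curl "http://<history-server-host>:18080/api/v1/applications?limit=10"
```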
Diffstat (limited to 'docs')
-rw-r--r--  docs/monitoring.md | 16
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 6fdf87b4be..5804e4f26c 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -114,8 +114,17 @@ The history server can be configured as follows:
<td>spark.history.retainedApplications</td>
<td>50</td>
<td>
- The number of application UIs to retain. If this cap is exceeded, then the oldest
- applications will be removed.
+ The number of applications for which to retain UI data in the cache. If this cap is exceeded, then
+ the oldest applications will be removed from the cache. If an application is not in the cache,
+ it will have to be loaded from disk if it's accessed from the UI.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.ui.maxApplications</td>
+ <td>Int.MaxValue</td>
+ <td>
+ The number of applications to display on the history summary page. Application UIs are still
+ available by accessing their URLs directly even if they are not displayed on the history summary page.
</td>
</tr>
<tr>
@@ -242,7 +251,8 @@ can be identified by their `[attempt-id]`. In the API listed below, when running
<br>Examples:
<br><code>?minDate=2015-02-10</code>
<br><code>?minDate=2015-02-03T16:42:40.000GMT</code>
- <br><code>?maxDate=[date]</code> latest date/time to list; uses same format as <code>minDate</code>.</td>
+ <br><code>?maxDate=[date]</code> latest date/time to list; uses same format as <code>minDate</code>.
+ <br><code>?limit=[limit]</code> limits the number of applications listed.</td>
</tr>
<tr>
<td><code>/applications/[app-id]/jobs</code></td>