path: root/docs/monitoring.md
authorAndrew Or <andrewor14@gmail.com>2014-04-10 10:39:34 -0700
committerPatrick Wendell <pwendell@gmail.com>2014-04-10 10:39:34 -0700
commit79820fe825ed7c09d55f50503b7ab53d4585e5f7 (patch)
tree7263a0fa769d72e04c46b934295041224ed07a41 /docs/monitoring.md
parenta74fbbbca8f0d89b2e0e4e8751a93d33efc4fa9e (diff)
[SPARK-1276] Add a HistoryServer to render persisted UI
The new feature of event logging, introduced in #42, allows the user to persist the details of his/her Spark application to storage, and later replay these events to reconstruct an after-the-fact SparkUI. Currently, however, a persisted UI can only be rendered through the standalone Master. This greatly limits the use case of this new feature, as many people also run Spark on YARN / Mesos.

This PR introduces a new entity called the HistoryServer, which, given a log directory, keeps track of all completed applications independently of a Spark Master. Unlike the Master, the HistoryServer need not be running while the application is still running. It is relatively lightweight in that it only maintains static information about applications and performs no scheduling.

To quickly test it out, generate event logs with `spark.eventLog.enabled=true` and run `sbin/start-history-server.sh <log-dir-path>`. Your HistoryServer awaits on port 18080. Comments and feedback are most welcome.

---

A few other changes introduced in this PR include refactoring the WebUI interface, which is beginning to have a lot of duplicate code now that we have added more functionality to it. Two new SparkListenerEvents have been introduced (SparkListenerApplicationStart/End) to keep track of application name and start/finish times. This PR also clarifies the semantics of the ReplayListenerBus introduced in #42.

A potential TODO in the future (not part of this PR) is to render live applications in addition to just completed applications. This is useful when applications fail, a condition that our current HistoryServer does not handle unless the user manually signals application completion (by creating the APPLICATION_COMPLETION file). Handling live applications becomes significantly more challenging, however, because it is now necessary to render the same SparkUI multiple times. To avoid reading the entire log every time, which is inefficient, we must resume reading the log from where we previously left off, but this becomes fairly complicated because we must deal with the arbitrary behavior of each input stream.

Author: Andrew Or <andrewor14@gmail.com>

Closes #204 from andrewor14/master and squashes the following commits:

7b7234c [Andrew Or] Finished -> Completed
b158d98 [Andrew Or] Address Patrick's comments
69d1b41 [Andrew Or] Do not block on posting SparkListenerApplicationEnd
19d5dd0 [Andrew Or] Merge github.com:apache/spark
f7f5bf0 [Andrew Or] Make history server's web UI port a Spark configuration
2dfb494 [Andrew Or] Decouple checking for application completion from replaying
d02dbaa [Andrew Or] Expose Spark version and include it in event logs
2282300 [Andrew Or] Add documentation for the HistoryServer
567474a [Andrew Or] Merge github.com:apache/spark
6edf052 [Andrew Or] Merge github.com:apache/spark
19e1fb4 [Andrew Or] Address Thomas' comments
248cb3d [Andrew Or] Limit number of live applications + add configurability
a3598de [Andrew Or] Do not close file system with ReplayBus + fix bind address
bc46fc8 [Andrew Or] Merge github.com:apache/spark
e2f4ff9 [Andrew Or] Merge github.com:apache/spark
050419e [Andrew Or] Merge github.com:apache/spark
81b568b [Andrew Or] Fix strange error messages...
0670743 [Andrew Or] Decouple page rendering from loading files from disk
1b2f391 [Andrew Or] Minor changes
a9eae7e [Andrew Or] Merge branch 'master' of github.com:apache/spark
d5154da [Andrew Or] Styling and comments
5dbfbb4 [Andrew Or] Merge branch 'master' of github.com:apache/spark
60bc6d5 [Andrew Or] First complete implementation of HistoryServer (only for finished apps)
7584418 [Andrew Or] Report application start/end times to HistoryServer
8aac163 [Andrew Or] Add basic application table
c086bd5 [Andrew Or] Add HistoryServer and scripts ++ Refactor WebUI interface
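The quick-test instructions in the description above can be sketched as a short shell session. This is a sketch only: the log directory path is a placeholder, and exactly how `spark.eventLog.enabled` is passed to the application (SparkConf in application code, system property, etc.) depends on your deployment.

```shell
# Sketch, under stated assumptions: enable event logging for the application,
# then start the HistoryServer against the same base log directory.

# 1. Run the application with event logging enabled, so that it writes its
#    events to a directory the history server can read (placeholder path):
#      spark.eventLog.enabled=true

# 2. Start the history server on the base logging directory:
./sbin/start-history-server.sh /tmp/spark-events

# 3. Browse to http://<server-url>:18080 to view completed applications.
```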
Diffstat (limited to 'docs/monitoring.md')
-rw-r--r--  docs/monitoring.md  70
1 file changed, 65 insertions, 5 deletions
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 15bfb04178..4c91c3a592 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -12,17 +12,77 @@ displays useful information about the application. This includes:
* A list of scheduler stages and tasks
* A summary of RDD sizes and memory usage
-* Information about the running executors
* Environmental information.
+* Information about the running executors
You can access this interface by simply opening `http://<driver-node>:4040` in a web browser.
-If multiple SparkContexts are running on the same host, they will bind to succesive ports
+If multiple SparkContexts are running on the same host, they will bind to successive ports
beginning with 4040 (4041, 4042, etc).
-Spark's Standalone Mode cluster manager also has its own
-[web UI](spark-standalone.html#monitoring-and-logging).
+Note that this information is only available for the duration of the application by default.
+To view the web UI after the fact, set `spark.eventLog.enabled` to true before starting the
+application. This configures Spark to log Spark events that encode the information displayed
+in the UI to persisted storage.
-Note that in both of these UIs, the tables are sortable by clicking their headers,
+## Viewing After the Fact
+
+Spark's Standalone Mode cluster manager also has its own
+[web UI](spark-standalone.html#monitoring-and-logging). If an application has logged events over
+the course of its lifetime, then the Standalone master's web UI will automatically re-render the
+application's UI after the application has finished.
+
+If Spark is run on Mesos or YARN, it is still possible to reconstruct the UI of a finished
+application through Spark's history server, provided that the application's event logs exist.
+You can start the history server by executing:
+
+ ./sbin/start-history-server.sh <base-logging-directory>
+
+The base logging directory must be supplied, and should contain sub-directories that each
+represent an application's event logs. This creates a web interface at
+`http://<server-url>:18080` by default. The history server depends on the following environment variables:
+
+<table class="table">
+ <tr><th style="width:21%">Environment Variable</th><th>Meaning</th></tr>
+ <tr>
+ <td><code>SPARK_DAEMON_MEMORY</code></td>
+ <td>Memory to allocate to the history server (default: 512m).</td>
+ </tr>
+ <tr>
+ <td><code>SPARK_DAEMON_JAVA_OPTS</code></td>
+ <td>JVM options for the history server (default: none).</td>
+ </tr>
+</table>
+
+Further, the history server can be configured as follows:
+
+<table class="table">
+ <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+ <tr>
+ <td>spark.history.updateInterval</td>
+ <td>10</td>
+ <td>
+ The period, in seconds, at which information displayed by this history server is updated.
+ Each update checks for any changes made to the event logs in persisted storage.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.retainedApplications</td>
+ <td>250</td>
+ <td>
+ The number of application UIs to retain. If this cap is exceeded, then the oldest
+ applications will be removed.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.ui.port</td>
+ <td>18080</td>
+ <td>
+ The port to which the web interface of the history server binds.
+ </td>
+ </tr>
+</table>
+
+Note that in all of these UIs, the tables are sortable by clicking their headers,
making it easy to identify slow tasks, data skew, etc.
# Metrics
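As a usage sketch tying the two configuration tables above together: the environment variables are exported before launching the server, and one plausible way to set the `spark.history.*` properties is to pass them as JVM system properties through `SPARK_DAEMON_JAVA_OPTS`. The `-D` mechanism and the log directory path below are assumptions, not confirmed by this patch; adjust to your deployment.

```shell
# Sketch only: configure the history server daemon via its environment
# variables, then pass spark.history.* settings as -D system properties
# (assumption: the daemon reads them as JVM system properties).
export SPARK_DAEMON_MEMORY=1g
export SPARK_DAEMON_JAVA_OPTS="-Dspark.history.updateInterval=30 \
  -Dspark.history.retainedApplications=100 \
  -Dspark.history.ui.port=18080"

# Placeholder path for the base event-log directory:
./sbin/start-history-server.sh /var/log/spark-events
```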