author    Imran Rashid <irashid@cloudera.com>    2015-05-08 16:54:32 +0100
committer Patrick Wendell <patrick@databricks.com>    2015-05-08 16:54:32 +0100
commit    c796be70f36e262b6a2ce75924bd970f40bf4045 (patch)
tree      4899b2784c1aed2b16ed065496f9aa30f6094408 /docs/monitoring.md
parent    ebff7327af5efa9f57c605284de4fae6b050ae0f (diff)
[SPARK-3454] separate json endpoints for data in the UI
Exposes data available in the UI as json over http. Key points:

* new endpoints, handled independently of existing XyzPage classes. Root entrypoint is `JsonRootResource`
* Uses jersey + jackson for routing & converting POJOs into json
* tests against known results in `HistoryServerSuite`
* also fixes some minor issues w/ the UI -- synchronizing on access to `StorageListener` & `StorageStatusListener`, and fixing some inconsistencies w/ the way we handle retained jobs & stages.

Author: Imran Rashid <irashid@cloudera.com>

Closes #5940 from squito/SPARK-3454_better_test_files and squashes the following commits:

1a72ed6 [Imran Rashid] rats
85fdb3e [Imran Rashid] Merge branch 'no_php' into SPARK-3454
1fc65b0 [Imran Rashid] Revert "Revert "[SPARK-3454] separate json endpoints for data in the UI""
1276900 [Imran Rashid] get rid of giant event file, replace w/ smaller one; check both shuffle read & shuffle write
4e12013 [Imran Rashid] just use test case name for expectation file name
863ef64 [Imran Rashid] rename json files to avoid strange file names and not look like php
Diffstat (limited to 'docs/monitoring.md')
-rw-r--r--  docs/monitoring.md  | 74
1 file changed, 74 insertions, 0 deletions
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 8a85928d6d..1e0fc15086 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -174,6 +174,80 @@ making it easy to identify slow tasks, data skew, etc.
Note that the history server only displays completed Spark jobs. One way to signal the completion of a Spark job is to stop the Spark Context explicitly (`sc.stop()`), or, in Python, to use `with SparkContext() as sc:` to handle Spark Context setup and teardown while still showing the job history in the UI.
+## REST API
+
+In addition to viewing the metrics in the UI, you can also access them as JSON. This gives developers
+an easy way to create new visualizations and monitoring tools for Spark. The JSON is available both
+for running applications and through the history server. The endpoints are mounted at `/json/v1`. For
+example, for the history server they would typically be accessible at `http://<server-url>:18080/json/v1`,
+and for a running application at `http://localhost:4040/json/v1`.
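+
+For example, a quick way to explore the API is to fetch the application list and pretty-print the
+response. The following is a minimal sketch using only the Python standard library; the host and
+port are assumptions that depend on your deployment:
+
+```python
+import json
+from urllib.request import urlopen
+
+# Assumes a running application's UI on the default port 4040;
+# for the history server, point at http://<server-url>:18080 instead.
+base_url = "http://localhost:4040/json/v1"
+
+with urlopen(base_url + "/applications") as resp:
+    apps = json.load(resp)
+
+print(json.dumps(apps, indent=2))
+```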
+
+<table class="table">
+ <tr><th>Endpoint</th><th>Meaning</th></tr>
+ <tr>
+ <td><code>/applications</code></td>
+ <td>A list of all applications</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/jobs</code></td>
+ <td>A list of all jobs for a given application</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/jobs/[job-id]</code></td>
+ <td>Details for the given job</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/stages</code></td>
+ <td>A list of all stages for a given application</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/stages/[stage-id]</code></td>
+ <td>A list of all attempts for the given stage</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]</code></td>
+ <td>Details for the given stage attempt</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskSummary</code></td>
+ <td>Summary metrics of all tasks in the given stage attempt</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskList</code></td>
+ <td>A list of all tasks for the given stage attempt</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/executors</code></td>
+ <td>A list of all executors for the given application</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/storage/rdd</code></td>
+ <td>A list of stored RDDs for the given application</td>
+ </tr>
+ <tr>
+ <td><code>/applications/[app-id]/storage/rdd/[rdd-id]</code></td>
+ <td>Details for the storage status of a given RDD</td>
+ </tr>
+</table>
+
+When running on YARN, each application may have multiple attempts, so `[app-id]` above is actually
+`[app-id]/[attempt-id]` in all cases, as sketched below.
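+
+As a sketch, composing the jobs path for a YARN application might look like the following; the
+application and attempt ids here are hypothetical:
+
+```python
+# Hypothetical ids: on YARN, each application attempt gets its own attempt id.
+app_id = "application_1431879453179_0001"
+attempt_id = "1"
+
+jobs_path = "/applications/{}/{}/jobs".format(app_id, attempt_id)
+# -> /applications/application_1431879453179_0001/1/jobs
+```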
+
+These endpoints are strongly versioned to make it easier to develop applications on top of them.
+In particular, Spark guarantees:
+
+* Endpoints will never be removed from a given version
+* Individual fields will never be removed from any given endpoint
+* New endpoints may be added
+* New fields may be added to existing endpoints
+* New versions of the API may be added in the future at a separate endpoint (e.g., `json/v2`). New versions are *not* required to be backwards compatible.
+* API versions may be dropped, but only after co-existing with a new API version for at least one minor release
+
+Note that even when examining the UI of a running application, the `applications/[app-id]` portion is
+still required, even though there is only one application available. For example, to see the list of
+jobs for the running app, you would go to `http://localhost:4040/json/v1/applications/[app-id]/jobs`.
+This keeps the paths consistent in both modes.
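+
+Putting this together, a client can first look up the application id and then use it to build
+subsequent requests. A minimal sketch, again assuming the default port and standard-library-only
+Python; the `id`, `jobId`, and `status` fields shown are taken from the JSON responses and may
+differ across Spark versions:
+
+```python
+import json
+from urllib.request import urlopen
+
+base_url = "http://localhost:4040/json/v1"
+
+# Look up the id of the (single) running application first ...
+with urlopen(base_url + "/applications") as resp:
+    app_id = json.load(resp)[0]["id"]
+
+# ... then use it to build the jobs endpoint for that application.
+with urlopen(base_url + "/applications/{}/jobs".format(app_id)) as resp:
+    jobs = json.load(resp)
+
+for job in jobs:
+    print(job["jobId"], job["status"])
+```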
+
# Metrics
Spark has a configurable metrics system based on the