author    Patrick Wendell <pwendell@apache.org>    2014-07-11 17:23:23 +0000
committer Patrick Wendell <pwendell@apache.org>    2014-07-11 17:23:23 +0000
commit    0beac4e243f85e71554fe04093b09eb1745fea82 (patch)
tree      bc20d10426c5d57e2f189305865dc2bbec447923 /site/docs/1.0.1/monitoring.html
parent    ddec2123ba6ab95543d1b250d4f20fb811c48f09 (diff)
Updating docs for 1.0.1 release
Diffstat (limited to 'site/docs/1.0.1/monitoring.html')
-rw-r--r--  site/docs/1.0.1/monitoring.html  358
1 file changed, 358 insertions, 0 deletions
diff --git a/site/docs/1.0.1/monitoring.html b/site/docs/1.0.1/monitoring.html
new file mode 100644
index 000000000..36f3e83a2
--- /dev/null
+++ b/site/docs/1.0.1/monitoring.html
@@ -0,0 +1,358 @@
+<!DOCTYPE html>
+<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
+<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
+<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
+<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
+ <head>
+ <meta charset="utf-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+ <title>Monitoring and Instrumentation - Spark 1.0.1 Documentation</title>
+ <meta name="description" content="">
+
+
+
+ <link rel="stylesheet" href="css/bootstrap.min.css">
+ <style>
+ body {
+ padding-top: 60px;
+ padding-bottom: 40px;
+ }
+ </style>
+ <meta name="viewport" content="width=device-width">
+ <link rel="stylesheet" href="css/bootstrap-responsive.min.css">
+ <link rel="stylesheet" href="css/main.css">
+
+ <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
+
+ <link rel="stylesheet" href="css/pygments-default.css">
+
+
+ <!-- Google analytics script -->
+ <script type="text/javascript">
+ var _gaq = _gaq || [];
+ _gaq.push(['_setAccount', 'UA-32518208-1']);
+ _gaq.push(['_trackPageview']);
+
+ (function() {
+ var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+ ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+ var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+ })();
+ </script>
+
+
+ </head>
+ <body>
+ <!--[if lt IE 7]>
+ <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
+ <![endif]-->
+
+ <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
+
+ <div class="navbar navbar-fixed-top" id="topbar">
+ <div class="navbar-inner">
+ <div class="container">
+ <div class="brand"><a href="index.html">
+ <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">1.0.1</span>
+ </div>
+ <ul class="nav">
+ <!--TODO(andyk): Add class="active" attribute to li somehow.-->
+ <li><a href="index.html">Overview</a></li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="quick-start.html">Quick Start</a></li>
+ <li><a href="programming-guide.html">Spark Programming Guide</a></li>
+ <li class="divider"></li>
+ <li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
+ <li><a href="sql-programming-guide.html">Spark SQL</a></li>
+ <li><a href="mllib-guide.html">MLlib (Machine Learning)</a></li>
+ <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
+ <li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="api/scala/index.html#org.apache.spark.package">Scaladoc</a></li>
+ <li><a href="api/java/index.html">Javadoc</a></li>
+ <li><a href="api/python/index.html">Python API</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="cluster-overview.html">Overview</a></li>
+ <li><a href="submitting-applications.html">Submitting Applications</a></li>
+ <li class="divider"></li>
+ <li><a href="ec2-scripts.html">Amazon EC2</a></li>
+ <li><a href="spark-standalone.html">Standalone Mode</a></li>
+ <li><a href="running-on-mesos.html">Mesos</a></li>
+ <li><a href="running-on-yarn.html">YARN</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="configuration.html">Configuration</a></li>
+ <li><a href="monitoring.html">Monitoring</a></li>
+ <li><a href="tuning.html">Tuning Guide</a></li>
+ <li><a href="job-scheduling.html">Job Scheduling</a></li>
+ <li><a href="security.html">Security</a></li>
+ <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
+ <li><a href="hadoop-third-party-distributions.html">3<sup>rd</sup>-Party Hadoop Distros</a></li>
+ <li class="divider"></li>
+ <li><a href="building-with-maven.html">Building Spark with Maven</a></li>
+ <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
+ </ul>
+ </li>
+ </ul>
+ <!--<p class="navbar-text pull-right"><span class="version-text">v1.0.1</span></p>-->
+ </div>
+ </div>
+ </div>
+
+ <div class="container" id="content">
+
+ <h1 class="title">Monitoring and Instrumentation</h1>
+
+
+ <p>There are several ways to monitor Spark applications: web UIs, metrics, and external instrumentation.</p>
+
+<h1 id="web-interfaces">Web Interfaces</h1>
+
+<p>Every SparkContext launches a web UI, by default on port 4040, that
+displays useful information about the application. This includes:</p>
+
+<ul>
+ <li>A list of scheduler stages and tasks</li>
+ <li>A summary of RDD sizes and memory usage</li>
+ <li>Environmental information</li>
+ <li>Information about the running executors</li>
+</ul>
+
+<p>You can access this interface by simply opening <code>http://&lt;driver-node&gt;:4040</code> in a web browser.
+If multiple SparkContexts are running on the same host, they will bind to successive ports
+beginning with 4040 (4041, 4042, etc.).</p>
+
+<p>Note that this information is only available for the duration of the application by default.
+To view the web UI after the fact, set <code>spark.eventLog.enabled</code> to true before starting the
+application. This configures Spark to log the events that encode the information displayed
+in the UI to persistent storage.</p>
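+
+<p>As a minimal sketch, event logging can be enabled for every application submitted on the
+cluster by adding lines such as the following to <code>conf/spark-defaults.conf</code>; the
+related <code>spark.eventLog.dir</code> property controls where the logs are written, and the
+HDFS path shown is an illustrative choice, not a default:</p>
+
+<pre><code>spark.eventLog.enabled  true
+spark.eventLog.dir      hdfs://namenode/shared/spark-events
+</code></pre>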
+
+<h2 id="viewing-after-the-fact">Viewing After the Fact</h2>
+
+<p>Spark&#8217;s Standalone Mode cluster manager also has its own
+<a href="spark-standalone.html#monitoring-and-logging">web UI</a>. If an application has logged events over
+the course of its lifetime, then the Standalone master&#8217;s web UI will automatically re-render the
+application&#8217;s UI after the application has finished.</p>
+
+<p>If Spark is run on Mesos or YARN, it is still possible to reconstruct the UI of a finished
+application through Spark&#8217;s history server, provided that the application&#8217;s event logs exist.
+You can start the history server by executing:</p>
+
+<pre><code>./sbin/start-history-server.sh &lt;base-logging-directory&gt;
+</code></pre>
+
+<p>The base logging directory must be supplied, and should contain sub-directories, each of
+which represents an application&#8217;s event logs. This creates a web interface at
+<code>http://&lt;server-url&gt;:18080</code> by default. The history server can be configured as follows:</p>
+
+<table class="table">
+ <tr><th style="width:21%">Environment Variable</th><th>Meaning</th></tr>
+ <tr>
+ <td><code>SPARK_DAEMON_MEMORY</code></td>
+ <td>Memory to allocate to the history server (default: 512m).</td>
+ </tr>
+ <tr>
+ <td><code>SPARK_DAEMON_JAVA_OPTS</code></td>
+ <td>JVM options for the history server (default: none).</td>
+ </tr>
+ <tr>
+ <td><code>SPARK_PUBLIC_DNS</code></td>
+ <td>
+ The public address for the history server. If this is not set, links to application history
+ may use the internal address of the server, resulting in broken links (default: none).
+ </td>
+ </tr>
+ <tr>
+ <td><code>SPARK_HISTORY_OPTS</code></td>
+ <td>
+ <code>spark.history.*</code> configuration options for the history server (default: none).
+ </td>
+ </tr>
+</table>
+
+<table class="table">
+ <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+ <tr>
+ <td>spark.history.updateInterval</td>
+ <td>10</td>
+ <td>
+ The period, in seconds, at which information displayed by this history server is updated.
+ Each update checks for any changes made to the event logs in persistent storage.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.retainedApplications</td>
+ <td>250</td>
+ <td>
+ The number of application UIs to retain. If this cap is exceeded, then the oldest
+ applications will be removed.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.ui.port</td>
+ <td>18080</td>
+ <td>
+ The port to which the web interface of the history server binds.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.kerberos.enabled</td>
+ <td>false</td>
+ <td>
+ Indicates whether the history server should use Kerberos to log in. This is useful
+ if the history server is accessing HDFS files on a secure Hadoop cluster. If this is
+ true, it uses the configs <code>spark.history.kerberos.principal</code> and
+ <code>spark.history.kerberos.keytab</code>.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.kerberos.principal</td>
+ <td>(none)</td>
+ <td>
+ Kerberos principal name for the History Server.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.kerberos.keytab</td>
+ <td>(none)</td>
+ <td>
+ Location of the Kerberos keytab file for the History Server.
+ </td>
+ </tr>
+ <tr>
+ <td>spark.history.ui.acls.enable</td>
+ <td>false</td>
+ <td>
+ Specifies whether ACLs should be checked to authorize users viewing the applications.
+ If enabled, access control checks are made regardless of what the individual application had
+ set for <code>spark.ui.acls.enable</code> when the application was run. The application owner
+ will always have authorization to view their own application and any users specified via
+ <code>spark.ui.view.acls</code> when the application was run will also have authorization
+ to view that application.
+ If disabled, no access control checks are made.
+ </td>
+ </tr>
+</table>
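+
+<p>For instance, a sketch of overriding two of the properties above through the
+<code>SPARK_HISTORY_OPTS</code> environment variable in <code>conf/spark-env.sh</code>
+before running the start script (the particular port and retention values are illustrative):</p>
+
+<pre><code>export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18081 -Dspark.history.retainedApplications=50"
+</code></pre>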
+
+<p>Note that in all of these UIs, the tables are sortable by clicking their headers,
+making it easy to identify slow tasks, data skew, etc.</p>
+
+<h1 id="metrics">Metrics</h1>
+
+<p>Spark has a configurable metrics system based on the
+<a href="http://metrics.codahale.com/">Coda Hale Metrics Library</a>.
+This allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV
+files. The metrics system is configured via a configuration file that Spark expects to be present
+at <code>$SPARK_HOME/conf/metrics.properties</code>. A custom file location can be specified via the
+<code>spark.metrics.conf</code> <a href="configuration.html#spark-properties">configuration property</a>.
+Spark&#8217;s metrics are decoupled into different
+<em>instances</em> corresponding to Spark components. Within each instance, you can configure a
+set of sinks to which metrics are reported. The following instances are currently supported:</p>
+
+<ul>
+ <li><code>master</code>: The Spark standalone master process.</li>
+ <li><code>applications</code>: A component within the master which reports on various applications.</li>
+ <li><code>worker</code>: A Spark standalone worker process.</li>
+ <li><code>executor</code>: A Spark executor.</li>
+ <li><code>driver</code>: The Spark driver process (the process in which your SparkContext is created).</li>
+</ul>
+
+<p>Each instance can report to zero or more <em>sinks</em>. Sinks are contained in the
+<code>org.apache.spark.metrics.sink</code> package:</p>
+
+<ul>
+ <li><code>ConsoleSink</code>: Logs metrics information to the console.</li>
+ <li><code>CSVSink</code>: Exports metrics data to CSV files at regular intervals.</li>
+ <li><code>JmxSink</code>: Registers metrics for viewing in a JMX console.</li>
+ <li><code>MetricsServlet</code>: Adds a servlet within the existing Spark UI to serve metrics data as JSON data.</li>
+ <li><code>GraphiteSink</code>: Sends metrics to a Graphite node.</li>
+</ul>
+
+<p>Spark also supports a Ganglia sink which is not included in the default build due to
+licensing restrictions:</p>
+
+<ul>
+ <li><code>GangliaSink</code>: Sends metrics to a Ganglia node or multicast group.</li>
+</ul>
+
+<p>To install the <code>GangliaSink</code> you&#8217;ll need to perform a custom build of Spark. <em><strong>Note that
+by embedding this library you will include <a href="http://www.gnu.org/copyleft/lesser.html">LGPL</a>-licensed
+code in your Spark package</strong></em>. For sbt users, set the
+<code>SPARK_GANGLIA_LGPL</code> environment variable before building. For Maven users, enable
+the <code>-Pspark-ganglia-lgpl</code> profile. In addition to modifying the cluster&#8217;s Spark build,
+user applications will need to link to the <code>spark-ganglia-lgpl</code> artifact.</p>
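+
+<p>As an illustrative sketch, the two build variants could be invoked as follows (the
+<code>-DskipTests</code> flag and the <code>assembly</code> target are common conventions
+assumed here, not requirements):</p>
+
+<pre><code># Maven: enable the Ganglia profile
+mvn -Pspark-ganglia-lgpl -DskipTests clean package
+
+# sbt: set the environment variable before building
+SPARK_GANGLIA_LGPL=true sbt/sbt assembly
+</code></pre>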
+
+<p>The syntax of the metrics configuration file is defined in an example configuration file,
+<code>$SPARK_HOME/conf/metrics.properties.template</code>.</p>
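+
+<p>As a minimal sketch following the template&#8217;s <code>instance.sink.name.option</code>
+key syntax, the entries below would have the driver and executors report their metrics to the
+console every 10 seconds (the sink name <code>console</code> and the period are arbitrary
+choices):</p>
+
+<pre><code># Report driver and executor metrics to stdout every 10 seconds
+driver.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
+driver.sink.console.period=10
+driver.sink.console.unit=seconds
+
+executor.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
+executor.sink.console.period=10
+executor.sink.console.unit=seconds
+</code></pre>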
+
+<h1 id="advanced-instrumentation">Advanced Instrumentation</h1>
+
+<p>Several external tools can be used to help profile the performance of Spark jobs:</p>
+
+<ul>
+ <li>Cluster-wide monitoring tools, such as <a href="http://ganglia.sourceforge.net/">Ganglia</a>, can provide
+insight into overall cluster utilization and resource bottlenecks. For instance, a Ganglia
+dashboard can quickly reveal whether a particular workload is disk bound, network bound, or
+CPU bound.</li>
+ <li>OS profiling tools such as <a href="http://dag.wieers.com/home-made/dstat/">dstat</a>,
+<a href="http://linux.die.net/man/1/iostat">iostat</a>, and <a href="http://linux.die.net/man/1/iotop">iotop</a>
+can provide fine-grained profiling on individual nodes.</li>
+ <li>JVM utilities such as <code>jstack</code> for providing stack traces, <code>jmap</code> for creating heap dumps,
+<code>jstat</code> for reporting time-series statistics, and <code>jconsole</code> for visually exploring various JVM
+properties are useful for those comfortable with JVM internals; a short example follows this list.</li>
+</ul>
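+
+<p>As a quick illustration of the JVM utilities above, the following commands capture a stack
+trace and a heap dump from a running executor (replace <code>&lt;pid&gt;</code> with the
+executor&#8217;s process id; the output file names are arbitrary):</p>
+
+<pre><code>jstack &lt;pid&gt; &gt; executor-stacks.txt
+jmap -dump:format=b,file=executor-heap.hprof &lt;pid&gt;
+</code></pre>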
+
+
+ </div> <!-- /container -->
+
+ <script src="js/vendor/jquery-1.8.0.min.js"></script>
+ <script src="js/vendor/bootstrap.min.js"></script>
+ <script src="js/main.js"></script>
+
+ <!-- MathJax Section -->
+ <script type="text/x-mathjax-config">
+ MathJax.Hub.Config({
+ TeX: { equationNumbers: { autoNumber: "AMS" } }
+ });
+ </script>
+ <script>
+ // Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
+ // We could use "//cdn.mathjax...", but that won't support "file://".
+ (function(d, script) {
+ script = d.createElement('script');
+ script.type = 'text/javascript';
+ script.async = true;
+ script.onload = function(){
+ MathJax.Hub.Config({
+ tex2jax: {
+ inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
+ displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
+ processEscapes: true,
+ skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
+ }
+ });
+ };
+ script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
+ 'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML';
+ d.getElementsByTagName('head')[0].appendChild(script);
+ }(document));
+ </script>
+ </body>
+</html>