 conf/metrics.properties.template | 2 +-
 docs/monitoring.md               | 6 +++---
 docs/spark-standalone.md         | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/conf/metrics.properties.template b/conf/metrics.properties.template
index 1c3d94e1b0..30bcab0c93 100644
--- a/conf/metrics.properties.template
+++ b/conf/metrics.properties.template
@@ -67,7 +67,7 @@
# period 10 Poll period
# unit seconds Units of poll period
# ttl 1 TTL of messages sent by Ganglia
-# mode multicast Ganglia network mode ('unicast' or 'mulitcast')
+# mode multicast Ganglia network mode ('unicast' or 'multicast')
# org.apache.spark.metrics.sink.JmxSink
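For context on the hunk above, a minimal sketch of a Ganglia sink section in metrics.properties might look like the following. The period, unit, ttl, and mode options are the ones documented in this hunk; the class, host, and port keys and the placeholder values are assumptions based on how other sinks in the template are configured, not part of this change.

  # Hypothetical GangliaSink configuration applied to all instances ('*')
  *.sink.ganglia.class=org.apache.spark.metrics.sink.GangliaSink
  *.sink.ganglia.host=<ganglia-host-or-multicast-group>
  *.sink.ganglia.port=8649
  *.sink.ganglia.period=10
  *.sink.ganglia.unit=seconds
  *.sink.ganglia.ttl=1
  # 'unicast' or 'multicast', per the corrected comment above
  *.sink.ganglia.mode=multicast
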
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 0d5eb7065e..e9b1d2b2f4 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -19,7 +19,7 @@ You can access this interface by simply opening `http://<driver-node>:4040` in a
If multiple SparkContexts are running on the same host, they will bind to successive ports
beginning with 4040 (4041, 4042, etc).
-Spark's Standlone Mode cluster manager also has its own
+Spark's Standalone Mode cluster manager also has its own
[web UI](spark-standalone.html#monitoring-and-logging).
Note that in both of these UIs, the tables are sortable by clicking their headers,
@@ -31,7 +31,7 @@ Spark has a configurable metrics system based on the
[Coda Hale Metrics Library](http://metrics.codahale.com/).
This allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV
files. The metrics system is configured via a configuration file that Spark expects to be present
-at `$SPARK_HOME/conf/metrics.conf`. A custom file location can be specified via the
+at `$SPARK_HOME/conf/metrics.properties`. A custom file location can be specified via the
`spark.metrics.conf` [configuration property](configuration.html#spark-properties).
Spark's metrics are decoupled into different
_instances_ corresponding to Spark components. Within each instance, you can configure a
@@ -54,7 +54,7 @@ Each instance can report to zero or more _sinks_. Sinks are contained in the
* `GraphiteSink`: Sends metrics to a Graphite node.
The syntax of the metrics configuration file is defined in an example configuration file,
-`$SPARK_HOME/conf/metrics.conf.template`.
+`$SPARK_HOME/conf/metrics.properties.template`.
# Advanced Instrumentation
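Tying the monitoring.md hunks together: entries in the metrics.properties file referenced above follow an instance.sink.name.option pattern. The sketch below is illustrative only; the sink class names appear in the shipped template, but the specific options and the directory path are assumptions.

  # Attach a CSV sink to the driver instance, polled every 10 seconds
  driver.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
  driver.sink.csv.period=10
  driver.sink.csv.unit=seconds
  driver.sink.csv.directory=/tmp/spark-metrics
  # '*' applies a sink to every instance (e.g. master, worker, driver, executor)
  *.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

If the file lives somewhere other than $SPARK_HOME/conf/metrics.properties, its location can be supplied through the spark.metrics.conf property mentioned in the hunk above (for example, spark.metrics.conf=/path/to/metrics.properties; the path is hypothetical).
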
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 3388c14ec4..51fb3a4f7f 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -10,7 +10,7 @@ In addition to running on the Mesos or YARN cluster managers, Spark also provide
# Installing Spark Standalone to a Cluster
-To install Spark Standlone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).
+To install Spark Standalone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).
# Starting a Cluster Manually