author    kballou <kballou@devnulllabs.io>  2014-07-31 14:58:52 -0700
committer Josh Rosen <joshrosen@apache.org>  2014-07-31 14:58:52 -0700
commit    cc820502fb08f71b03237103153c34487b2600b4 (patch)
tree      8aa448dfafaf0bebd6578cee46626b4693cbd0a9 /docs
parent    e02136214a6c2635e88c36b1f530a97e975d83e3 (diff)
Docs: monitoring, streaming programming guide
Fix several awkward wordings and grammatical issues in the following documents:

* docs/monitoring.md
* docs/streaming-programming-guide.md

Author: kballou <kballou@devnulllabs.io>

Closes #1662 from kennyballou/grammar_fixes and squashes the following commits:

e1b8ad6 [kballou] Docs: monitoring, streaming programming guide
Diffstat (limited to 'docs')
-rw-r--r--  docs/monitoring.md                   4
-rw-r--r--  docs/streaming-programming-guide.md  4
2 files changed, 4 insertions, 4 deletions
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 84073fe4d9..d07ec4a57a 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -33,7 +33,7 @@ application's UI after the application has finished.
If Spark is run on Mesos or YARN, it is still possible to reconstruct the UI of a finished
application through Spark's history server, provided that the application's event logs exist.
-You can start a the history server by executing:
+You can start the history server by executing:
./sbin/start-history-server.sh
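The history server can only replay applications whose event logs were actually written, so event logging must be enabled on the applications themselves. A minimal sketch of the relevant spark-defaults.conf entries, with a placeholder log directory that is not part of this commit:

    spark.eventLog.enabled           true
    spark.eventLog.dir               hdfs://namenode/shared/spark-logs
    spark.history.fs.logDirectory    hdfs://namenode/shared/spark-logs

The history server reads from spark.history.fs.logDirectory, so it should point at the same location the applications log to.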
@@ -106,7 +106,7 @@ follows:
<td>
Indicates whether the history server should use kerberos to login. This is useful
if the history server is accessing HDFS files on a secure Hadoop cluster. If this is
- true it looks uses the configs <code>spark.history.kerberos.principal</code> and
+ true, it uses the configs <code>spark.history.kerberos.principal</code> and
<code>spark.history.kerberos.keytab</code>.
</td>
</tr>
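For context, the kerberos properties this cell describes normally sit together in spark-defaults.conf; a minimal sketch with placeholder principal and keytab values (neither appears in this commit):

    spark.history.kerberos.enabled    true
    spark.history.kerberos.principal  history@EXAMPLE.COM
    spark.history.kerberos.keytab     /etc/security/keytabs/history.keytab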
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 90a0eef60c..7b8b793343 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -939,7 +939,7 @@ Receiving multiple data streams can therefore be achieved by creating multiple i
and configuring them to receive different partitions of the data stream from the source(s).
For example, a single Kafka input stream receiving two topics of data can be split into two
Kafka input streams, each receiving only one topic. This would run two receivers on two workers,
-thus allowing data to received in parallel, and increasing overall throughput.
+thus allowing data to be received in parallel, and increasing overall throughput.
Another parameter that should be considered is the receiver's blocking interval. For most receivers,
the received data is coalesced together into large blocks of data before storing inside Spark's memory.
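The topic-splitting approach described above maps onto the Kafka receiver API of this era; a minimal Scala sketch, assuming hypothetical topic names and a local ZooKeeper quorum (none of these identifiers come from the commit):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val conf = new SparkConf().setAppName("ParallelKafkaReceivers")
    val ssc = new StreamingContext(conf, Seconds(2))

    // One input DStream per topic, so each runs its own receiver on a worker.
    val streams = Seq("topicA", "topicB").map { topic =>
      KafkaUtils.createStream(ssc, "localhost:2181", "example-group", Map(topic -> 1))
    }

    // Union the per-topic streams back into one logical stream for processing.
    val unified = ssc.union(streams)
    unified.count().print()

    ssc.start()
    ssc.awaitTermination()

Unioning the two receivers' outputs restores a single stream while keeping ingestion parallel; the blocking interval mentioned next is tuned separately, through spark.streaming.blockInterval.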
@@ -980,7 +980,7 @@ If the number of tasks launched per second is high (say, 50 or more per second),
of sending out tasks to the slaves maybe significant and will make it hard to achieve sub-second
latencies. The overhead can be reduced by the following changes:
-* **Task Serialization**: Using Kryo serialization for serializing tasks can reduced the task
+* **Task Serialization**: Using Kryo serialization for serializing tasks can reduce the task
sizes, and therefore reduce the time taken to send them to the slaves.
* **Execution mode**: Running Spark in Standalone mode or coarse-grained Mesos mode leads to
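Both reductions are configuration switches rather than code changes; a hedged Scala sketch (the app name is illustrative, and Kryo is assumed here to be enabled through the standard spark.serializer setting):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("LowLatencyStreaming")
      // Kryo generally yields smaller serialized payloads than Java serialization.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // Coarse-grained Mesos mode starts executors up front, avoiding per-task launch cost.
      .set("spark.mesos.coarse", "true")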