author    Stephen Haberman <stephen@exigencecorp.com>    2013-02-24 22:08:14 -0600
committer Stephen Haberman <stephen@exigencecorp.com>    2013-02-24 22:08:14 -0600
commit    44032bc476be9f334d17db3b8963a8deb973123c (patch)
tree      e1c24996382633675ef78d0a5bc39ec007982f02 /docs
parent    4281e579c236d0125f44f5ca1d999adb5f894c24 (diff)
parent    3b9f929467f3b14e780df459919a4d6c0c7ee772 (diff)
Merge branch 'master' into bettersplits
Conflicts:
    core/src/main/scala/spark/RDD.scala
    core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
    core/src/test/scala/spark/ShuffleSuite.scala
Diffstat (limited to 'docs')
-rw-r--r--  docs/_config.yml                1
-rw-r--r--  docs/configuration.md          10
-rw-r--r--  docs/contributing-to-spark.md   2
-rw-r--r--  docs/spark-standalone.md        8
-rw-r--r--  docs/tuning.md                  2
5 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/docs/_config.yml b/docs/_config.yml
index 2bd2eecc86..09617e4a1e 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -7,3 +7,4 @@ SPARK_VERSION: 0.7.0-SNAPSHOT
SPARK_VERSION_SHORT: 0.7.0
SCALA_VERSION: 2.9.2
MESOS_VERSION: 0.9.0-incubating
+SPARK_ISSUE_TRACKER_URL: https://spark-project.atlassian.net
diff --git a/docs/configuration.md b/docs/configuration.md
index a7054b4321..f1ca77aa78 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -198,6 +198,14 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
+ <td>spark.worker.timeout</td>
+ <td>60</td>
+ <td>
+ Number of seconds after which the standalone deploy master considers a worker lost if it
+ receives no heartbeats.
+ </td>
+</tr>
+<tr>
<td>spark.akka.frameSize</td>
<td>10</td>
<td>
@@ -218,7 +226,7 @@ Apart from these, the following properties are also available, and may be useful
<td>spark.akka.timeout</td>
<td>20</td>
<td>
- Communication timeout between Spark nodes.
+ Communication timeout between Spark nodes, in seconds.
</td>
</tr>
<tr>
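In this era of Spark, settings like these are plain Java system properties, read once at startup. Below is a minimal sketch of adjusting the driver-side timeout, assuming the 0.7-style `spark.SparkContext` API (the values are illustrative, not recommendations). Note that `spark.worker.timeout` is read by the standalone master daemon rather than the driver, so it belongs in the master's JVM options instead; see the spark-env.sh sketch after the standalone table below.

```scala
import spark.SparkContext

object TimeoutSketch {
  def main(args: Array[String]) {
    // Illustrative value: raise the node-to-node communication timeout
    // from its 20-second default. This must happen before the
    // SparkContext is created, since the property is read once at startup.
    System.setProperty("spark.akka.timeout", "60")

    val sc = new SparkContext("local", "Timeout Sketch")
    try {
      // ... run jobs ...
    } finally {
      sc.stop()
    }
  }
}
```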
diff --git a/docs/contributing-to-spark.md b/docs/contributing-to-spark.md
index c6e01c62d8..50feeb2d6c 100644
--- a/docs/contributing-to-spark.md
+++ b/docs/contributing-to-spark.md
@@ -15,7 +15,7 @@ The Spark team welcomes contributions in the form of GitHub pull requests. Here
But first, make sure that you have [configured a spark-env.sh](configuration.html) with at least
`SCALA_HOME`, as some of the tests try to spawn subprocesses using this.
- Add new unit tests for your code. We use [ScalaTest](http://www.scalatest.org/) for testing. Just add a new Suite in `core/src/test`, or methods to an existing Suite.
-- If you'd like to report a bug but don't have time to fix it, you can still post it to our [issues page](https://github.com/mesos/spark/issues), or email the [mailing list](http://www.spark-project.org/mailing-lists.html).
+- If you'd like to report a bug but don't have time to fix it, you can still post it to our [issue tracker]({{site.SPARK_ISSUE_TRACKER_URL}}), or email the [mailing list](http://www.spark-project.org/mailing-lists.html).
# Licensing of Contributions
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index bf296221b8..3986c0c79d 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -115,6 +115,14 @@ You can optionally configure the cluster further by setting environment variable
<td><code>SPARK_WORKER_WEBUI_PORT</code></td>
<td>Port for the worker web UI (default: 8081)</td>
</tr>
+ <tr>
+ <td><code>SPARK_DAEMON_MEMORY</code></td>
+ <td>Memory to allocate to the Spark master and worker daemons themselves (default: 512m)</td>
+ </tr>
+ <tr>
+ <td><code>SPARK_DAEMON_JAVA_OPTS</code></td>
+ <td>JVM options for the Spark master and worker daemons themselves (default: none)</td>
+ </tr>
</table>
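A sketch of how these two variables might look in `conf/spark-env.sh` on the master and worker machines (the values are hypothetical; `SPARK_DAEMON_JAVA_OPTS` is also a natural home for master-side properties such as `spark.worker.timeout` from the configuration table above):

```sh
# conf/spark-env.sh -- hypothetical values for the standalone daemons.

# Give the master and worker JVMs 1 GB each instead of the default 512m.
SPARK_DAEMON_MEMORY=1g

# Extra JVM options for the daemons, e.g. letting workers miss
# heartbeats for up to 120 seconds before the master marks them lost.
SPARK_DAEMON_JAVA_OPTS="-Dspark.worker.timeout=120"
```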
diff --git a/docs/tuning.md b/docs/tuning.md
index e9b4d6717c..843380b9a2 100644
--- a/docs/tuning.md
+++ b/docs/tuning.md
@@ -233,7 +233,7 @@ number of cores in your clusters.
## Broadcasting Large Variables
-Using the [broadcast functionality](scala-programming-guide#broadcast-variables)
+Using the [broadcast functionality](scala-programming-guide.html#broadcast-variables)
available in `SparkContext` can greatly reduce the size of each serialized task, and the cost
of launching a job over a cluster. If your tasks use any large object from the driver program
inside of them (e.g. a static lookup table), consider turning it into a broadcast variable.
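As a concrete sketch of that advice, assuming the 0.7-era Scala API (the lookup table and job are invented for illustration):

```scala
import spark.SparkContext

object BroadcastSketch {
  def main(args: Array[String]) {
    val sc = new SparkContext("local", "Broadcast Sketch")

    // An invented "large" static lookup table living on the driver.
    val lookup: Map[Int, String] =
      (1 to 100000).map(i => i -> ("value" + i)).toMap

    // Referencing `lookup` directly inside a closure would serialize the
    // whole map into every task; broadcasting ships it to each node once.
    val bcLookup = sc.broadcast(lookup)

    val resolved = sc.parallelize(1 to 1000)
                     .map(k => bcLookup.value(k))
                     .collect()

    println(resolved.take(5).mkString(", "))
    sc.stop()
  }
}
```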