author     Matei Zaharia <matei@eecs.berkeley.edu>    2013-02-06 14:34:46 -0800
committer  Matei Zaharia <matei@eecs.berkeley.edu>    2013-02-10 21:59:41 -0800
commit     05d2e94838d5b728df203d87708beaf3f4aa4c81 (patch)
tree       e20e3be665e13fab837d973ba20e1700676582e6
parent     8c66c4996220e7ea77aa9e307a744635b9576e5e (diff)
Use a separate memory setting for standalone cluster daemons
Conflicts:
docs/_config.yml
-rw-r--r--  docs/configuration.md     | 10
-rw-r--r--  docs/spark-standalone.md  |  8
-rwxr-xr-x  run                       | 12
3 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index a7054b4321..f1ca77aa78 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -198,6 +198,14 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
+  <td>spark.worker.timeout</td>
+  <td>60</td>
+  <td>
+    Number of seconds after which the standalone deploy master considers a worker lost if it
+    receives no heartbeats.
+  </td>
+</tr>
+<tr>
   <td>spark.akka.frameSize</td>
   <td>10</td>
   <td>
@@ -218,7 +226,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>spark.akka.timeout</td>
   <td>20</td>
   <td>
-    Communication timeout between Spark nodes.
+    Communication timeout between Spark nodes, in seconds.
   </td>
 </tr>
 <tr>
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index bf296221b8..3986c0c79d 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -115,6 +115,14 @@ You can optionally configure the cluster further by setting environment variable
     <td><code>SPARK_WORKER_WEBUI_PORT</code></td>
     <td>Port for the worker web UI (default: 8081)</td>
   </tr>
+  <tr>
+    <td><code>SPARK_DAEMON_MEMORY</code></td>
+    <td>Memory to allocate to the Spark master and worker daemons themselves (default: 512m)</td>
+  </tr>
+  <tr>
+    <td><code>SPARK_DAEMON_JAVA_OPTS</code></td>
+    <td>JVM options for the Spark master and worker daemons themselves (default: none)</td>
+  </tr>
 </table>
diff --git a/run b/run
--- a/run
+++ b/run
@@ -13,6 +13,18 @@ if [ -e $FWDIR/conf/spark-env.sh ] ; then
   . $FWDIR/conf/spark-env.sh
 fi
+if [ -z "$1" ]; then
+  echo "Usage: run <spark-class> [<args>]" >&2
+  exit 1
+fi
+
+# If this is a standalone cluster daemon, reset SPARK_JAVA_OPTS and SPARK_MEM to reasonable
+# values for that; it doesn't need a lot
+if [ "$1" = "spark.deploy.master.Master" -o "$1" = "spark.deploy.worker.Worker" ]; then
+  SPARK_MEM=${SPARK_DAEMON_MEMORY:-512m}
+  SPARK_JAVA_OPTS=$SPARK_DAEMON_JAVA_OPTS   # Empty by default
+fi
+
 if [ "$SPARK_LAUNCH_WITH_SCALA" == "1" ]; then
   if [ `command -v scala` ]; then
     RUNNER="scala"
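The key mechanism in the `run` script change is POSIX parameter expansion with a default: `${SPARK_DAEMON_MEMORY:-512m}` yields the value of `SPARK_DAEMON_MEMORY` when it is set and non-empty, and the literal `512m` otherwise, so the daemons get a modest fixed heap unless the operator overrides it. A minimal standalone sketch of that pattern (the variable names mirror the diff, but this script is illustrative, not part of the commit):

```shell
#!/bin/sh
# Demonstrate the ${VAR:-default} expansion used to pick the daemon heap size.

# Case 1: operator has not set SPARK_DAEMON_MEMORY -> fall back to 512m
unset SPARK_DAEMON_MEMORY
SPARK_MEM=${SPARK_DAEMON_MEMORY:-512m}
echo "unset -> $SPARK_MEM"     # prints: unset -> 512m

# Case 2: operator sets it (e.g. in conf/spark-env.sh) -> override wins
SPARK_DAEMON_MEMORY=1g
SPARK_MEM=${SPARK_DAEMON_MEMORY:-512m}
echo "set   -> $SPARK_MEM"     # prints: set   -> 1g
```

Note that `:-` also applies the default when the variable is set but empty; plain `-` (as in `${VAR-default}`) would keep an empty value, which is usually not what you want for a memory setting.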