From 0ee44c225e38abbf3382be6e9555ab9a35424a54 Mon Sep 17 00:00:00 2001
From: Denny
Date: Wed, 1 Aug 2012 13:17:31 -0700
Subject: Spark standalone mode cluster scripts. Heavily inspired by Hadoop
 cluster scripts ;-)

---
 conf/slaves                | 2 ++
 conf/spark-env.sh.template | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
 create mode 100644 conf/slaves

diff --git a/conf/slaves b/conf/slaves
new file mode 100644
index 0000000000..6e315a8540
--- /dev/null
+++ b/conf/slaves
@@ -0,0 +1,2 @@
+# A Spark Worker will be started on each of the machines listes below.
+localhost
\ No newline at end of file
diff --git a/conf/spark-env.sh.template b/conf/spark-env.sh.template
index 532a635a1b..c09af42717 100755
--- a/conf/spark-env.sh.template
+++ b/conf/spark-env.sh.template
@@ -9,5 +9,5 @@
 # - SPARK_MEM, to change the amount of memory used per node (this should
 #   be in the same format as the JVM's -Xmx option, e.g. 300m or 1g).
 # - SPARK_LIBRARY_PATH, to add extra search paths for native libraries.
-
+# - SPARK_MASTER_PORT, to start the spark master on a different port (standalone mode only)
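For context, a filled-in `conf/slaves` and `conf/spark-env.sh` using the settings this patch documents might look like the following sketch. The hostnames, path, and port values are illustrative placeholders, not part of the patch:

```shell
# conf/slaves -- one worker hostname per line (hostnames below are hypothetical)
# worker1.example.com
# worker2.example.com

# conf/spark-env.sh -- example values for the variables the template describes
export SPARK_MEM=1g                     # per-node memory, JVM -Xmx format (e.g. 300m or 1g)
export SPARK_LIBRARY_PATH=/opt/native   # extra native library search path (hypothetical path)
export SPARK_MASTER_PORT=7078           # non-default master port (standalone mode only)
```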