author     Reynold Xin <rxin@databricks.com>    2016-02-26 22:35:12 -0800
committer  Reynold Xin <rxin@databricks.com>    2016-02-26 22:35:12 -0800
commit     59e3e10be2f9a1c53979ca72c038adb4fa17ca64 (patch)
tree       3d6b2246738484273d36d0ccbec66b733930a3e0 /sbin/start-slaves.sh
parent     f77dc4e1e202942aa8393fb5d8f492863973fe17 (diff)
[SPARK-13521][BUILD] Remove reference to Tachyon in cluster & release scripts
## What changes were proposed in this pull request?

We provide a very limited set of cluster management scripts in Spark for Tachyon, although Tachyon itself provides a much better version of them. Given that Spark users can now simply use Tachyon as a normal file system without extensive configuration, we can remove these management capabilities to simplify Spark's bash scripts. Note that this also reduces coupling between a third-party external system and Spark's release scripts, and eliminates the possibility of failures such as Tachyon being renamed or its tarballs being relocated.

## How was this patch tested?

N/A

Author: Reynold Xin <rxin@databricks.com>

Closes #11400 from rxin/release-script.
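A minimal sketch of the workflow this change assumes: operators manage Tachyon with Tachyon's own scripts and then start Spark workers as usual. The Tachyon commands below mirror the ones removed in the diff; `$TACHYON_HOME`, the `all` argument, and the use of the local hostname are placeholders that depend on the Tachyon release being deployed.

```bash
#!/usr/bin/env bash
# Sketch only: bring up Tachyon from its own installation, then the Spark workers.
# TACHYON_HOME is a placeholder for a standalone Tachyon install.

# Start the Tachyon cluster with Tachyon's own management scripts
# (roughly what the removed --with-tachyon code path used to invoke).
"${TACHYON_HOME}/bin/tachyon" bootstrap-conf "$(hostname)"
"${TACHYON_HOME}/bin/tachyon-start.sh" all SudoMount

# Start the Spark workers; start-slaves.sh no longer knows about Tachyon.
"${SPARK_HOME}/sbin/start-slaves.sh"
```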
Diffstat (limited to 'sbin/start-slaves.sh')
-rwxr-xr-x  sbin/start-slaves.sh | 22 ----------------------
1 file changed, 0 insertions(+), 22 deletions(-)
diff --git a/sbin/start-slaves.sh b/sbin/start-slaves.sh
index 51ca81e053..5bf2b83b42 100755
--- a/sbin/start-slaves.sh
+++ b/sbin/start-slaves.sh
@@ -23,21 +23,6 @@ if [ -z "${SPARK_HOME}" ]; then
export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi
-START_TACHYON=false
-
-while (( "$#" )); do
-case $1 in
- --with-tachyon)
- if [ ! -e "${SPARK_HOME}/sbin"/../tachyon/bin/tachyon ]; then
- echo "Error: --with-tachyon specified, but tachyon not found."
- exit -1
- fi
- START_TACHYON=true
- ;;
- esac
-shift
-done
-
. "${SPARK_HOME}/sbin/spark-config.sh"
. "${SPARK_HOME}/bin/load-spark-env.sh"
@@ -50,12 +35,5 @@ if [ "$SPARK_MASTER_IP" = "" ]; then
SPARK_MASTER_IP="`hostname`"
fi
-if [ "$START_TACHYON" == "true" ]; then
- "${SPARK_HOME}/sbin/slaves.sh" cd "${SPARK_HOME}" \; "${SPARK_HOME}/sbin"/../tachyon/bin/tachyon bootstrap-conf "$SPARK_MASTER_IP"
-
- # set -t so we can call sudo
- SPARK_SSH_OPTS="-o StrictHostKeyChecking=no -t" "${SPARK_HOME}/sbin/slaves.sh" cd "${SPARK_HOME}" \; "${SPARK_HOME}/tachyon/bin/tachyon-start.sh" worker SudoMount \; sleep 1
-fi
-
# Launch the slaves
"${SPARK_HOME}/sbin/slaves.sh" cd "${SPARK_HOME}" \; "${SPARK_HOME}/sbin/start-slave.sh" "spark://$SPARK_MASTER_IP:$SPARK_MASTER_PORT"