author     jerryshao <sshao@hortonworks.com>    2015-11-04 10:49:34 +0000
committer  Sean Owen <sowen@cloudera.com>       2015-11-04 10:49:34 +0000
commit     8aff36e91de0fee2f3f56c6d240bb203b5bb48ba (patch)
tree       0afdded361cb75e7658053953abfdb484da78ced /bin/run-example
parent     2692bdb7dbf36d6247f595d5fd0cb9cda89e1fdd (diff)
[SPARK-2960][DEPLOY] Support executing Spark from symlinks (reopen)
This PR is based on the work of roji to support running Spark scripts from symlinks. Thanks for the great work, roji. Would you mind taking a look at this PR? Thanks a lot.

Distributions such as HDP normally expose the Spark executables as symlinks placed on `PATH`, but Spark's current scripts do not recursively resolve a symlink to its real path, so Spark fails to execute when invoked through one. This PR solves the issue by deriving the absolute path from the symlink. Unlike the earlier attempt (https://github.com/apache/spark/pull/2386), it avoids `readlink -f`, because the `-f` flag is not supported on Mac; instead the path is resolved manually in a loop. I've tested on Mac and Linux (CentOS); it looks fine.

This PR does not change the scripts under the `sbin` folder; not sure whether those need to be fixed as well. Please help to review; any comment is greatly appreciated.

Author: jerryshao <sshao@hortonworks.com>
Author: Shay Rojansky <roji@roji.org>

Closes #8669 from jerryshao/SPARK-2960.
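For context, the manual resolution the message describes is the classic portable loop for following a chain of symlinks without GNU-only `readlink -f`. The sketch below illustrates that technique under the same constraints (bash, plain `readlink`, which exists on both Linux and Mac); it is an illustration, not the exact code added by this PR:

    # Follow $0 through any chain of symlinks without `readlink -f`,
    # which the BSD readlink shipped on Mac does not support.
    SOURCE="$0"
    while [ -h "$SOURCE" ]; do
      # Directory the current link lives in; `cd -P` also resolves
      # symlinked parent directories.
      DIR="$(cd -P "$(dirname "$SOURCE")" && pwd)"
      # Plain `readlink` (no -f) works on both Linux and Mac.
      SOURCE="$(readlink "$SOURCE")"
      # A relative link target is relative to the link's own directory.
      [[ "$SOURCE" != /* ]] && SOURCE="$DIR/$SOURCE"
    done
    # The launch scripts live in bin/, so their parent is the Spark home.
    SPARK_HOME="$(cd -P "$(dirname "$SOURCE")/.." && pwd)"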
Diffstat (limited to 'bin/run-example')
-rwxr-xr-x  bin/run-example  18
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/bin/run-example b/bin/run-example
index 798e2caeb8..e1b0d5789b 100755
--- a/bin/run-example
+++ b/bin/run-example
@@ -17,11 +17,13 @@
# limitations under the License.
#
-FWDIR="$(cd "`dirname "$0"`"/..; pwd)"
-export SPARK_HOME="$FWDIR"
-EXAMPLES_DIR="$FWDIR"/examples
+if [ -z "${SPARK_HOME}" ]; then
+ export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
+fi
+
+EXAMPLES_DIR="${SPARK_HOME}"/examples
-. "$FWDIR"/bin/load-spark-env.sh
+. "${SPARK_HOME}"/bin/load-spark-env.sh
if [ -n "$1" ]; then
EXAMPLE_CLASS="$1"
@@ -34,8 +36,8 @@ else
exit 1
fi
-if [ -f "$FWDIR/RELEASE" ]; then
- JAR_PATH="${FWDIR}/lib"
+if [ -f "${SPARK_HOME}/RELEASE" ]; then
+ JAR_PATH="${SPARK_HOME}/lib"
else
JAR_PATH="${EXAMPLES_DIR}/target/scala-${SPARK_SCALA_VERSION}"
fi
@@ -44,7 +46,7 @@ JAR_COUNT=0
for f in "${JAR_PATH}"/spark-examples-*hadoop*.jar; do
if [[ ! -e "$f" ]]; then
- echo "Failed to find Spark examples assembly in $FWDIR/lib or $FWDIR/examples/target" 1>&2
+ echo "Failed to find Spark examples assembly in ${SPARK_HOME}/lib or ${SPARK_HOME}/examples/target" 1>&2
echo "You need to build Spark before running this program" 1>&2
exit 1
fi
@@ -67,7 +69,7 @@ if [[ ! $EXAMPLE_CLASS == org.apache.spark.examples* ]]; then
EXAMPLE_CLASS="org.apache.spark.examples.$EXAMPLE_CLASS"
fi
-exec "$FWDIR"/bin/spark-submit \
+exec "${SPARK_HOME}"/bin/spark-submit \
--master $EXAMPLE_MASTER \
--class $EXAMPLE_CLASS \
"$SPARK_EXAMPLES_JAR" \