From 426042ad24a54b4b776085cbf4e1896464efc613 Mon Sep 17 00:00:00 2001
From: Thomas Graves
Date: Thu, 27 Mar 2014 11:54:43 -0500
Subject: SPARK-1330 removed extra echo from comput_classpath.sh

Remove the extra echo, which prevents spark-class from working. Note that I
did not update the comment above it, which is also wrong, because I'm not sure
what it should say: should Hive only be included if explicitly built with
`sbt hive/assembly`, or should `sbt assembly` build it?

Author: Thomas Graves

Closes #241 from tgravescs/SPARK-1330 and squashes the following commits:

b10d708 [Thomas Graves] SPARK-1330 removed extra echo from comput_classpath.sh
---
 bin/compute-classpath.sh | 1 -
 1 file changed, 1 deletion(-)

diff --git a/bin/compute-classpath.sh b/bin/compute-classpath.sh
index d6f1ff9084..bef42df71c 100755
--- a/bin/compute-classpath.sh
+++ b/bin/compute-classpath.sh
@@ -36,7 +36,6 @@ CLASSPATH="$SPARK_CLASSPATH:$FWDIR/conf"
 # Hopefully we will find a way to avoid uber-jars entirely and deploy only the needed packages in
 # the future.
 if [ -f "$FWDIR"/sql/hive/target/scala-$SCALA_VERSION/spark-hive-assembly-*.jar ]; then
-  echo "Hive assembly found, including hive support. If this isn't desired run sbt hive/clean."
   # Datanucleus jars do not work if only included in the uberjar as plugin.xml metadata is lost.
   DATANUCLEUSJARS=$(JARS=("$FWDIR/lib_managed/jars"/datanucleus-*.jar); IFS=:; echo "${JARS[*]}")
--
cgit v1.2.3
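The bug being fixed is worth spelling out: compute-classpath.sh communicates its result by printing the classpath on stdout, and the caller captures that output, so any extra `echo` gets mixed into the captured classpath string. The sketch below is a minimal illustration of this failure mode, not the actual Spark scripts; the function names and the example classpath are hypothetical.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: a script whose stdout IS its return value must not
# print diagnostics to stdout.

compute_classpath_buggy() {
  # Diagnostic message leaks into stdout and pollutes the captured value.
  echo "Hive assembly found, including hive support."
  echo "/opt/spark/conf:/opt/spark/assembly.jar"   # hypothetical classpath
}

compute_classpath_fixed() {
  # Diagnostics, if kept at all, belong on stderr (>&2), not stdout.
  echo "Hive assembly found, including hive support." >&2
  echo "/opt/spark/conf:/opt/spark/assembly.jar"   # hypothetical classpath
}

# Callers capture stdout, the way spark-class captures compute-classpath.sh:
CP_BUGGY=$(compute_classpath_buggy)
CP_FIXED=$(compute_classpath_fixed 2>/dev/null)
echo "buggy captured: $CP_BUGGY"
echo "fixed captured: $CP_FIXED"
```

The patch takes the simpler route of deleting the echo outright rather than redirecting it to stderr, which also sidesteps the stale wording the commit message mentions.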
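The surviving context line in the diff uses a compact bash idiom to build the `DATANUCLEUSJARS` list: a glob is expanded into an array, and `"${JARS[*]}"` joins the elements with the first character of `IFS`. Running the whole thing inside `$( ... )` keeps the `IFS` change confined to the subshell. A standalone demonstration, using a throwaway directory and hypothetical jar names rather than Spark's `lib_managed` layout:

```shell
#!/usr/bin/env bash
# Demonstrate the glob-into-array + IFS colon-join idiom from the diff context.

# Hypothetical jar files to stand in for $FWDIR/lib_managed/jars/datanucleus-*.jar
mkdir -p /tmp/demo_jars
touch /tmp/demo_jars/datanucleus-a.jar /tmp/demo_jars/datanucleus-b.jar

# Inside $( ... ): expand the glob into an array, set IFS to ':', and let
# "${JARS[*]}" join the array elements with that separator. The IFS change
# dies with the subshell, so the parent script's word splitting is untouched.
DATANUCLEUSJARS=$(JARS=(/tmp/demo_jars/datanucleus-*.jar); IFS=:; echo "${JARS[*]}")

echo "$DATANUCLEUSJARS"
# → /tmp/demo_jars/datanucleus-a.jar:/tmp/demo_jars/datanucleus-b.jar
```

The array form is safer than joining with `tr` or a loop because each glob match stays a single element even if a filename contained whitespace.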