authorSean Owen <sowen@cloudera.com>2014-05-05 10:33:49 -0700
committerPatrick Wendell <pwendell@gmail.com>2014-05-05 10:33:49 -0700
commit73b0cbcc241cca3d318ff74340e80b02f884acbd (patch)
tree8fd0bb9377bac871ed72daf9400911382eb5a99e
parentf2eb070acc81e60096ee8d4ddf8da2b24a11da72 (diff)
downloadspark-73b0cbcc241cca3d318ff74340e80b02f884acbd.tar.gz
spark-73b0cbcc241cca3d318ff74340e80b02f884acbd.tar.bz2
spark-73b0cbcc241cca3d318ff74340e80b02f884acbd.zip
SPARK-1556. jets3t dep doesn't update properly with newer Hadoop versions
See related discussion at https://github.com/apache/spark/pull/468

This PR may still overstep what you have in mind, but let me put it on the table to start. Besides fixing the issue, it has one substantive change, and that is to manage Hadoop-specific things only in Hadoop-related profiles. This does _not_ remove `yarn.version`.

- Moves the YARN and Hadoop profiles together in pom.xml. Sorry that this makes the diff a little hard to grok but the changes are only as follows.
- Removes `hadoop.major.version`
- Introduces `hadoop-2.2` and `hadoop-2.3` profiles to control Hadoop-specific changes:
  - like the protobuf version issue - this was only 'solved' now by enabling YARN for 2.2+, which is really an orthogonal issue
  - like the jets3t version issue now
- Hadoop profiles set an appropriate default `hadoop.version`, that can be overridden
  - _(YARN profiles in the parent now only exist to add the sub-module)_
- Fixes the jets3t dependency issue
  - and makes it a runtime dependency
  - and centralizes config of this dependency in the parent pom
- Updates build docs
- Updates SBT build too
  - and fixes a regex problem along the way

Author: Sean Owen <sowen@cloudera.com>

Closes #629 from srowen/SPARK-1556 and squashes the following commits:

c3fa967 [Sean Owen] Fix hadoop-2.4 profile typo in doc
a2105fd [Sean Owen] Add hadoop-2.4 profile and don't set hadoop.version in profiles
274f4f9 [Sean Owen] Make jets3t a runtime dependency, and bring its exclusion up into parent config
bbed826 [Sean Owen] Use jets3t 0.9.0 for Hadoop 2.3+ (and correct similar regex issue in SBT build)
f21f356 [Sean Owen] Build changes to set up for jets3t fix
-rw-r--r--  core/pom.xml                  6
-rw-r--r--  docs/building-with-maven.md  51
-rw-r--r--  pom.xml                      84
-rw-r--r--  project/SparkBuild.scala      5

4 files changed, 94 insertions, 52 deletions
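The "regex problem" the commit message mentions is worth spelling out: `findFirstIn` searches anywhere in the string, so an unanchored, unescaped pattern like `"2.[2-9]+"` can match a substring of an older version number. A minimal sketch of the anchored checks (the version string "1.2.3" is a hypothetical value chosen only to illustrate the false positive):

```scala
// Sketch of the version checks from SparkBuild.scala after this commit.
// The old unanchored pattern "2.[2-9]+" can match inside a version string
// ("1.2.3" contains the substring "2.3", with '.' matching any character),
// whereas "^2\\.[2-9]+" only matches versions actually starting with 2.2+.
object VersionCheck {
  def isNewHadoop(hadoopVersion: String): Boolean =
    "^2\\.[2-9]+".r.findFirstIn(hadoopVersion).isDefined

  // jets3t 0.9.0 is required for Hadoop 2.3 and later; 0.7.1 otherwise
  def jets3tVersion(hadoopVersion: String): String =
    if ("^2\\.[3-9]+".r.findFirstIn(hadoopVersion).isDefined) "0.9.0" else "0.7.1"

  def main(args: Array[String]): Unit = {
    assert(isNewHadoop("2.2.0"))
    assert(!isNewHadoop("1.2.3")) // old unanchored "2.[2-9]+" would match "2.3" here
    assert(jets3tVersion("2.3.0") == "0.9.0")
    assert(jets3tVersion("2.2.0") == "0.7.1")
  }
}
```

The same anchoring applies to the new `jets3tVersion` selection, which mirrors the Maven side where the `hadoop-2.3` and `hadoop-2.4` profiles override `jets3t.version` to 0.9.0.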
diff --git a/core/pom.xml b/core/pom.xml
index 36c71e67b5..c24c7be204 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -38,12 +38,6 @@
<dependency>
<groupId>net.java.dev.jets3t</groupId>
<artifactId>jets3t</artifactId>
- <exclusions>
- <exclusion>
- <groupId>commons-logging</groupId>
- <artifactId>commons-logging</artifactId>
- </exclusion>
- </exclusions>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
diff --git a/docs/building-with-maven.md b/docs/building-with-maven.md
index e447dfea3b..cac01ded60 100644
--- a/docs/building-with-maven.md
+++ b/docs/building-with-maven.md
@@ -29,9 +29,22 @@ You can fix this by setting the `MAVEN_OPTS` variable as discussed before.
## Specifying the Hadoop version ##
-Because HDFS is not protocol-compatible across versions, if you want to read from HDFS, you'll need to build Spark against the specific HDFS version in your environment. You can do this through the "hadoop.version" property. If unset, Spark will build against Hadoop 1.0.4 by default.
-
-For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:
+Because HDFS is not protocol-compatible across versions, if you want to read from HDFS, you'll need to build Spark against the specific HDFS version in your environment. You can do this through the "hadoop.version" property. If unset, Spark will build against Hadoop 1.0.4 by default. Note that certain build profiles are required for particular Hadoop versions:
+
+<table class="table">
+ <thead>
+ <tr><th>Hadoop version</th><th>Profile required</th></tr>
+ </thead>
+ <tbody>
+ <tr><td>0.23.x</td><td>hadoop-0.23</td></tr>
+ <tr><td>1.x to 2.1.x</td><td>(none)</td></tr>
+ <tr><td>2.2.x</td><td>hadoop-2.2</td></tr>
+ <tr><td>2.3.x</td><td>hadoop-2.3</td></tr>
+ <tr><td>2.4.x</td><td>hadoop-2.4</td></tr>
+ </tbody>
+</table>
+
+For Apache Hadoop versions 1.x, Cloudera CDH "mr1" distributions, and other Hadoop versions without YARN, use:
# Apache Hadoop 1.2.1
$ mvn -Dhadoop.version=1.2.1 -DskipTests clean package
@@ -42,22 +55,40 @@ For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions wit
# Apache Hadoop 0.23.x
$ mvn -Phadoop-0.23 -Dhadoop.version=0.23.7 -DskipTests clean package
-For Apache Hadoop 2.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, you can enable the "yarn-alpha" or "yarn" profile and set the "hadoop.version", "yarn.version" property. Note that Hadoop 0.23.X requires a special `-Phadoop-0.23` profile:
+For Apache Hadoop 2.x, 0.23.x, Cloudera CDH, and other Hadoop versions with YARN, you can enable the "yarn-alpha" or "yarn" profile and optionally set the "yarn.version" property if it is different from "hadoop.version". The additional build profile required depends on the YARN version:
+
+<table class="table">
+ <thead>
+ <tr><th>YARN version</th><th>Profile required</th></tr>
+ </thead>
+ <tbody>
+ <tr><td>0.23.x to 2.1.x</td><td>yarn-alpha</td></tr>
+ <tr><td>2.2.x and later</td><td>yarn</td></tr>
+ </tbody>
+</table>
+
+Examples:
# Apache Hadoop 2.0.5-alpha
$ mvn -Pyarn-alpha -Dhadoop.version=2.0.5-alpha -DskipTests clean package
- # Cloudera CDH 4.2.0 with MapReduce v2
+ # Cloudera CDH 4.2.0
$ mvn -Pyarn-alpha -Dhadoop.version=2.0.0-cdh4.2.0 -DskipTests clean package
- # Apache Hadoop 2.2.X (e.g. 2.2.0 as below) and newer
- $ mvn -Pyarn -Dhadoop.version=2.2.0 -DskipTests clean package
-
# Apache Hadoop 0.23.x
- $ mvn -Pyarn-alpha -Phadoop-0.23 -Dhadoop.version=0.23.7 -Dyarn.version=0.23.7 -DskipTests clean package
+ $ mvn -Pyarn-alpha -Phadoop-0.23 -Dhadoop.version=0.23.7 -DskipTests clean package
+
+ # Apache Hadoop 2.2.X
+ $ mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package
+
+ # Apache Hadoop 2.3.X
+ $ mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package
+
+ # Apache Hadoop 2.4.X
+ $ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
# Different versions of HDFS and YARN.
- $ mvn -Pyarn-alpha -Dhadoop.version=2.3.0 -Dyarn.version=0.23.7 -DskipTests clean package
+ $ mvn -Pyarn-alpha -Phadoop-2.3 -Dhadoop.version=2.3.0 -Dyarn.version=0.23.7 -DskipTests clean package
## Spark Tests in Maven ##
diff --git a/pom.xml b/pom.xml
index 08c3ac6443..e4b5c36d69 100644
--- a/pom.xml
+++ b/pom.xml
@@ -129,6 +129,7 @@
<chill.version>0.3.6</chill.version>
<codahale.metrics.version>3.0.0</codahale.metrics.version>
<avro.version>1.7.4</avro.version>
+ <jets3t.version>0.7.1</jets3t.version>
<PermGen>64m</PermGen>
<MaxPermGen>512m</MaxPermGen>
@@ -560,10 +561,18 @@
</exclusion>
</exclusions>
</dependency>
+ <!-- See SPARK-1556 for info on this dependency: -->
<dependency>
<groupId>net.java.dev.jets3t</groupId>
<artifactId>jets3t</artifactId>
- <version>0.7.1</version>
+ <version>${jets3t.version}</version>
+ <scope>runtime</scope>
+ <exclusions>
+ <exclusion>
+ <groupId>commons-logging</groupId>
+ <artifactId>commons-logging</artifactId>
+ </exclusion>
+ </exclusions>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
@@ -843,36 +852,6 @@
</build>
<profiles>
- <!-- SPARK-1121: Adds an explicit dependency on Avro to work around a Hadoop 0.23.X issue -->
- <profile>
- <id>hadoop-0.23</id>
- <dependencies>
- <dependency>
- <groupId>org.apache.avro</groupId>
- <artifactId>avro</artifactId>
- </dependency>
- </dependencies>
- </profile>
-
- <profile>
- <id>yarn-alpha</id>
- <properties>
- <hadoop.major.version>2</hadoop.major.version>
- <!-- 0.23.* is same as 2.0.* - except hardened to run production jobs -->
- <hadoop.version>0.23.7</hadoop.version>
- <!--<hadoop.version>2.0.5-alpha</hadoop.version> -->
- </properties>
- <dependencies>
- <dependency>
- <groupId>org.apache.avro</groupId>
- <artifactId>avro</artifactId>
- </dependency>
- </dependencies>
- <modules>
- <module>yarn</module>
- </modules>
-
- </profile>
<!-- Ganglia integration is not included by default due to LGPL-licensed code -->
<profile>
@@ -907,17 +886,54 @@
</profile>
+ <!-- A series of build profiles where customizations for particular Hadoop releases can be made -->
+
<profile>
- <id>yarn</id>
+ <id>hadoop-0.23</id>
+ <!-- SPARK-1121: Adds an explicit dependency on Avro to work around a Hadoop 0.23.X issue -->
+ <dependencies>
+ <dependency>
+ <groupId>org.apache.avro</groupId>
+ <artifactId>avro</artifactId>
+ </dependency>
+ </dependencies>
+ </profile>
+
+ <profile>
+ <id>hadoop-2.2</id>
+ <properties>
+ <protobuf.version>2.5.0</protobuf.version>
+ </properties>
+ </profile>
+
+ <profile>
+ <id>hadoop-2.3</id>
<properties>
- <hadoop.major.version>2</hadoop.major.version>
- <hadoop.version>2.2.0</hadoop.version>
<protobuf.version>2.5.0</protobuf.version>
+ <jets3t.version>0.9.0</jets3t.version>
</properties>
+ </profile>
+
+ <profile>
+ <id>hadoop-2.4</id>
+ <properties>
+ <protobuf.version>2.5.0</protobuf.version>
+ <jets3t.version>0.9.0</jets3t.version>
+ </properties>
+ </profile>
+
+ <profile>
+ <id>yarn-alpha</id>
<modules>
<module>yarn</module>
</modules>
+ </profile>
+ <profile>
+ <id>yarn</id>
+ <modules>
+ <module>yarn</module>
+ </modules>
</profile>
<!-- Build without Hadoop dependencies that are included in some runtime environments. -->
diff --git a/project/SparkBuild.scala b/project/SparkBuild.scala
index 19aa3c0607..a2597e3e6d 100644
--- a/project/SparkBuild.scala
+++ b/project/SparkBuild.scala
@@ -95,7 +95,7 @@ object SparkBuild extends Build {
lazy val hadoopVersion = Properties.envOrElse("SPARK_HADOOP_VERSION", DEFAULT_HADOOP_VERSION)
lazy val isNewHadoop = Properties.envOrNone("SPARK_IS_NEW_HADOOP") match {
case None => {
- val isNewHadoopVersion = "2.[2-9]+".r.findFirstIn(hadoopVersion).isDefined
+ val isNewHadoopVersion = "^2\\.[2-9]+".r.findFirstIn(hadoopVersion).isDefined
(isNewHadoopVersion|| DEFAULT_IS_NEW_HADOOP)
}
case Some(v) => v.toBoolean
@@ -297,6 +297,7 @@ object SparkBuild extends Build {
val chillVersion = "0.3.6"
val codahaleMetricsVersion = "3.0.0"
val jblasVersion = "1.2.3"
+ val jets3tVersion = if ("^2\\.[3-9]+".r.findFirstIn(hadoopVersion).isDefined) "0.9.0" else "0.7.1"
val jettyVersion = "8.1.14.v20131031"
val hiveVersion = "0.12.0"
val parquetVersion = "1.3.2"
@@ -343,7 +344,7 @@ object SparkBuild extends Build {
"colt" % "colt" % "1.2.0",
"org.apache.mesos" % "mesos" % "0.13.0",
"commons-net" % "commons-net" % "2.2",
- "net.java.dev.jets3t" % "jets3t" % "0.7.1" excludeAll(excludeCommonsLogging),
+ "net.java.dev.jets3t" % "jets3t" % jets3tVersion excludeAll(excludeCommonsLogging),
"org.apache.derby" % "derby" % "10.4.2.0" % "test",
"org.apache.hadoop" % hadoopClient % hadoopVersion excludeAll(excludeNetty, excludeAsm, excludeCommonsLogging, excludeSLF4J, excludeOldAsm),
"org.apache.curator" % "curator-recipes" % "2.4.0" excludeAll(excludeNetty),