author    Patrick Wendell <pwendell@gmail.com>   2014-10-30 20:15:36 -0700
committer Aaron Davidson <aaron@databricks.com>  2014-10-30 20:15:36 -0700
commit    0734d09320fe37edd3a02718511cda0bda852478 (patch)
tree      bf3bb586abd9c186b2a0a63dbd53ee43a00b885f /project
parent    26d31d15fda3f63707a28d1a1115770ad127cf8f (diff)
HOTFIX: Clean up build in network module.
This is currently breaking the package build for some people (including me).
This patch does some general clean-up which also fixes the current issue.
- Uses consistent artifact naming
- Adds sbt support for this module
- Changes tests to use scalatest (fixes the original issue[1])
One thing to note: it turns out that ScalaTest, when invoked from the
Maven build, doesn't successfully detect JUnit Java tests. This is
a long-standing issue; I noticed it applies to all of our current
test suites as well. I've created SPARK-4159 to fix this.
[1] The original issue is that we need to allocate extra memory
for the tests, which happens by default in our scalatest configuration.
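The extra test memory referred to in [1] is the kind of setting sbt applies to forked test JVMs. A minimal sketch of such a configuration fragment (the option values and settings shown here are illustrative, not Spark's actual build settings):

```scala
// Illustrative sbt settings fragment -- values are hypothetical,
// not copied from SparkBuild.scala.
// Fork a separate JVM for tests and give it extra heap so
// memory-hungry suites don't hit OutOfMemoryError:
fork in Test := true
javaOptions in Test ++= Seq("-Xmx3g", "-XX:MaxPermSize=512m")
```

Because the Maven surefire/scalatest plugins carry their own JVM arguments, a module that is built only by Maven would not pick these up, which is why tests can pass in one build system and fail in the other.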
Author: Patrick Wendell <pwendell@gmail.com>
Closes #3025 from pwendell/hotfix and squashes the following commits:
faa9053 [Patrick Wendell] HOTFIX: Clean up build in network module.
Diffstat (limited to 'project')
-rw-r--r--  project/SparkBuild.scala | 8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/project/SparkBuild.scala b/project/SparkBuild.scala
index 6d5eb681c6..77083518bb 100644
--- a/project/SparkBuild.scala
+++ b/project/SparkBuild.scala
@@ -31,10 +31,10 @@ object BuildCommons {
   private val buildLocation = file(".").getAbsoluteFile.getParentFile

   val allProjects@Seq(bagel, catalyst, core, graphx, hive, hiveThriftServer, mllib, repl,
-    sql, streaming, streamingFlumeSink, streamingFlume, streamingKafka, streamingMqtt,
+    sql, networkCommon, streaming, streamingFlumeSink, streamingFlume, streamingKafka, streamingMqtt,
     streamingTwitter, streamingZeromq) =
     Seq("bagel", "catalyst", "core", "graphx", "hive", "hive-thriftserver", "mllib", "repl",
-      "sql", "streaming", "streaming-flume-sink", "streaming-flume", "streaming-kafka",
+      "sql", "network-common", "streaming", "streaming-flume-sink", "streaming-flume", "streaming-kafka",
       "streaming-mqtt", "streaming-twitter", "streaming-zeromq").map(ProjectRef(buildLocation, _))

   val optionallyEnabledProjects@Seq(yarn, yarnStable, yarnAlpha, java8Tests, sparkGangliaLgpl, sparkKinesisAsl) =
@@ -142,7 +142,9 @@ object SparkBuild extends PomBuild {
   // TODO: Add Sql to mima checks
   allProjects.filterNot(x => Seq(spark, sql, hive, hiveThriftServer, catalyst, repl,
-    streamingFlumeSink).contains(x)).foreach(x => enable(MimaBuild.mimaSettings(sparkHome, x))(x))
+    streamingFlumeSink, networkCommon).contains(x)).foreach {
+      x => enable(MimaBuild.mimaSettings(sparkHome, x))(x)
+    }

   /* Enable Assembly for all assembly projects */
   assemblyProjects.foreach(enable(Assembly.settings))