author     Josh Rosen <joshrosen@databricks.com>  2015-12-30 12:47:42 -0800
committer  Josh Rosen <joshrosen@databricks.com>  2015-12-30 12:47:42 -0800
commit     27a42c7108ced48a7f558990de2e4fc7ed340119 (patch)
tree       e65525f7dee5ceae053643ac3f5e8b4a1716272b /dev/run-tests.py
parent     d1ca634db4ca9db7f0ba7ca38a0e03bcbfec23c9 (diff)
[SPARK-10359] Enumerate dependencies in a file and diff against it for new pull requests
This patch adds a new build check which enumerates Spark's resolved runtime classpath and saves it to a file, then diffs against that file to detect whether a pull request has introduced dependency changes. The aim of this check is to make it simpler to reason about whether pull requests that modify the build have introduced new dependencies or changed transitive dependencies in a way that affects the final classpath.

This supplants the checks added in SPARK-4123 / #5093, which are currently disabled due to bugs.

This patch is based on pwendell's work in #8531. Closes #8531.

Author: Josh Rosen <joshrosen@databricks.com>
Author: Patrick Wendell <patrick@databricks.com>

Closes #10461 from JoshRosen/SPARK-10359.
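For context, the shape of such a manifest-diff check can be sketched in a few lines of Python. This is an illustrative outline only, not the actual dev/test-dependencies.sh: the manifest path (dev/deps/spark-deps), the build invocation via build/mvn, and the helper names here are all assumptions made for the sketch.

    #!/usr/bin/env python
    # Illustrative sketch of a dependency-manifest diff check (hypothetical;
    # the real check is implemented in dev/test-dependencies.sh). The idea:
    # resolve the current runtime classpath, normalize it to one artifact per
    # line, and diff it against a manifest committed to the repository.

    import subprocess
    import sys

    MANIFEST = "dev/deps/spark-deps"  # hypothetical path to the checked-in manifest


    def resolved_classpath():
        """Return the resolved runtime classpath, one artifact name per line, sorted."""
        # Hypothetical build invocation; Maven's dependency:build-classpath goal
        # emits the resolved classpath as a colon-separated list.
        out = subprocess.check_output(
            ["build/mvn", "dependency:build-classpath", "-q",
             "-Dmdep.outputFile=/dev/stdout"])
        jars = out.decode().strip().split(":")
        # Keep only artifact file names so the manifest is machine-independent.
        return sorted(jar.rsplit("/", 1)[-1] for jar in jars if jar)


    def main():
        current = resolved_classpath()
        with open(MANIFEST) as f:
            expected = f.read().splitlines()
        if current != expected:
            print("Dependency changes detected; if intentional, regenerate the "
                  "manifest and commit it along with your change.")
            sys.exit(1)


    if __name__ == "__main__":
        main()

Committing the manifest means any classpath change shows up as an ordinary file diff in the pull request itself, which reviewers can inspect line by line.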
Diffstat (limited to 'dev/run-tests.py')
-rwxr-xr-x  dev/run-tests.py  8
1 file changed, 8 insertions, 0 deletions
diff --git a/dev/run-tests.py b/dev/run-tests.py
index 6129f87cf8..706e2d141c 100755
--- a/dev/run-tests.py
+++ b/dev/run-tests.py
@@ -417,6 +417,11 @@ def run_python_tests(test_modules, parallelism):
     run_cmd(command)
 
 
+def run_build_tests():
+    set_title_and_block("Running build tests", "BLOCK_BUILD_TESTS")
+    run_cmd([os.path.join(SPARK_HOME, "dev", "test-dependencies.sh")])
+
+
 def run_sparkr_tests():
     set_title_and_block("Running SparkR tests", "BLOCK_SPARKR_UNIT_TESTS")
@@ -537,6 +542,9 @@ def main():
     # if "DOCS" in changed_modules and test_env == "amplab_jenkins":
     #     build_spark_documentation()
 
+    if any(m.should_run_build_tests for m in test_modules):
+        run_build_tests()
+
     # spark build
     build_apache_spark(build_tool, hadoop_version)