path: root/python/run-tests
author    Reynold Xin <rxin@databricks.com>    2015-04-21 17:49:55 -0700
committer Reynold Xin <rxin@databricks.com>    2015-04-21 17:49:55 -0700
commit    3134c3fe495862b7687b5aa00d3344d09cd5e08e (patch)
tree      ed556b21bbaad651c7893b6b2dcb53f304100785 /python/run-tests
parent    e72c16e30d85cdc394d318b5551698885cfda9b8 (diff)
download  spark-3134c3fe495862b7687b5aa00d3344d09cd5e08e.tar.gz
          spark-3134c3fe495862b7687b5aa00d3344d09cd5e08e.tar.bz2
          spark-3134c3fe495862b7687b5aa00d3344d09cd5e08e.zip
[SPARK-6953] [PySpark] speed up python tests
This PR tries to speed up some Python tests:

```
tests.py                 144s -> 103s    -41s
mllib/classification.py   24s ->  17s     -7s
mllib/regression.py       27s ->  15s    -12s
mllib/tree.py             27s ->  13s    -14s
mllib/tests.py            64s ->  31s    -33s
streaming/tests.py       185s ->  84s   -101s
```

Counting the Python 3 runs as well, the total saving will be 558s (almost 10 minutes), since core and streaming run three times and mllib runs twice.

During testing, the elapsed time for each test file is shown:

```
Run core tests ...
Running test: pyspark/rdd.py ... ok (22s)
Running test: pyspark/context.py ... ok (16s)
Running test: pyspark/conf.py ... ok (4s)
Running test: pyspark/broadcast.py ... ok (4s)
Running test: pyspark/accumulators.py ... ok (4s)
Running test: pyspark/serializers.py ... ok (6s)
Running test: pyspark/profiler.py ... ok (5s)
Running test: pyspark/shuffle.py ... ok (1s)
Running test: pyspark/tests.py ... ok (103s)
144s
```

Author: Reynold Xin <rxin@databricks.com>
Author: Xiangrui Meng <meng@databricks.com>

Closes #5605 from rxin/python-tests-speed and squashes the following commits:

d08542d [Reynold Xin] Merge pull request #14 from mengxr/SPARK-6953
89321ee [Xiangrui Meng] fix seed in tests
3ad2387 [Reynold Xin] Merge pull request #5427 from davies/python_tests
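The `ok (22s)` style lines above come from wrapping each test invocation between two `date +"%s"` timestamps, which is what the patch below does. The following is only a minimal standalone sketch of that pattern, not the patched script itself; the `sleep` and the file name are placeholders for the real `bin/pyspark` invocation.

```
#!/usr/bin/env bash
# Minimal sketch of the per-test timing pattern: record a start timestamp,
# run the test, then print the elapsed seconds next to the test name.

run_test() {
    echo -n "Running test: $1 ... "
    local start=$(date +"%s")
    sleep 2                       # placeholder for: SPARK_TESTING=1 bin/pyspark "$1"
    local now=$(date +"%s")
    echo "ok ($((now - start))s)"
}

run_test "pyspark/rdd.py"
```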
Diffstat (limited to 'python/run-tests')
-rwxr-xr-x  python/run-tests  13
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/python/run-tests b/python/run-tests
index ed3e819ef3..88b63b84fd 100755
--- a/python/run-tests
+++ b/python/run-tests
@@ -28,6 +28,7 @@ cd "$FWDIR/python"
 
 FAILED=0
 LOG_FILE=unit-tests.log
+START=$(date +"%s")
 
 rm -f $LOG_FILE
 
@@ -35,8 +36,8 @@ rm -f $LOG_FILE
 rm -rf metastore warehouse
 
 function run_test() {
-    echo "Running test: $1" | tee -a $LOG_FILE
-
+    echo -en "Running test: $1 ... " | tee -a $LOG_FILE
+    start=$(date +"%s")
     SPARK_TESTING=1 time "$FWDIR"/bin/pyspark $1 > $LOG_FILE 2>&1
 
     FAILED=$((PIPESTATUS[0]||$FAILED))
@@ -48,6 +49,9 @@ function run_test() {
         echo "Had test failures; see logs."
         echo -en "\033[0m" # No color
         exit -1
+    else
+        now=$(date +"%s")
+        echo "ok ($(($now - $start))s)"
     fi
 }
 
@@ -161,9 +165,8 @@ if [ $(which pypy) ]; then
 fi
 
 if [[ $FAILED == 0 ]]; then
-    echo -en "\033[32m" # Green
-    echo "Tests passed."
-    echo -en "\033[0m" # No color
+    now=$(date +"%s")
+    echo -e "\033[32mTests passed \033[0min $(($now - $START)) seconds"
 fi
 
 # TODO: in the long-run, it would be nice to use a test runner like `nose`.
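A side note on the `FAILED=$((PIPESTATUS[0]||$FAILED))` line kept by the patch: it accumulates failures across test files, so once any test exits non-zero, `FAILED` stays non-zero for the final check. The sketch below is only an illustration of that idiom under simplified assumptions; the `check` helper and the `true`/`false` commands stand in for real test invocations.

```
#!/usr/bin/env bash
# Sketch of the failure-accumulation idiom used in run-tests: PIPESTATUS[0] is
# the exit status of the first command in the last pipeline (with no pipe it
# equals $?), and the arithmetic OR keeps FAILED non-zero once any test fails.

FAILED=0

check() {
    "$@"
    FAILED=$((${PIPESTATUS[0]} || FAILED))
}

check true
check false   # this failure makes FAILED non-zero
check true    # a later success does not reset it

if [[ $FAILED == 0 ]]; then
    echo "Tests passed."
else
    echo "Had test failures; see logs."
fi
```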