author: hyukjinkwon <gurwls223@gmail.com> 2017-01-21 14:08:01 +0000
committer: Sean Owen <sowen@cloudera.com> 2017-01-21 14:08:01 +0000
commit: 6113fe78a5195d3325690703b20000bed6e9efa5
tree: 66356c364c450a7bfd12a149b0bda0fc12ef5bd0 /core/src
parent: bcdabaac93fc5527345754a9e10e6db5161007ef
[SPARK-19117][SPARK-18922][TESTS] Fix the rest of flaky, newly introduced and missed test failures on Windows
## What changes were proposed in this pull request?
**Failed tests**
```
org.apache.spark.sql.hive.execution.HiveQuerySuite:
- transform with SerDe3 *** FAILED ***
- transform with SerDe4 *** FAILED ***
```
```
org.apache.spark.sql.hive.execution.HiveDDLSuite:
- create hive serde table with new syntax *** FAILED ***
- add/drop partition with location - managed table *** FAILED ***
```
```
org.apache.spark.sql.hive.ParquetMetastoreSuite:
- Explicitly added partitions should be readable after load *** FAILED ***
- Non-partitioned table readable after load *** FAILED ***
```
**Aborted tests**
```
Exception encountered when attempting to run a suite with class name: org.apache.spark.sql.hive.execution.HiveSerDeSuite *** ABORTED *** (157 milliseconds)
org.apache.spark.sql.AnalysisException: LOAD DATA input path does not exist: C:\projects\spark\sql\hive\target\scala-2.11\test-classes\data\files\sales.txt;
```
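The mangled path in the log above hints at the root cause: the Windows path's backslashes were consumed as escape sequences (`\t` in `\target` and `\test-classes` became tabs, which is why both words lost their leading `t`). One general way to avoid this class of bug — a sketch only, not necessarily the approach this PR takes, with `toUriPath` being a hypothetical helper name — is to convert OS-specific paths to `file:` URIs before they reach any string that undergoes escape processing:

```scala
import java.io.File

// Hypothetical helper: convert an OS-specific path into a URI string.
// On Windows this yields "file:/C:/projects/spark/..." with forward
// slashes only, so sequences like "\t" can never corrupt the path.
def toUriPath(path: String): String =
  new File(path).toURI.toString
```

Passing such a URI (rather than a raw `C:\...` string) into `LOAD DATA` keeps the path intact on every platform.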
**Flaky tests (failed 9ish out of 10)**
```
org.apache.spark.scheduler.SparkListenerSuite:
- local metrics *** FAILED ***
```
## How was this patch tested?
Manually tested via AppVeyor.
**Failed tests**
```
org.apache.spark.sql.hive.execution.HiveQuerySuite:
- transform with SerDe3 !!! CANCELED !!! (0 milliseconds)
- transform with SerDe4 !!! CANCELED !!! (0 milliseconds)
```
```
org.apache.spark.sql.hive.execution.HiveDDLSuite:
- create hive serde table with new syntax (1 second, 672 milliseconds)
- add/drop partition with location - managed table (2 seconds, 391 milliseconds)
```
```
org.apache.spark.sql.hive.ParquetMetastoreSuite:
- Explicitly added partitions should be readable after load (609 milliseconds)
- Non-partitioned table readable after load (344 milliseconds)
```
**Aborted tests**
```
spark.sql.hive.execution.HiveSerDeSuite:
- Read with RegexSerDe (2 seconds, 142 milliseconds)
- Read and write with LazySimpleSerDe (tab separated) (2 seconds)
- Read with AvroSerDe (1 second, 47 milliseconds)
- Read Partitioned with AvroSerDe (1 second, 422 milliseconds)
```
**Flaky tests (failed 9ish out of 10)**
```
org.apache.spark.scheduler.SparkListenerSuite:
- local metrics (4 seconds, 562 milliseconds)
```
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #16586 from HyukjinKwon/set-path-appveyor.
Diffstat (limited to 'core/src')
-rw-r--r-- | core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```
diff --git a/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
index e8a88d4909..f5575ce1e1 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
@@ -229,7 +229,7 @@ class SparkListenerSuite extends SparkFunSuite with LocalSparkContext with Match
     }

     val numSlices = 16
-    val d = sc.parallelize(0 to 1e3.toInt, numSlices).map(w)
+    val d = sc.parallelize(0 to 10000, numSlices).map(w)
     d.count()
     sc.listenerBus.waitUntilEmpty(WAIT_TIMEOUT_MILLIS)
     listener.stageInfos.size should be (1)
```
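The one-line change grows the parallelized range from 1,000 to 10,000 elements, presumably so each of the 16 tasks runs long enough for the "local metrics" assertions (e.g. non-zero per-task run time) to hold reliably. A plain-Scala sketch (no Spark; `timeMs` is an illustrative name, not a Spark API) of why a millisecond-resolution timer around a near-instant task often reads zero:

```scala
// Sketch of the flakiness: time a unit of work at millisecond resolution.
// If the work finishes in under a millisecond, the measured duration is 0,
// and any assertion of the form "run time should be > 0" fails sporadically.
// Giving each task ~10x more elements makes the measurement robustly positive.
def timeMs(work: () => Unit): Long = {
  val start = System.currentTimeMillis()
  work()
  System.currentTimeMillis() - start
}
```

With only ~63 elements per slice (1,000 / 16), a task can easily complete within one clock tick; ~625 elements per slice gives the timer something to measure.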