author: Dongjoon Hyun <dongjoon@apache.org> 2016-06-29 15:00:41 -0700
committer: Reynold Xin <rxin@databricks.com> 2016-06-29 15:00:41 -0700
commit: 9b1b3ae771babf127f64898d5dc110721597a760 (patch)
tree: de98cf6489f46b4e77c0932acdd269e1519e68fd /docs/img/triplet.png
parent: 8b5a8b25b9d29b7d0949d5663c7394b26154a836 (diff)
[SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws non-intuitive exception
## What changes were proposed in this pull request?
This PR allows `emptyDataFrame.write` to succeed, since the user did not specify any partition columns.
**Before**
```scala
scala> spark.emptyDataFrame.write.parquet("/tmp/t1")
org.apache.spark.sql.AnalysisException: Cannot use all columns for partition columns;
scala> spark.emptyDataFrame.write.csv("/tmp/t1")
org.apache.spark.sql.AnalysisException: Cannot use all columns for partition columns;
```
After this PR, no exception is thrown and the created directory contains only one file, `_SUCCESS`, as expected.
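The change can be pictured as guarding the "all columns used for partitioning" check on a non-empty schema. The helper below is a hypothetical sketch of that relaxed validation, not Spark's actual implementation; the function name and signature are illustrative only:

```scala
// Hypothetical sketch (not the real Spark code) of the relaxed check:
// the "Cannot use all columns for partition columns" error should only
// fire when the schema actually has columns.
def validatePartitionColumns(schemaFields: Seq[String],
                             partitionCols: Seq[String]): Unit = {
  if (schemaFields.nonEmpty && partitionCols.size == schemaFields.size) {
    throw new IllegalArgumentException(
      "Cannot use all columns for partition columns")
  }
}

// An empty DataFrame has no fields and no partition columns, so with
// the schemaFields.nonEmpty guard the check no longer trips:
validatePartitionColumns(Seq.empty, Seq.empty)
```

With this shape of check, a non-empty DataFrame that partitions on every one of its columns would still be rejected, preserving the original intent of the error.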
## How was this patch tested?
Pass the Jenkins tests, including updated test cases.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #13730 from dongjoon-hyun/SPARK-16006.