author: Wenchen Fan <wenchen@databricks.com> 2017-01-17 23:37:59 -0800
committer: gatorsmile <gatorsmile@gmail.com> 2017-01-17 23:37:59 -0800
commit: 4494cd9716d64a6c7cfa548abadb5dd0c4c143a6
tree: 6c06c19fb977106fc819a883889e9aa2ffefdcb9 /mllib/src
parent: e7f982b20d8a1c0db711e0dcfe26b2f39f98dd64
[SPARK-18243][SQL] Port Hive writing to use FileFormat interface
## What changes were proposed in this pull request?
Inserting data into Hive tables has its own implementation, distinct from the data source write path: `InsertIntoHiveTable`, `SparkHiveWriterContainer` and `SparkHiveDynamicPartitionWriterContainer`.
Note that one other major difference is that data source tables write directly to the final destination without a staging directory, and Spark itself then adds the partitions/tables to the catalog. Hive tables, by contrast, write to a staging directory and then call the Hive metastore's loadPartition/loadTable functions to load the data in. So we still need to keep `InsertIntoHiveTable` to house this special logic. In the future, we should consider writing to the Hive table location directly, so that we don't need to call `loadTable`/`loadPartition` at the end and can remove `InsertIntoHiveTable`.
This PR removes `SparkHiveWriterContainer` and `SparkHiveDynamicPartitionWriterContainer`, and creates a `HiveFileFormat` to implement the write logic. In the future, we should also implement the read logic in `HiveFileFormat`.
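The two write strategies described above can be sketched schematically. This is a minimal, language-agnostic illustration in Python (the actual implementation is Scala inside Spark); the function names `write_direct` and `write_via_staging` are hypothetical and stand in for the data-source path and the Hive `loadTable`/`loadPartition` path respectively:

```python
import os
import shutil
import tempfile

def write_direct(rows, final_dir):
    # Data-source style: write straight to the final destination;
    # the caller then registers the partitions/table in the catalog.
    os.makedirs(final_dir, exist_ok=True)
    with open(os.path.join(final_dir, "part-00000"), "w") as f:
        f.writelines(r + "\n" for r in rows)

def write_via_staging(rows, final_dir):
    # Hive style: write the output files to a staging directory first,
    # then "load" (move) them into the final location, mimicking what
    # the metastore's loadTable/loadPartition calls do at the end.
    staging = tempfile.mkdtemp(prefix=".hive-staging-")
    try:
        with open(os.path.join(staging, "part-00000"), "w") as f:
            f.writelines(r + "\n" for r in rows)
        os.makedirs(final_dir, exist_ok=True)
        for name in os.listdir(staging):
            shutil.move(os.path.join(staging, name),
                        os.path.join(final_dir, name))
    finally:
        # The staging directory is temporary and is cleaned up either way.
        shutil.rmtree(staging, ignore_errors=True)
```

Eliminating the staging step, as the paragraph suggests, would collapse `write_via_staging` into `write_direct` and make the final load call unnecessary.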
## How was this patch tested?
Existing tests.
Author: Wenchen Fan <wenchen@databricks.com>
Closes #16517 from cloud-fan/insert-hive.
Diffstat (limited to 'mllib/src')
0 files changed, 0 insertions, 0 deletions