path: root/launcher
author	Wenchen Fan <wenchen@databricks.com>	2017-01-17 23:37:59 -0800
committer	gatorsmile <gatorsmile@gmail.com>	2017-01-17 23:37:59 -0800
commit	4494cd9716d64a6c7cfa548abadb5dd0c4c143a6 (patch)
tree	6c06c19fb977106fc819a883889e9aa2ffefdcb9 /launcher
parent	e7f982b20d8a1c0db711e0dcfe26b2f39f98dd64 (diff)
download	spark-4494cd9716d64a6c7cfa548abadb5dd0c4c143a6.tar.gz
	spark-4494cd9716d64a6c7cfa548abadb5dd0c4c143a6.tar.bz2
	spark-4494cd9716d64a6c7cfa548abadb5dd0c4c143a6.zip
[SPARK-18243][SQL] Port Hive writing to use FileFormat interface
## What changes were proposed in this pull request?

Inserting data into Hive tables has its own implementation, distinct from data sources: `InsertIntoHiveTable`, `SparkHiveWriterContainer` and `SparkHiveDynamicPartitionWriterContainer`.

One other major difference is that data source tables write directly to the final destination without using a staging directory, and Spark itself then adds the partitions/tables to the catalog. Hive tables write to a staging directory and then call the Hive metastore's loadPartition/loadTable functions to load that data in, so we still need to keep `InsertIntoHiveTable` to hold this special logic. In the future, we should consider writing to the Hive table location directly, so that we no longer need to call `loadTable`/`loadPartition` at the end and can remove `InsertIntoHiveTable`.

This PR removes `SparkHiveWriterContainer` and `SparkHiveDynamicPartitionWriterContainer`, and creates a `HiveFileFormat` to implement the write logic. In the future, we should also implement the read logic in `HiveFileFormat`.

## How was this patch tested?

Existing tests.

Author: Wenchen Fan <wenchen@databricks.com>

Closes #16517 from cloud-fan/insert-hive.
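For illustration, the sketch below shows the general shape of a FileFormat-style write abstraction like the one this commit moves Hive writes onto. It is a minimal, self-contained Scala example, not Spark's actual internal API: `Row`, `OutputWriter`, `OutputWriterFactory`, `FileFormat`, and `SketchHiveFileFormat` here are simplified stand-ins modeled loosely on those interfaces, under the assumption that the key idea is "each format supplies a writer factory, so the planner no longer needs format-specific writer-container classes".

```scala
// Minimal sketch of a FileFormat-style write path (simplified stand-in types,
// not Spark's real internals).

// One row of data, kept abstract for the sketch.
case class Row(values: Seq[Any])

// Writes rows to a single output file.
trait OutputWriter {
  def write(row: Row): Unit
  def close(): Unit
}

// Produces an OutputWriter per task/output file.
trait OutputWriterFactory {
  def newWriter(path: String): OutputWriter
}

// The common write-side abstraction: every format supplies a factory.
trait FileFormat {
  def prepareWrite(options: Map[String, String]): OutputWriterFactory
}

// A Hive-backed format would plug a SerDe-based writer in here, so no
// dedicated writer-container class is needed in the planner.
class SketchHiveFileFormat extends FileFormat {
  override def prepareWrite(options: Map[String, String]): OutputWriterFactory =
    new OutputWriterFactory {
      def newWriter(path: String): OutputWriter = new OutputWriter {
        // Real code would serialize via the table's Hive SerDe; here we just print.
        def write(row: Row): Unit =
          println(s"writing ${row.values.mkString(",")} to $path")
        def close(): Unit = ()
      }
    }
}

object Demo extends App {
  // The write path only sees FileFormat; Hive specifics stay behind the factory.
  val factory = new SketchHiveFileFormat().prepareWrite(Map.empty)
  val writer  = factory.newWriter("/tmp/staging/part-00000")
  writer.write(Row(Seq(1, "a")))
  writer.close()
}
```

In this shape, the staging-directory-plus-`loadTable`/`loadPartition` step the commit message describes stays in `InsertIntoHiveTable`; only the per-file row writing is routed through the common interface.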
Diffstat (limited to 'launcher')
0 files changed, 0 insertions, 0 deletions