path: root/sql/core/src/main
author    windpiger <songjun@outlook.com>    2017-02-28 11:59:18 -0800
committer    Wenchen Fan <wenchen@databricks.com>    2017-02-28 11:59:18 -0800
commit    ce233f18e381fa1ea00be74ca26e97d35baa6c9c (patch)
tree    3ea6727866bbd54ff34e57c93e91d7b6481f2ce9 /sql/core/src/main
parent    9734a928a75d29ea202e9f309f92ca4637d35671 (diff)
[SPARK-19463][SQL] refresh cache after the InsertIntoHadoopFsRelationCommand
## What changes were proposed in this pull request?

If we first cache a DataSource table and then insert data into it, the cached data should be refreshed after the insert command.

## How was this patch tested?

A unit test was added.

Author: windpiger <songjun@outlook.com>

Closes #16809 from windpiger/refreshCacheAfterInsert.
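For context, here is a minimal sketch of the scenario the commit message describes. The table name `t`, the app name, and the inserted values are illustrative assumptions; this is not the unit test added by the patch.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("refresh-cache-after-insert-demo").getOrCreate()
import spark.implicits._

// Create a DataSource (parquet) table and cache it while it is still empty.
spark.sql("CREATE TABLE t (i INT) USING parquet")
spark.catalog.cacheTable("t")
spark.table("t").count()                 // materializes the (empty) cache

// Insert a row; this runs InsertIntoHadoopFsRelationCommand under the hood.
Seq(1).toDF("i").write.insertInto("t")

// Before SPARK-19463 the cached plan could keep serving the stale, empty result;
// with this patch the insert refreshes the cache for its output path,
// so the newly inserted row is visible here.
spark.table("t").show()
```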
Diffstat (limited to 'sql/core/src/main')
-rw-r--r--    sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala    | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
index 652bcc8331..19b51d4d95 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
@@ -147,7 +147,10 @@ case class InsertIntoHadoopFsRelationCommand(
         refreshFunction = refreshPartitionsCallback,
         options = options)
 
+      // refresh cached files in FileIndex
       fileIndex.foreach(_.refresh())
+      // refresh data cache if table is cached
+      sparkSession.catalog.refreshByPath(outputPath.toString)
     } else {
       logInfo("Skipping insertion into a relation that already exists.")
     }
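The two refresh calls in the hunk operate at different levels: `fileIndex.foreach(_.refresh())` re-lists the files backing the relation, while `Catalog.refreshByPath` invalidates (and lazily re-caches) the in-memory data cache of any plan that reads the output path. Below is a minimal sketch of `refreshByPath` in isolation; the path, app name, and data are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("refresh-by-path-demo").getOrCreate()
val path = "/tmp/refresh_by_path_demo"   // hypothetical output location

// Write some data, read it back, and cache the resulting plan.
spark.range(10).write.mode("overwrite").parquet(path)
val df = spark.read.parquet(path)
df.cache()
df.count()                               // materializes the cache (10 rows)

// Rewrite the files behind the cached plan, bypassing the catalog.
spark.range(20).write.mode("overwrite").parquet(path)

// Without a refresh the cached df can keep serving the stale 10-row result
// (or fail on missing files). refreshByPath drops and re-caches every plan
// that contains this path, which is what the insert command now does for
// its own output path.
spark.catalog.refreshByPath(path)
df.count()                               // reflects the rewritten data (20 rows)
```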