author    Dongjoon Hyun <dongjoon@apache.org>  2016-11-21 16:14:59 -0500
committer Andrew Or <andrewor14@gmail.com>     2016-11-21 16:14:59 -0500
commit    ddd02f50bb7458410d65427321efc75da5e65224 (patch)
tree      9658ae087306cd4643e7ffed98c40ba661533819
parent    70176871ae10509f1a727a96e96b3da7762605b1 (diff)
[SPARK-18517][SQL] DROP TABLE IF EXISTS should not warn for non-existing tables
## What changes were proposed in this pull request?
Currently, `DROP TABLE IF EXISTS` logs a warning for non-existing tables. However, by the definition of the command, it should be silent in this case.
**BEFORE**
```scala
scala> sql("DROP TABLE IF EXISTS nonexist")
16/11/20 20:48:26 WARN DropTableCommand: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'nonexist' not found in database 'default';
```
**AFTER**
```scala
scala> sql("DROP TABLE IF EXISTS nonexist")
res0: org.apache.spark.sql.DataFrame = []
```
## How was this patch tested?
Manual, because this change affects warning messages rather than exceptions.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #15953 from dongjoon-hyun/SPARK-18517.
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala | 3 +-
1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala
index 588aa05c37..d80b000bcc 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala
@@ -28,7 +28,7 @@ import org.apache.hadoop.mapred.{FileInputFormat, JobConf}

 import org.apache.spark.sql.{AnalysisException, Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
-import org.apache.spark.sql.catalyst.analysis.Resolver
+import org.apache.spark.sql.catalyst.analysis.{NoSuchTableException, Resolver}
 import org.apache.spark.sql.catalyst.catalog._
 import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
 import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference, BinaryComparison}
@@ -203,6 +203,7 @@ case class DropTableCommand(
         sparkSession.sharedState.cacheManager.uncacheQuery(
           sparkSession.table(tableName.quotedString))
       } catch {
+        case _: NoSuchTableException if ifExists =>
         case NonFatal(e) => log.warn(e.toString, e)
       }
     catalog.refreshTable(tableName)
```
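The one-line fix relies on a pattern guard in the `catch` block: the `NoSuchTableException` case matches (and silently swallows the error) only when `ifExists` is true; otherwise the exception falls through to the existing `NonFatal` case and is still logged. The idea can be sketched in isolation. This is a minimal, self-contained Scala sketch, not the actual Spark code: `NoSuchTableException` here is a stand-in class, and `uncacheQuietly` is a hypothetical helper for illustration.

```scala
import scala.util.control.NonFatal

// Stand-in for org.apache.spark.sql.catalyst.analysis.NoSuchTableException.
class NoSuchTableException(msg: String) extends Exception(msg)

// Mimics the catch block in DropTableCommand.run. The pattern guard
// (`if ifExists`) silently swallows the "table not found" case only when
// the user asked for IF EXISTS; any other non-fatal error still produces
// a warning (returned here instead of calling log.warn).
def uncacheQuietly(ifExists: Boolean)(body: => Unit): Option[String] =
  try { body; None } catch {
    case _: NoSuchTableException if ifExists => None             // quiet, by design
    case NonFatal(e)                         => Some(e.toString) // would be log.warn(...)
  }
```

With `ifExists = true` a `NoSuchTableException` thrown inside `body` produces no warning at all, while with `ifExists = false` the same exception fails the guard, matches `NonFatal(e)`, and a warning is emitted as before.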