author     gatorsmile <gatorsmile@gmail.com>  2016-05-10 11:25:39 -0700
committer  Andrew Or <andrew@databricks.com>  2016-05-10 11:25:55 -0700
commit     5c6b0855787c080d3e233eb09c05c025395e7cb3 (patch)
tree       ba75170f0e9629e540d9ef5924fbcea185807637 /python/pyspark/sql/utils.py
parent     ed0b4070fb50054b1ecf66ff6c32458a4967dfd3 (diff)
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust that the underlying external catalog will throw an exception on an invalid metadata operation, let's do the verification in SessionCatalog:

- [X] The first step is to unify the error messages issued by the Hive-specific SessionCatalog and the general SessionCatalog.
- [X] The second step is to verify the inputs of metadata operations for partitioning-related operations. This is moved to a separate PR: https://github.com/apache/spark/pull/12801
- [X] The third step is to add database existence verification in `SessionCatalog`.
- [X] The fourth step is to add table existence verification in `SessionCatalog`.
- [X] The fifth step is to add function existence verification in `SessionCatalog`.

Test cases were added to verify the error messages issued.

Author: gatorsmile <gatorsmile@gmail.com>
Author: xiaoli <lixiao1983@gmail.com>
Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>

Closes #12385 from gatorsmile/verifySessionAPIs.
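For illustration only, here is a minimal PySpark sketch of the user-visible effect (not part of the commit; the SparkContext setup and table name are hypothetical): with the stricter catalog verification, looking up a nonexistent table surfaces in Python as an AnalysisException rather than a raw Py4JJavaError, via the utils.py change below.

# Hypothetical usage sketch, not part of this patch. Assumes a Spark
# build that includes this change; the table `no_such_table` does not exist.
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.utils import AnalysisException

sc = SparkContext('local[1]', 'no-such-table-demo')
sqlContext = SQLContext(sc)

try:
    # The JVM-side SessionCatalog verification raises NoSuchTableException;
    # capture_sql_exception (see the hunk below) re-raises it in Python.
    sqlContext.table('no_such_table')
except AnalysisException as e:
    print('Caught expected error: %s' % e)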
Diffstat (limited to 'python/pyspark/sql/utils.py')
-rw-r--r--  python/pyspark/sql/utils.py  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/python/pyspark/sql/utils.py b/python/pyspark/sql/utils.py
index cb172d21f3..36c93228b9 100644
--- a/python/pyspark/sql/utils.py
+++ b/python/pyspark/sql/utils.py
@@ -61,6 +61,8 @@ def capture_sql_exception(f):
                                              e.java_exception.getStackTrace()))
             if s.startswith('org.apache.spark.sql.AnalysisException: '):
                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
+            if s.startswith('org.apache.spark.sql.catalyst.analysis.NoSuchTableException: '):
+                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
             if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                 raise ParseException(s.split(': ', 1)[1], stackTrace)
             if s.startswith('org.apache.spark.sql.ContinuousQueryException: '):
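To see the mechanics in isolation, here is a standalone sketch (illustrative names, not pyspark's actual internals) of the prefix-matching conversion this hunk extends: the Java exception's toString() output has the form '<class>: <message>', so it is matched against known fully qualified class-name prefixes, and the message part is re-raised as the corresponding Python exception.

# Standalone sketch of the conversion pattern; the class and function
# names here are illustrative, not pyspark's actual internals.
class AnalysisException(Exception):
    def __init__(self, desc, stackTrace=None):
        super(AnalysisException, self).__init__(desc)
        self.desc = desc
        self.stackTrace = stackTrace

# Each entry maps a Java exception class prefix, as it appears in
# Throwable.toString() ("<class>: <message>"), to a Python exception type.
_CONVERSIONS = [
    ('org.apache.spark.sql.AnalysisException: ', AnalysisException),
    ('org.apache.spark.sql.catalyst.analysis.NoSuchTableException: ',
     AnalysisException),
]

def convert_java_exception(s, stackTrace):
    for prefix, py_exc in _CONVERSIONS:
        if s.startswith(prefix):
            # Keep only the human-readable message after the class name.
            raise py_exc(s.split(': ', 1)[1], stackTrace)
    raise RuntimeError(s)  # unrecognized exception: fall through (sketch)

# Example: a NoSuchTableException string from the JVM becomes a Python
# AnalysisException carrying just the message part.
try:
    convert_java_exception(
        'org.apache.spark.sql.catalyst.analysis.NoSuchTableException: '
        'Table no_such_table not found', stackTrace=None)
except AnalysisException as e:
    print(e.desc)  # -> Table no_such_table not found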