Diffstat (limited to 'sql')
-rw-r--r-- sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala | 18 ++++++++++++++++++
1 file changed, 18 insertions(+), 0 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
index c92ca607fb..729cfc1da2 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
@@ -1192,6 +1192,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1208,6 +1211,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1232,6 +1238,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1248,6 +1257,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1264,6 +1276,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1285,6 +1300,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
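
The workaround described in the doc comments above (write the data out as a parquet file, register that file as a table, then target it with `insertInto`) can be sketched as follows. This is a hypothetical usage sketch against the Spark 1.3-era API; the names `sqlContext`, `df`, `otherDf`, and the `/tmp/people.parquet` path are assumptions for illustration.

```scala
// Persist the DataFrame as a parquet file (no catalog involved).
df.saveAsParquetFile("/tmp/people.parquet")

// Register the parquet file as a "table" visible to SQL.
val people = sqlContext.parquetFile("/tmp/people.parquet")
people.registerTempTable("people")

// The registered table can now be the target of an insertInto.
otherDf.insertInto("people")
```

Note that, per the caveat added in this diff, a table created this way via a Hive-backed context may have its metadata persisted in Hive's metastore without the data being readable from Hive itself.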