author     Yin Huai <yhuai@databricks.com>  2015-08-27 16:11:25 -0700
committer  Yin Huai <yhuai@databricks.com>  2015-08-27 16:11:25 -0700
commit     b3dd569ad40905f8861a547a1e25ed3ca8e1d272 (patch)
tree       484ae842e1511bf7bd80ca238bac26c539a84df0 /docs/sql-programming-guide.md
parent     5bfe9e1111d9862084586549a7dc79476f67bab9 (diff)
[SPARK-10287] [SQL] Fixes JSONRelation refreshing on read path
https://issues.apache.org/jira/browse/SPARK-10287

After porting JSON to HadoopFsRelation, it seems hard to keep the behavior of picking up new files automatically for JSON. This PR removes this behavior, so JSON is consistent with the others (ORC and Parquet).

Author: Yin Huai <yhuai@databricks.com>

Closes #8469 from yhuai/jsonRefresh.
Diffstat (limited to 'docs/sql-programming-guide.md')
-rw-r--r--  docs/sql-programming-guide.md | 6
1 file changed, 6 insertions, 0 deletions
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 99fec6c778..e8eb88488e 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -2057,6 +2057,12 @@ options.
- The canonical names of SQL/DataFrame functions are now lower case (e.g. sum vs SUM).
- It has been determined that using the DirectOutputCommitter when speculation is enabled is unsafe
and thus this output committer will not be used when speculation is on, independent of configuration.
+ - The JSON data source will not automatically load new files that are created by other applications
+   (i.e. files that are not inserted into the dataset through Spark SQL).
+   For a persistent JSON table (i.e. a table whose metadata is stored in the Hive Metastore),
+   users can use the `REFRESH TABLE` SQL command or `HiveContext`'s `refreshTable` method
+   to include those new files in the table. For a DataFrame representing a JSON dataset, users need to recreate
+   the DataFrame, and the new DataFrame will include the new files.
## Upgrading from Spark SQL 1.3 to 1.4
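A minimal sketch of the refresh workflow described in the note added above, assuming Spark 1.5-era APIs, an existing `SparkContext` named `sc`, and a hypothetical persistent JSON table `json_table` plus a hypothetical JSON path `/path/to/json/data`:

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// For a persistent JSON table backed by the Hive Metastore, pick up files
// written by other applications either via SQL ...
hiveContext.sql("REFRESH TABLE json_table")

// ... or programmatically through HiveContext.
hiveContext.refreshTable("json_table")

// For a DataFrame that reads a JSON path directly, re-create the DataFrame
// so that it includes the newly added files.
val refreshed = hiveContext.read.json("/path/to/json/data")
```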