From 9b5e460a9168ab78607034434ca45ab6cb51e5a6 Mon Sep 17 00:00:00 2001
From: Sunitha Kambhampati
Date: Mon, 13 Feb 2017 22:49:29 -0800
Subject: [SPARK-19585][DOC][SQL] Fix the cacheTable and uncacheTable api call in the doc
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

## What changes were proposed in this pull request?

https://spark.apache.org/docs/latest/sql-programming-guide.html#caching-data-in-memory

In the doc, the calls `spark.cacheTable("tableName")` and `spark.uncacheTable("tableName")` actually need to be `spark.catalog.cacheTable("tableName")` and `spark.catalog.uncacheTable("tableName")`.

## How was this patch tested?

Built the docs and verified the change shows up fine.

Author: Sunitha Kambhampati

Closes #16919 from skambha/docChange.
---
 docs/sql-programming-guide.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 9cf480caba..235f5ecc40 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1272,9 +1272,9 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `spark.cacheTable("tableName")` or `dataFrame.cache()`.
+Spark SQL can cache tables using an in-memory columnar format by calling `spark.catalog.cacheTable("tableName")` or `dataFrame.cache()`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
-memory usage and GC pressure. You can call `spark.uncacheTable("tableName")` to remove the table from memory.
+memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` to remove the table from memory.
 
 Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or by running
 `SET key=value` commands using SQL.
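
For reference, here is a minimal sketch of the corrected catalog-qualified API in use. It assumes Spark 2.x with a local `SparkSession`; the `people` view name and the JSON path are illustrative and do not come from the patch itself:

```scala
// Illustrative sketch of spark.catalog.cacheTable / uncacheTable (Spark 2.x).
// The view name "people" and the JSON path are hypothetical examples.
import org.apache.spark.sql.SparkSession

object CacheTableExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CacheTableExample")
      .master("local[*]")
      .getOrCreate()

    // Register a temporary view so it can be referenced by name.
    val df = spark.read.json("examples/src/main/resources/people.json")
    df.createOrReplaceTempView("people")

    // The catalog-qualified calls from the patched doc text:
    spark.catalog.cacheTable("people")           // cache in in-memory columnar format
    println(spark.catalog.isCached("people"))    // true

    spark.sql("SELECT name FROM people").show()  // query served from the cache

    spark.catalog.uncacheTable("people")         // remove the table from memory
    spark.stop()
  }
}
```

As the patched doc text also notes, `dataFrame.cache()` is the DataFrame-level alternative to the catalog call when you hold a reference to the DataFrame rather than a table name.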