path: root/docs/sparkr.md
author    Felix Cheung <felixcheung_m@hotmail.com>  2016-06-21 13:56:37 +0800
committer Cheng Lian <lian@databricks.com>  2016-06-21 13:56:37 +0800
commit  58f6e27dd70f476f99ac8204e6b405bced4d6de1 (patch)
tree    7a287e4fde63270827710211bc3628179fa56d4c /docs/sparkr.md
parent  07367533de68817e1e6cf9cf2b056a04dd160c8a (diff)
[SPARK-15863][SQL][DOC][SPARKR] sql programming guide updates to include sparkSession in R
## What changes were proposed in this pull request?

Update doc as per discussion in PR #13592

## How was this patch tested?

manual

shivaram liancheng

Author: Felix Cheung <felixcheung_m@hotmail.com>

Closes #13799 from felixcheung/rsqlprogrammingguide.
Diffstat (limited to 'docs/sparkr.md')
-rw-r--r--  docs/sparkr.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/sparkr.md b/docs/sparkr.md
index 023bbcd39c..f0189012f3 100644
--- a/docs/sparkr.md
+++ b/docs/sparkr.md
@@ -152,7 +152,7 @@ write.df(people, path="people.parquet", source="parquet", mode="overwrite")
### From Hive tables
-You can also create SparkDataFrames from Hive tables. To do this we will need to create a SparkSession with Hive support which can access tables in the Hive MetaStore. Note that Spark should have been built with [Hive support](building-spark.html#building-with-hive-and-jdbc-support) and more details can be found in the [SQL programming guide](sql-programming-guide.html#starting-point-sqlcontext). In SparkR, by default it will attempt to create a SparkSession with Hive support enabled (`enableHiveSupport = TRUE`).
+You can also create SparkDataFrames from Hive tables. To do this we will need to create a SparkSession with Hive support which can access tables in the Hive MetaStore. Note that Spark should have been built with [Hive support](building-spark.html#building-with-hive-and-jdbc-support) and more details can be found in the [SQL programming guide](sql-programming-guide.html#starting-point-sparksession). In SparkR, by default it will attempt to create a SparkSession with Hive support enabled (`enableHiveSupport = TRUE`).
<div data-lang="r" markdown="1">
{% highlight r %}
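The doc text in this diff describes the SparkR 2.0 entry point: a SparkSession is created with Hive support enabled by default. As a minimal sketch of that usage (assuming a Spark build with Hive support, and that `src` is a hypothetical Hive table used only for illustration), querying a Hive table from SparkR looks like:

```r
# Start a SparkSession; in SparkR 2.0 this attempts Hive support
# by default, equivalent to passing enableHiveSupport = TRUE.
sparkR.session(enableHiveSupport = TRUE)

# Register a Hive table and query it via SQL; the result of sql()
# is a SparkDataFrame. "src" is a hypothetical example table.
sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
results <- sql("FROM src SELECT key, value")

# Inspect the first rows of the SparkDataFrame locally.
head(results)
```

This mirrors the flow the updated paragraph points to in the SQL programming guide's "starting point: SparkSession" section, which is the anchor the diff corrects.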