author     Shivaram Venkataraman <shivaram@cs.berkeley.edu>  2015-05-29 14:11:58 -0700
committer  Davies Liu <davies@databricks.com>                2015-05-29 14:11:58 -0700
commit     5f48e5c33bafa376be5741e260a037c66103fdcd
tree       7517dc75467eb80a439dbf87573aeff572289d12 /docs/index.md
parent     9eb222c13991c2b4a22db485710dc2e27ccf06dd
[SPARK-6806] [SPARKR] [DOCS] Add a new SparkR programming guide
This PR adds a new SparkR programming guide at the top-level. This will be useful for R users as our APIs don't directly match the Scala/Python APIs and as we need to explain SparkR without using RDDs as examples etc.
cc rxin davies pwendell
cc cafreeman -- Would be great if you could also take a look at this!
Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
Closes #6490 from shivaram/sparkr-guide and squashes the following commits:
d5ff360 [Shivaram Venkataraman] Add a section on HiveContext, HQL queries
408dce5 [Shivaram Venkataraman] Fix link
dbb86e3 [Shivaram Venkataraman] Fix minor typo
9aff5e0 [Shivaram Venkataraman] Address comments, use dplyr-like syntax in example
d09703c [Shivaram Venkataraman] Fix default argument in read.df
ea816a1 [Shivaram Venkataraman] Add a new SparkR programming guide Also update write.df, read.df to handle defaults better
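The commits above mention `read.df` and dplyr-like syntax in the new guide's examples. As a rough illustration of that style, here is a minimal sketch assuming a Spark 1.4 SparkR shell (started with `bin/sparkR`, which provides `sc`) and a hypothetical `people.json` input file:

```r
# Initialize a SQLContext for DataFrame operations (SparkR 1.4 API).
sqlContext <- sparkRSQL.init(sc)

# read.df loads a DataFrame from an external source; "people.json"
# is a hypothetical input file used here for illustration.
df <- read.df(sqlContext, "people.json", source = "json")

# dplyr-like filtering and column selection on the DataFrame.
adults <- filter(df, df$age > 21)
head(select(adults, adults$name))
```

Note the guide's emphasis: SparkR deliberately exposes DataFrames rather than RDDs, so examples compose `filter`/`select` calls instead of map/reduce operations.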
Diffstat (limited to 'docs/index.md')
 docs/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/index.md b/docs/index.md
index 5ef6d983c4..fac071da81 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -54,7 +54,7 @@ Example applications are also provided in Python. For example,

     ./bin/spark-submit examples/src/main/python/pi.py 10

-Spark also provides an experimental R API since 1.4 (only DataFrames APIs included).
+Spark also provides an experimental [R API](sparkr.html) since 1.4 (only DataFrames APIs included).
 To run Spark interactively in a R interpreter, use `bin/sparkR`:

     ./bin/sparkR --master local[2]