author    Davies Liu <davies@databricks.com>    2015-05-23 00:00:30 -0700
committer Shivaram Venkataraman <shivaram@cs.berkeley.edu>    2015-05-23 00:01:40 -0700
commit    7af3818c6b2bf35bfa531ab7cc3a4a714385015e (patch)
tree      e7dcb33da71845eaed6808045725882d6ba07796 /R/README.md
parent    4583cf4be17155c68178155acf6866d7cc8f7df0 (diff)
[SPARK-6806] [SPARKR] [DOCS] Fill in SparkR examples in programming guide
sqlCtx -> sqlContext

You can check the docs by:

```
$ cd docs
$ SKIP_SCALADOC=1 jekyll serve
```

cc shivaram

Author: Davies Liu <davies@databricks.com>

Closes #5442 from davies/r_docs and squashes the following commits:

7a12ec6 [Davies Liu] remove rdd in R docs
8496b26 [Davies Liu] remove the docs related to RDD
e23b9d6 [Davies Liu] delete R docs for RDD API
222e4ff [Davies Liu] Merge branch 'master' into r_docs
89684ce [Davies Liu] Merge branch 'r_docs' of github.com:davies/spark into r_docs
f0a10e1 [Davies Liu] address comments from @shivaram
f61de71 [Davies Liu] Update pairRDD.R
3ef7cf3 [Davies Liu] use + instead of function(a,b) a+b
2f10a77 [Davies Liu] address comments from @cafreeman
9c2a062 [Davies Liu] mention R api together with Python API
23f751a [Davies Liu] Fill in SparkR examples in programming guide
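The `sqlCtx -> sqlContext` rename above concerns the SQL entry point used throughout the SparkR docs. As an illustrative sketch only (not code from this commit), assuming the Spark 1.4-era SparkR API in which `sparkR.init` and `sparkRSQL.init` create the contexts:

```
# Illustrative sketch of the rename; assumes the Spark 1.4-era SparkR API.
library(SparkR)

sc <- sparkR.init(master = "local[2]", appName = "rename-example")

# Docs previously used:  sqlCtx <- sparkRSQL.init(sc)
# Docs now use:
sqlContext <- sparkRSQL.init(sc)

# The entry point is then passed to DataFrame functions under its new name.
df <- createDataFrame(sqlContext, faithful)  # faithful is a built-in R dataset
head(df)
```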
Diffstat (limited to 'R/README.md')
-rw-r--r--  R/README.md  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/R/README.md b/R/README.md
index a6970e39b5..d7d65b4f0e 100644
--- a/R/README.md
+++ b/R/README.md
@@ -52,7 +52,7 @@ The SparkR documentation (Rd files and HTML files) are not a part of the source
SparkR comes with several sample programs in the `examples/src/main/r` directory.
To run one of them, use `./bin/sparkR <filename> <args>`. For example:
- ./bin/sparkR examples/src/main/r/pi.R local[2]
+ ./bin/sparkR examples/src/main/r/dataframe.R
You can also run the unit-tests for SparkR by running (you need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first):
@@ -63,5 +63,5 @@ You can also run the unit-tests for SparkR by running (you need to install the [
The `./bin/spark-submit` and `./bin/sparkR` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
```
export YARN_CONF_DIR=/etc/hadoop/conf
-./bin/spark-submit --master yarn examples/src/main/r/pi.R 4
+./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
```
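For context on the file the README now points to: `examples/src/main/r/dataframe.R` exercises the DataFrame API. The following is a hedged sketch in the same Spark 1.4-era SparkR API, not the file's exact contents:

```
# Hedged sketch of a DataFrame example in the Spark 1.4-era SparkR API;
# not the exact contents of examples/src/main/r/dataframe.R.
library(SparkR)

sc <- sparkR.init(appName = "SparkR-DataFrame-example")
sqlContext <- sparkRSQL.init(sc)

# Convert a local R data.frame into a SparkR DataFrame.
localDF <- data.frame(name = c("John", "Smith", "Sarah"), age = c(19, 23, 18))
df <- createDataFrame(sqlContext, localDF)
printSchema(df)

# Register the DataFrame as a temporary table and query it with SQL.
registerTempTable(df, "people")
teenagers <- sql(sqlContext, "SELECT name FROM people WHERE age >= 13 AND age <= 19")
head(teenagers)

sparkR.stop()
```

A sketch like this needs no command-line arguments, which is consistent with the diff above: the old `pi.R` invocations carried `local[2]` and `4`, and both `+` lines drop those arguments when switching to `dataframe.R`.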