author    Sean Owen <sowen@cloudera.com>  2016-09-14 10:10:16 +0100
committer Sean Owen <sowen@cloudera.com>  2016-09-14 10:10:16 +0100
commit    dc0a4c916151c795dc41b5714e9d23b4937f4636 (patch)
tree      1ac43c7e2dafb07acc932d90ac050da6c81d414b /docs/sparkr.md
parent    4cea9da2ae88b40a5503111f8f37051e2372163e (diff)
[SPARK-17445][DOCS] Reference an ASF page as the main place to find third-party packages
## What changes were proposed in this pull request?

Point references to spark-packages.org to https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects

This will be accompanied by a parallel change to the spark-website repo, and additional changes to this wiki.

## How was this patch tested?

Jenkins tests.

Author: Sean Owen <sowen@cloudera.com>

Closes #15075 from srowen/SPARK-17445.
Diffstat (limited to 'docs/sparkr.md')
-rw-r--r--  docs/sparkr.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/docs/sparkr.md b/docs/sparkr.md
index 4bbc362c52..b881119731 100644
--- a/docs/sparkr.md
+++ b/docs/sparkr.md
@@ -110,7 +110,8 @@ head(df)
SparkR supports operating on a variety of data sources through the `SparkDataFrame` interface. This section describes the general methods for loading and saving data using Data Sources. You can check the Spark SQL programming guide for more [specific options](sql-programming-guide.html#manually-specifying-options) that are available for the built-in data sources.
-The general method for creating SparkDataFrames from data sources is `read.df`. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically. SparkR supports reading JSON, CSV and Parquet files natively and through [Spark Packages](http://spark-packages.org/) you can find data source connectors for popular file formats like [Avro](http://spark-packages.org/package/databricks/spark-avro). These packages can either be added by
+The general method for creating SparkDataFrames from data sources is `read.df`. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically.
+SparkR supports reading JSON, CSV and Parquet files natively, and through packages available from sources like [Third Party Projects](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects), you can find data source connectors for popular file formats like Avro. These packages can either be added by
specifying `--packages` with `spark-submit` or `sparkR` commands, or by passing the `sparkPackages` parameter when initializing a SparkSession in an interactive R shell or from RStudio.
<div data-lang="r" markdown="1">
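For reference, a minimal sketch of the two approaches the changed passage describes. The Avro package coordinates and the file path below are illustrative assumptions, not part of this commit:

```r
# 1) On the command line, a package can be added via --packages, e.g.:
#    ./bin/sparkR --packages com.databricks:spark-avro_2.11:3.0.0
#    (package coordinates are illustrative only)

# 2) In an interactive R shell or RStudio, pass sparkPackages when
#    initializing the session:
library(SparkR)
sparkR.session(sparkPackages = "com.databricks:spark-avro_2.11:3.0.0")

# read.df takes the path of the file to load and the data source type;
# the currently active SparkSession is used automatically.
df <- read.df("examples/src/main/resources/people.json", source = "json")
head(df)
```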