author     Sean Owen <sowen@cloudera.com>  2016-09-14 10:10:16 +0100
committer  Sean Owen <sowen@cloudera.com>  2016-09-14 10:10:16 +0100
commit     dc0a4c916151c795dc41b5714e9d23b4937f4636 (patch)
tree       1ac43c7e2dafb07acc932d90ac050da6c81d414b /docs
parent     4cea9da2ae88b40a5503111f8f37051e2372163e (diff)
[SPARK-17445][DOCS] Reference an ASF page as the main place to find third-party packages
## What changes were proposed in this pull request?

Point references to spark-packages.org to https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects

This will be accompanied by a parallel change to the spark-website repo, and additional changes to this wiki.

## How was this patch tested?

Jenkins tests.

Author: Sean Owen <sowen@cloudera.com>

Closes #15075 from srowen/SPARK-17445.
Diffstat (limited to 'docs')
-rwxr-xr-x  docs/_layouts/global.html            2
-rw-r--r--  docs/index.md                        2
-rw-r--r--  docs/sparkr.md                       3
-rw-r--r--  docs/streaming-programming-guide.md  2

4 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html
index d3bf082aa7..ad5b5c9adf 100755
--- a/docs/_layouts/global.html
+++ b/docs/_layouts/global.html
@@ -114,7 +114,7 @@
<li class="divider"></li>
<li><a href="building-spark.html">Building Spark</a></li>
<li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
- <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Supplemental+Spark+Projects">Supplemental Projects</a></li>
+ <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects">Third Party Projects</a></li>
</ul>
</li>
</ul>
diff --git a/docs/index.md b/docs/index.md
index 0cb8803783..a7a92f6c4f 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -120,7 +120,7 @@ options for deployment:
* [OpenStack Swift](storage-openstack-swift.html)
* [Building Spark](building-spark.html): build Spark using the Maven system
* [Contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark)
-* [Supplemental Projects](https://cwiki.apache.org/confluence/display/SPARK/Supplemental+Spark+Projects): related third party Spark projects
+* [Third Party Projects](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects): related third party Spark projects
**External Resources:**
diff --git a/docs/sparkr.md b/docs/sparkr.md
index 4bbc362c52..b881119731 100644
--- a/docs/sparkr.md
+++ b/docs/sparkr.md
@@ -110,7 +110,8 @@ head(df)
SparkR supports operating on a variety of data sources through the `SparkDataFrame` interface. This section describes the general methods for loading and saving data using Data Sources. You can check the Spark SQL programming guide for more [specific options](sql-programming-guide.html#manually-specifying-options) that are available for the built-in data sources.
-The general method for creating SparkDataFrames from data sources is `read.df`. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically. SparkR supports reading JSON, CSV and Parquet files natively and through [Spark Packages](http://spark-packages.org/) you can find data source connectors for popular file formats like [Avro](http://spark-packages.org/package/databricks/spark-avro). These packages can either be added by
+The general method for creating SparkDataFrames from data sources is `read.df`. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically.
+SparkR supports reading JSON, CSV and Parquet files natively, and through packages available from sources like [Third Party Projects](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects), you can find data source connectors for popular file formats like Avro. These packages can either be added by
specifying `--packages` with `spark-submit` or `sparkR` commands, or if initializing SparkSession with `sparkPackages` parameter when in an interactive R shell or from RStudio.
<div data-lang="r" markdown="1">
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 5392b4a9bc..43f1cf3e31 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -2382,7 +2382,7 @@ additional effort may be necessary to achieve exactly-once semantics. There are
- [Kafka Integration Guide](streaming-kafka-integration.html)
- [Kinesis Integration Guide](streaming-kinesis-integration.html)
- [Custom Receiver Guide](streaming-custom-receivers.html)
-* Third-party DStream data sources can be found in [Spark Packages](https://spark-packages.org/)
+* Third-party DStream data sources can be found in [Third Party Projects](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects)
* API documentation
- Scala docs
* [StreamingContext](api/scala/index.html#org.apache.spark.streaming.StreamingContext) and
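The sparkr.md hunk above describes two ways to attach third-party data source packages: passing `--packages` to the `spark-submit` or `sparkR` commands, or setting the `sparkPackages` parameter when initializing the SparkSession. A minimal command-line sketch of the first form; the Avro connector coordinates below are illustrative only (echoing the spark-avro package the removed spark-packages.org link pointed at), and the exact version is an assumption:

```shell
# Launch an interactive SparkR shell with a third-party connector attached.
# Coordinates are illustrative; substitute the connector and version you need.
./bin/sparkR --packages com.databricks:spark-avro_2.11:3.0.0

# The same flag works for batch jobs submitted via spark-submit.
./bin/spark-submit --packages com.databricks:spark-avro_2.11:3.0.0 my_script.R
```

From an interactive R session or RStudio, the equivalent is passing the package coordinates via the `sparkPackages` argument when creating the SparkSession, as the edited sparkr.md text notes.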