Diffstat (limited to 'docs')
-rwxr-xr-x  docs/_layouts/global.html            4
-rw-r--r--  docs/building-spark.md               4
-rw-r--r--  docs/contributing-to-spark.md        2
-rw-r--r--  docs/index.md                        4
-rw-r--r--  docs/sparkr.md                       2
-rw-r--r--  docs/streaming-programming-guide.md  2
6 files changed, 9 insertions, 9 deletions
diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html
index ad5b5c9adf..c00d0db63c 100755
--- a/docs/_layouts/global.html
+++ b/docs/_layouts/global.html
@@ -113,8 +113,8 @@
<li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
<li class="divider"></li>
<li><a href="building-spark.html">Building Spark</a></li>
- <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
- <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects">Third Party Projects</a></li>
+ <li><a href="http://spark.apache.org/contributing.html">Contributing to Spark</a></li>
+ <li><a href="http://spark.apache.org/third-party-projects.html">Third Party Projects</a></li>
</ul>
</li>
</ul>
diff --git a/docs/building-spark.md b/docs/building-spark.md
index 88da0cc9c3..65c2895b29 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -197,7 +197,7 @@ can be set to control the SBT build. For example:
To avoid the overhead of launching sbt each time you need to re-compile, you can launch sbt
in interactive mode by running `build/sbt`, and then run all build commands at the command
prompt. For more recommendations on reducing build time, refer to the
-[wiki page](https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-ReducingBuildTimes).
+[Useful Developer Tools page](http://spark.apache.org/developer-tools.html).
## Encrypted Filesystems
@@ -215,7 +215,7 @@ to the `sharedSettings` val. See also [this PR](https://github.com/apache/spark/
## IntelliJ IDEA or Eclipse
For help in setting up IntelliJ IDEA or Eclipse for Spark development, and troubleshooting, refer to the
-[wiki page for IDE setup](https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-IDESetup).
+[Useful Developer Tools page](http://spark.apache.org/developer-tools.html).
# Running Tests
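The interactive sbt workflow that the hunk above describes can be sketched as follows; the commands after the prompt are standard sbt shell commands, shown as comments because they run inside the sbt session rather than in the OS shell (the suite name is illustrative):

```shell
# Launch the sbt shell once from the Spark source root; subsequent
# commands reuse the warm JVM and avoid per-invocation startup cost.
./build/sbt

# Then, at the sbt prompt:
# > compile                               # incremental compile
# > ~compile                              # recompile automatically on file save
# > testOnly org.apache.spark.SomeSuite   # run a single suite (name is illustrative)
```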
diff --git a/docs/contributing-to-spark.md b/docs/contributing-to-spark.md
index ef1b3ad6da..9252545e4a 100644
--- a/docs/contributing-to-spark.md
+++ b/docs/contributing-to-spark.md
@@ -5,4 +5,4 @@ title: Contributing to Spark
The Spark team welcomes all forms of contributions, including bug reports, documentation or patches.
For the newest information on how to contribute to the project, please read the
-[wiki page on contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark).
+[Contributing to Spark guide](http://spark.apache.org/contributing.html).
diff --git a/docs/index.md b/docs/index.md
index 39de11de85..c5d34cb5c4 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -125,8 +125,8 @@ options for deployment:
* Integration with other storage systems:
* [OpenStack Swift](storage-openstack-swift.html)
* [Building Spark](building-spark.html): build Spark using the Maven system
-* [Contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark)
-* [Third Party Projects](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects): related third party Spark projects
+* [Contributing to Spark](http://spark.apache.org/contributing.html)
+* [Third Party Projects](http://spark.apache.org/third-party-projects.html): related third party Spark projects
**External Resources:**
diff --git a/docs/sparkr.md b/docs/sparkr.md
index f30bd4026f..d26949226b 100644
--- a/docs/sparkr.md
+++ b/docs/sparkr.md
@@ -126,7 +126,7 @@ head(df)
SparkR supports operating on a variety of data sources through the `SparkDataFrame` interface. This section describes the general methods for loading and saving data using Data Sources. You can check the Spark SQL programming guide for more [specific options](sql-programming-guide.html#manually-specifying-options) that are available for the built-in data sources.
The general method for creating SparkDataFrames from data sources is `read.df`. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically.
-SparkR supports reading JSON, CSV and Parquet files natively, and through packages available from sources like [Third Party Projects](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects), you can find data source connectors for popular file formats like Avro. These packages can either be added by
+SparkR supports reading JSON, CSV and Parquet files natively, and through packages available from sources like [Third Party Projects](http://spark.apache.org/third-party-projects.html), you can find data source connectors for popular file formats like Avro. These packages can either be added by
specifying `--packages` with the `spark-submit` or `sparkR` commands, or by passing the `sparkPackages` parameter when initializing a SparkSession in an interactive R shell or from RStudio.
<div data-lang="r" markdown="1">
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 18fc1cd934..1fcd198685 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -2382,7 +2382,7 @@ additional effort may be necessary to achieve exactly-once semantics. There are
- [Kafka Integration Guide](streaming-kafka-integration.html)
- [Kinesis Integration Guide](streaming-kinesis-integration.html)
- [Custom Receiver Guide](streaming-custom-receivers.html)
-* Third-party DStream data sources can be found in [Third Party Projects](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects)
+* Third-party DStream data sources can be found in [Third Party Projects](http://spark.apache.org/third-party-projects.html)
* API documentation
- Scala docs
* [StreamingContext](api/scala/index.html#org.apache.spark.streaming.StreamingContext) and