Diffstat (limited to 'docs')
-rw-r--r--  docs/building-spark.md   | 6 +++---
-rw-r--r--  docs/running-on-mesos.md | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/building-spark.md b/docs/building-spark.md
index adf798847c..2c6294133e 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -35,12 +35,12 @@ to the `sharedSettings` val. See also [this PR](https://github.com/apache/spark/
To create a Spark distribution like those distributed by the
[Spark Downloads](http://spark.apache.org/downloads.html) page, and that is laid out so as
-to be runnable, use `make-distribution.sh` in the project root directory. It can be configured
+to be runnable, use `./dev/make-distribution.sh` in the project root directory. It can be configured
with Maven profile settings and so on like the direct Maven build. Example:
- ./make-distribution.sh --name custom-spark --tgz -Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn
+ ./dev/make-distribution.sh --name custom-spark --tgz -Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn
-For more information on usage, run `./make-distribution.sh --help`
+For more information on usage, run `./dev/make-distribution.sh --help`
# Setting up Maven's Memory Usage
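The hunk above only relocates the script into `dev/`; its flags and behavior are unchanged. As a minimal sketch, building a custom distribution with the new path might look like the following, reusing the `--name` and profile values from the example in the diff (the output file name pattern is an assumption based on `--name`):

    # From the root of a Spark source checkout: build a runnable distribution
    # with the profiles shown in the docs above, packaged as a .tgz.
    ./dev/make-distribution.sh --name custom-spark --tgz \
        -Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn

    # The archive is written to the project root; the suffix comes from --name
    # (the exact file name pattern here is an assumption).
    ls spark-*-bin-custom-spark.tgz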
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 9816d030e9..912a010812 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -98,10 +98,10 @@ To host on HDFS, use the Hadoop fs put command: `hadoop fs -put spark-{{site.SPA
Or if you are using a custom-compiled version of Spark, you will need to create a package using
-the `make-distribution.sh` script included in a Spark source tarball/checkout.
+the `dev/make-distribution.sh` script included in a Spark source tarball/checkout.
1. Download and build Spark using the instructions [here](index.html)
-2. Create a binary package using `make-distribution.sh --tgz`.
+2. Create a binary package using `./dev/make-distribution.sh --tgz`.
3. Upload archive to http/s3/hdfs
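Taken together, the three Mesos steps updated above might look like this sketch, assuming an HDFS destination (the `/spark/` path and namenode URI are hypothetical):

    # 1. Build a binary package from a source checkout, using the new script path.
    ./dev/make-distribution.sh --tgz

    # 2. Upload the resulting archive to HDFS so Mesos executors can fetch it
    #    (the destination path here is illustrative).
    hadoop fs -put spark-*.tgz /spark/spark-dist.tgz

    # 3. Point executors at the uploaded archive, e.g. in conf/spark-defaults.conf:
    #    spark.executor.uri  hdfs://namenode:8020/spark/spark-dist.tgz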