| author | Matei Zaharia <matei@eecs.berkeley.edu> | 2013-08-23 23:30:17 -0700 |
|---|---|---|
| committer | Matei Zaharia <matei@eecs.berkeley.edu> | 2013-08-29 21:19:04 -0700 |
| commit | 53cd50c0699efc8733518658100c62426b425de2 (patch) | |
| tree | 334e1924a46f7faafe680f46d910ce3e6ac5edc6 /docs/spark-standalone.md | |
| parent | abdbacf2521ec40ee03ecc8e1aae8823013f24f1 (diff) | |
Change build and run instructions to use assemblies
This commit makes Spark invocation saner by using an assembly JAR to
find all of Spark's dependencies instead of adding all the JARs in
lib_managed. It also packages the examples into an assembly and uses
that as SPARK_EXAMPLES_JAR. Finally, it replaces the old "run" script
with two better-named scripts: "run-examples" for examples, and
"spark-class" for Spark internal classes (e.g. the REPL, master, etc.). This
is also designed to minimize the confusion people have when trying to use
"run" to launch their own classes: it was never meant for that, but now, at
least, if they look at it, they can modify run-examples to do a decent job
for them.
As part of this, Bagel's examples are also now properly moved to the
examples package instead of bagel.
Diffstat (limited to 'docs/spark-standalone.md')
| mode | path | changes |
|---|---|---|
| -rw-r--r-- | docs/spark-standalone.md | 4 |

1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 7463844a4e..bb8be276c5 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -20,7 +20,7 @@ Compile Spark with `sbt package` as described in the [Getting Started Guide](ind
 You can start a standalone master server by executing:
 
-    ./run spark.deploy.master.Master
+    ./spark-class spark.deploy.master.Master
 
 Once started, the master will print out a `spark://IP:PORT` URL for itself, which
 you can use to connect workers to it, or pass as the "master" argument to
 `SparkContext` to connect a job to the cluster. You can also find this URL on
@@ -28,7 +28,7 @@ the master's web UI, which is [http://localhost:8080](http://localhost:8080) by
 Similarly, you can start one or more workers and connect them to the master via:
 
-    ./run spark.deploy.worker.Worker spark://IP:PORT
+    ./spark-class spark.deploy.worker.Worker spark://IP:PORT
 
 Once you have started a worker, look at the master's web UI ([http://localhost:8080](http://localhost:8080)
 by default). You should see the new node listed there, along with its number of
 CPUs and memory (minus one gigabyte left for the OS).
```
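The two commands in the patched docs are wired together by the `spark://IP:PORT` URL that the master prints on startup. A minimal sketch of that wiring, assuming a built Spark checkout where `spark-class` exists; the host and port values here are illustrative (7077 is the conventional default master port), and the actual values come from whatever URL your master prints:

```shell
# Sketch only: MASTER_HOST and MASTER_PORT are placeholders for the values
# your master actually prints on startup; they are not read from anywhere.
MASTER_HOST=localhost
MASTER_PORT=7077

# Build the connection URL in the spark://IP:PORT form the docs describe.
MASTER_URL="spark://${MASTER_HOST}:${MASTER_PORT}"
echo "Worker would connect to: ${MASTER_URL}"

# With a real Spark checkout, the two commands from the docs would then be
# (commented out here, since they need the assembly JAR to be built):
#   ./spark-class spark.deploy.master.Master
#   ./spark-class spark.deploy.worker.Worker "${MASTER_URL}"
```

The same URL can also be passed as the "master" argument to `SparkContext`, as the surrounding context lines in the diff note.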