From bddf1356708f42352934313c82f48dbce0056a68 Mon Sep 17 00:00:00 2001
From: Patrick Wendell
Date: Tue, 10 Sep 2013 23:12:27 -0700
Subject: Change port from 3030 to 4040

---
 docs/cluster-overview.md      | 4 ++--
 docs/configuration.md         | 2 +-
 docs/hardware-provisioning.md | 4 ++--
 docs/monitoring.md            | 6 +++---
 4 files changed, 8 insertions(+), 8 deletions(-)

(limited to 'docs')

diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index 7025c23657..f679cad713 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -59,8 +59,8 @@ and `addFile`.
 
 # Monitoring
 
-Each driver program has a web UI, typically on port 3030, that displays information about running
-tasks, executors, and storage usage. Simply go to `http://:3030` in a web browser to
+Each driver program has a web UI, typically on port 4040, that displays information about running
+tasks, executors, and storage usage. Simply go to `http://:4040` in a web browser to
 access this UI. The [monitoring guide](monitoring.html) also describes other monitoring options.
 
 # Job Scheduling
diff --git a/docs/configuration.md b/docs/configuration.md
index d4f85538b2..7940d41a27 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -111,7 +111,7 @@ Apart from these, the following properties are also available, and may be useful
 <tr>
   <td>spark.ui.port</td>
-  <td>3030</td>
+  <td>4040</td>
   <td>
     Port for your application's dashboard, which shows memory and workload data
   </td>
 </tr>
diff --git a/docs/hardware-provisioning.md b/docs/hardware-provisioning.md
index e5f054cb14..790220500a 100644
--- a/docs/hardware-provisioning.md
+++ b/docs/hardware-provisioning.md
@@ -43,7 +43,7 @@ rest for the operating system and buffer cache.
 
 How much memory you will need will depend on your application. To determine how much your
 application uses for a certain dataset size, load part of your dataset in a Spark RDD and use the
-Storage tab of Spark's monitoring UI (`http://:3030`) to see its size in memory.
+Storage tab of Spark's monitoring UI (`http://:4040`) to see its size in memory.
 Note that memory usage is greatly affected by storage level and serialization format -- see the
 [tuning guide](tuning.html) for tips on how to reduce it.
 
@@ -59,7 +59,7 @@ In our experience, when the data is in memory, a lot of Spark applications are n
 
 Using a **10 Gigabit** or higher network is the best way to make these applications faster.
 This is especially true for "distributed reduce" applications such as group-bys, reduce-bys, and
 SQL joins. In any given application, you can see how much data Spark shuffles across the network
-from the application's monitoring UI (`http://:3030`).
+from the application's monitoring UI (`http://:4040`).
 
 # CPU Cores
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 0e3606f71a..5f456b999b 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -7,7 +7,7 @@ There are several ways to monitor Spark applications.
 
 # Web Interfaces
 
-Every SparkContext launches a web UI, by default on port 3030, that
+Every SparkContext launches a web UI, by default on port 4040, that
 displays useful information about the application. This includes:
 
 * A list of scheduler stages and tasks
@@ -15,9 +15,9 @@ displays useful information about the application. This includes:
 * Information about the running executors
 * Environmental information.
 
-You can access this interface by simply opening `http://:3030` in a web browser.
+You can access this interface by simply opening `http://:4040` in a web browser.
 If multiple SparkContexts are running on the same host, they will bind to succesive ports
-beginning with 3030 (3031, 3032, etc).
+beginning with 4040 (4041, 4042, etc).
 
 Spark's Standlone Mode cluster manager also has its own
 [web UI](spark-standalone.html#monitoring-and-logging).
-- 
cgit v1.2.3
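The successive-port behavior this patch documents in monitoring.md (additional SparkContexts on the same host binding to 4041, 4042, and so on) can be sketched as a simple probe loop. This is an illustrative model only, not Spark's actual code; `next_ui_port` and its parameters are hypothetical names:

```python
def next_ui_port(taken_ports, base=4040):
    """Return the first port at or above `base` that is not already taken.

    Mirrors the behavior described in monitoring.md: the first
    SparkContext's UI binds to 4040, and each additional context on the
    same host falls back to the next successive port (4041, 4042, ...).
    """
    port = base
    while port in taken_ports:
        port += 1
    return port

# First context gets 4040; with 4040 and 4041 taken, the next gets 4042.
```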