| Commit message | Author | Age | Files | Lines |
|
DAG but it doesn't show
When a user clicks more than once on any stage in the DAG graph on the *Job* web UI page, multiple new *Stage* web UI pages are opened, but only half of their DAG graphs are expanded.
After this PR's fix, every newly opened *Stage* page's DAG graph is expanded.
Before:
![](https://cloud.githubusercontent.com/assets/15843379/13279144/74808e86-db10-11e5-8514-cecf31af8908.png)
After:
![](https://cloud.githubusercontent.com/assets/15843379/13279145/77ca5dec-db10-11e5-9457-8e1985461328.png)
## What changes were proposed in this pull request?
- Removed the `expandDagViz` parameter for the _Stage_ page and related code
- Added an `onclick` function that sets `expandDagVizArrowKey(false)` to `true`
## How was this patch tested?
Manual tests (with this fix) verified that the fix works:
- clicked many times on the _Job_ page's DAG graph → each newly opened Stage page's DAG graph is expanded

Manual tests (with this fix) verified that the fix does not break existing behavior:
- refreshed the same _Stage_ page (whose DAG was already expanded) many times → the DAG remained expanded upon every refresh
- refreshed the same _Stage_ page (whose DAG was unexpanded) many times → the DAG remained unexpanded upon every refresh
- refreshed the same _Job_ page (whose DAG was already expanded) many times → the DAG remained expanded upon every refresh
- refreshed the same _Job_ page (whose DAG was unexpanded) many times → the DAG remained unexpanded upon every refresh
Author: Liwei Lin <proflin.me@gmail.com>
Closes #11368 from proflin/SPARK-13468.
|
This reverts commit 2e44031fafdb8cf486573b98e4faa6b31ffb90a4.
|
Fixed the HTTP Server Host Name/IP issue i.e. HTTP Server to take the
configured host name/IP and not '0.0.0.0' always.
Author: Devaraj K <devaraj@apache.org>
Closes #11133 from devaraj-kavali/SPARK-13117.
|
## What changes were proposed in this pull request?
When we pass a Python function to the JVM side, we also need to send its context, e.g. `envVars`, `pythonIncludes`, `pythonExec`, etc. However, it's annoying to pass around so many parameters in many places. This PR abstracts the Python function along with its context, to simplify some PySpark code and make the logic clearer.
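A minimal Scala sketch of such a wrapper, with hypothetical field names (the exact shape in the PR may differ):

```scala
// Hypothetical sketch: bundle a pickled Python function with the context
// it needs, so callers pass one object instead of many loose parameters.
case class PythonFunction(
    command: Array[Byte],                    // the pickled function
    envVars: java.util.Map[String, String],  // environment for the worker
    pythonIncludes: java.util.List[String],  // files added via sc.addPyFile
    pythonExec: String,                      // python binary to launch
    pythonVer: String)                       // python version string
```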
## How was this patch tested?
By existing unit tests.
Author: Wenchen Fan <wenchen@databricks.com>
Closes #11342 from cloud-fan/python-clean.
|
for spark to start]
Added an exception to be thrown in UnifiedMemoryManager.scala if the configuration given for executor memory is too low. Also modified the exception message thrown when driver memory is too low.
This patch was tested manually by passing in config options to Spark shell. I also added a test in UnifiedMemoryManagerSuite.scala
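A hedged Scala sketch of the fail-fast idea; the helper name and constants are illustrative, not the exact code in UnifiedMemoryManager.scala:

```scala
import org.apache.spark.SparkConf

// Reject executor memory below an assumed reserved-memory floor instead of
// letting the executor fail obscurely later.
def validateExecutorMemory(conf: SparkConf): Unit = {
  val reservedMemory = 300L * 1024 * 1024       // assumed reserved system memory
  val minSystemMemory = reservedMemory * 3 / 2  // assumed minimum to start
  val executorMemory = conf.getSizeAsBytes("spark.executor.memory", "1g")
  if (executorMemory < minSystemMemory) {
    throw new IllegalArgumentException(
      s"Executor memory $executorMemory must be at least $minSystemMemory. " +
        "Please increase executor memory using the --executor-memory option " +
        "or spark.executor.memory in Spark configuration.")
  }
}
```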
Author: Daniel Jalova <djalova@us.ibm.com>
Closes #11255 from djalova/SPARK-12759.
|
## What changes were proposed in this pull request?
Generates code for SortMergeJoin.
## How was this patch tested?
Unit tests, plus manual testing with TPC-DS Q72, which showed a 70% performance improvement (from 42s to 25s); micro benchmarks show only minor improvements, which may depend on the data distribution and the number of columns.
Author: Davies Liu <davies@databricks.com>
Closes #11248 from davies/gen_smj.
|
Executor Tab
andrewor14 squito Dead executors should also be displayed on the Executor Tab, as shown below:
![image](https://cloud.githubusercontent.com/assets/545478/11492707/ae55d7f6-982b-11e5-919a-b62cd84684b2.png)
Author: Lianhui Wang <lianhuiwang09@gmail.com>
This patch had conflicts when merged, resolved by
Committer: Andrew Or <andrew@databricks.com>
Closes #10058 from lianhuiwang/SPARK-7729.
|
## What changes were proposed in this pull request?
The History page currently sorts the appID as a string, which leads to unexpected ordering for cases like "application_11111_9" vs "application_11111_20": string comparison places the latter first, even though it is the newer application.
Adding a new sort type called `appId-numeric` fixes this, as sketched below.
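The sorter itself lives in the page's JavaScript, but the comparison logic can be sketched in Scala (helper name hypothetical):

```scala
// Split an id like "application_11111_20" into a (prefix, numeric suffix)
// key so the trailing counter is compared as a number, not as a string.
def appIdKey(appId: String): (String, Long) = {
  val idx = appId.lastIndexOf('_')
  val suffix = if (idx >= 0) appId.drop(idx + 1) else ""
  if (suffix.nonEmpty && suffix.forall(_.isDigit)) (appId.take(idx), suffix.toLong)
  else (appId, -1L)
}

// "application_11111_9" now sorts before "application_11111_20".
Seq("application_11111_20", "application_11111_9").sortBy(appIdKey)
```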
## How was this patch tested?
This patch was manually tested with UI. See the screenshot below:
![sortappidbetter](https://cloud.githubusercontent.com/assets/11683054/13185564/7f941a16-d707-11e5-8fb7-0316368d3030.png)
Author: zhuol <zhuol@yahoo-inc.com>
Closes #11259 from zhuoliu/13364.
|
JIRA: https://issues.apache.org/jira/browse/SPARK-13358
When trying to run a benchmark, I found that on my Ubuntu Linux machine grep is not in /usr/bin/ but in /bin/, so it seems better to use `which` to retrieve the grep path.
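A hedged Scala sketch of the idea:

```scala
import scala.sys.process._

// Resolve grep through `which` instead of hard-coding /usr/bin/grep,
// since some distributions install it under /bin.
val grepPath = Seq("which", "grep").!!.trim
```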
cc davies
Author: Liang-Chi Hsieh <viirya@gmail.com>
Closes #11231 from viirya/benchmark-grep-path.
|
Author: jerryshao <sshao@hortonworks.com>
Closes #11229 from jerryshao/SPARK-13220.
|
special character
## What changes were proposed in this pull request?
When there are special characters (e.g., `"`, `\`) in `label`, the DAG will be broken. This patch escapes `label` to avoid the DAG being broken by such characters.
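A minimal Scala sketch of the escaping idea (the real fix targets the DAG visualization's label handling; this helper is illustrative):

```scala
// Backslash-escape the characters that would otherwise terminate or
// corrupt the label string in the rendered graph.
def escapeLabel(label: String): String = label.flatMap {
  case '"'  => "\\\""
  case '\\' => "\\\\"
  case c    => c.toString
}
```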
## How was this patch tested?
Jenkins tests
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #11309 from zsxwing/SPARK-13298.
|
## What changes were proposed in this pull request?
This patch removes `SparkContext.metricsSystem`, which returns `MetricsSystem`, a private class; it was most likely exposed by accident.
In addition, I also removed an unused `private[spark]` setter for `schedulerBackend`.
## How was this patch tested?
N/A.
Author: Reynold Xin <rxin@databricks.com>
This patch had conflicts when merged, resolved by
Committer: Josh Rosen <joshrosen@databricks.com>
Closes #11282 from rxin/SPARK-13413.
|
Currently the Mesos cluster dispatcher does not use offers from multiple roles correctly: it simply aggregates all the offers' resource values into one, without applying them correctly before launching the driver, since Mesos requires each resource to specify which role's offer it originally came from. Multiple roles are already supported by the fine/coarse-grained schedulers, so this ports that logic to the cluster scheduler.
https://issues.apache.org/jira/browse/SPARK-10749
Author: Timothy Chen <tnachen@gmail.com>
Closes #8872 from tnachen/cluster_multi_roles.
|
in other comments
## What changes were proposed in this pull request?
This PR tries to fix all typos in all markdown files under `docs` module,
and fixes similar typos in other comments, too.
## How was this patch tested?
Manual tests.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #11300 from dongjoon-hyun/minor_fix_typos.
|
## What changes were proposed in this pull request?
This PR removes support for SIMR, since SIMR has not been actively used or maintained for a long time and is not supported by `SparkSubmit`.
## How was this patch tested?
This patch is tested locally by running unit tests.
Author: jerryshao <sshao@hortonworks.com>
Closes #11296 from jerryshao/SPARK-13426.
|
## What changes were proposed in this pull request?
`JobWaiter.taskSucceeded` will be called for each task. When `resultHandler` throws an exception, `taskSucceeded` will also throw it for each task. DAGScheduler just catches it and reports it like this:
```Scala
try {
job.listener.taskSucceeded(rt.outputId, event.result)
} catch {
case e: Exception =>
// TODO: Perhaps we want to mark the resultStage as failed?
job.listener.jobFailed(new SparkDriverExecutionException(e))
}
```
Therefore `JobWaiter.jobFailed` may be called multiple times.
So `JobWaiter.jobFailed` should use `Promise.tryFailure` instead of `Promise.failure`, because the latter does not support being called multiple times.
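For reference, a minimal illustration of the difference:

```scala
import scala.concurrent.Promise

val p = Promise[Unit]()
p.tryFailure(new RuntimeException("first"))   // true: promise is now failed
p.tryFailure(new RuntimeException("second"))  // false: already completed, no throw
p.failure(new RuntimeException("third"))      // throws IllegalStateException
```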
## How was this patch tested?
Jenkins tests.
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #11280 from zsxwing/SPARK-13408.
|
TaskMetrics.fromAccumulatorUpdates
`TaskMetrics.fromAccumulatorUpdates()` can fail if accumulators have been garbage-collected on the driver. To guard against this, this patch introduces `ListenerTaskMetrics`, a subclass of `TaskMetrics` which is used only in `TaskMetrics.fromAccumulatorUpdates()` and which eliminates the need to access the original accumulators on the driver.
Author: Josh Rosen <joshrosen@databricks.com>
Closes #11276 from JoshRosen/accum-updates-fix.
|
for reduce, fold
Clarify that reduce functions need to be commutative, and fold functions do not
See https://github.com/apache/spark/pull/11091
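A small illustration of why the distinction matters, assuming an active SparkContext `sc`:

```scala
// Partition results may be combined in a non-deterministic order, so a
// non-commutative function gives partition-dependent answers.
val rdd = sc.parallelize(1 to 4, 2)
rdd.reduce(_ + _)  // always 10: + is commutative and associative
rdd.reduce(_ - _)  // unreliable: - is neither, so results can vary
```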
Author: Sean Owen <sowen@cloudera.com>
Closes #11217 from srowen/SPARK-13339.
|
Option and String directly.
## What changes were proposed in this pull request?
Fix some comparisons between unequal types that cause IntelliJ warnings and, in at least one case (TaskSetManager), a likely bug.
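An example of the problematic pattern:

```scala
val name: Option[String] = Some("foo")
name == "foo"         // compiles, but is always false: Option vs String
name == Some("foo")   // true: compare like types
name.contains("foo")  // true: the Scala 2.11+ idiom
```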
## How was this patch tested?
Running Jenkins tests
Author: Sean Owen <sowen@cloudera.com>
Closes #11253 from srowen/SPARK-13371.
|
See [JIRA](https://issues.apache.org/jira/browse/SPARK-13344) for more detail. This was caused by #10835.
Author: Andrew Or <andrew@databricks.com>
Closes #11222 from andrewor14/fix-test-accum-exceptions.
|
This commit removes an unnecessary duplicate check in addPendingTask that meant
that scheduling a task set took time proportional to (# tasks)^2.
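A hedged sketch of the quadratic pattern being removed (assuming, as the description implies, that duplicates are tolerated and filtered when tasks are dequeued):

```scala
import scala.collection.mutable.ArrayBuffer

val pendingTasks = ArrayBuffer[Int]()

// Before: a linear membership scan per insert makes n inserts O(n^2).
def addPendingTaskSlow(index: Int): Unit = {
  if (!pendingTasks.contains(index)) {  // O(n) scan
    pendingTasks += index
  }
}

// After: each insert is O(1); duplicates are dropped at dequeue time.
def addPendingTaskFast(index: Int): Unit = {
  pendingTasks += index
}
```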
Author: Sital Kedia <skedia@fb.com>
Closes #11175 from sitalkedia/fix_stuck_driver.
|
See http://openjdk.java.net/jeps/223 for more information about the JDK 9 version string scheme.
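A hedged sketch of version parsing that copes with both schemes:

```scala
// "1.8.0_66" (pre-JEP 223) and "9-ea" / "9.0.1" (JEP 223) both parse.
def majorVersion(javaVersion: String): Int = {
  val numeric = javaVersion.takeWhile(c => c.isDigit || c == '.')
  val parts = numeric.split('.')
  if (parts.head == "1") parts(1).toInt else parts.head.toInt
}

majorVersion("1.8.0_66")  // 8
majorVersion("9-ea")      // 9
```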
Author: Claes Redestad <claes.redestad@gmail.com>
Closes #11160 from cl4es/master.
|
Replace `getStackTraceString` with `Utils.exceptionString`
Author: Sean Owen <sowen@cloudera.com>
Closes #11182 from srowen/SPARK-13172.
|
Due to being on a Windows platform I have been unable to run the tests as described in the "Contributing to Spark" instructions. As the change is only to two lines of code in the Web UI, which I have manually built and tested, I am submitting this pull request anyway. I hope this is OK.
Is it worth considering also including this fix in any future 1.5.x releases (if any)?
I confirm this is my own original work and license it to the Spark project under its open source license.
Author: markpavey <mark.pavey@thefilter.com>
Closes #11135 from markpavey/JIRA_SPARK-13142_WindowsWebUILogFix.
|
Overrode the start() method, which was previously starting a thread and causing a race condition. I believe this should fix the flaky test.
Author: Michael Gummelt <mgummelt@mesosphere.io>
Closes #11164 from mgummelt/fix_mesos_tests.
|
andrewor14 This addressed your style comments from #10993
Author: Michael Gummelt <mgummelt@mesosphere.io>
Closes #11187 from mgummelt/fix_mesos_style.
|
This JIRA is related to https://github.com/apache/spark/pull/5852. Some minor rework and testing were needed to make sure it works with the current version of Spark.
Author: Sanket <schintap@untilservice-lm>
Closes #10838 from redsanket/limit-outbound-connections.
|
When the HistoryServer is showing an incomplete app, it needs to check if there is a newer version of the app available. It does this by checking if a version of the app has been loaded with a larger *filesize*. If so, it detaches the current UI, attaches the new one, and redirects back to the same URL to show the new UI.
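A hedged Scala sketch of the staleness check (names hypothetical):

```scala
// An incomplete app's UI is stale once a larger event log has been seen;
// the server then detaches it and loads the newer data.
case class LoadedAppUI(appId: String, completed: Boolean, loadedFileSize: Long)

def needsRefresh(ui: LoadedAppUI, latestFileSize: Long): Boolean =
  !ui.completed && latestFileSize > ui.loadedFileSize
```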
https://issues.apache.org/jira/browse/SPARK-7889
Author: Steve Loughran <stevel@hortonworks.com>
Author: Imran Rashid <irashid@cloudera.com>
Closes #11118 from squito/SPARK-7889-alternate.
|
This reverts commit 50fa6fd1b365d5db7e2b2c59624a365cef0d1696.
|
This commit removes an unnecessary duplicate check in addPendingTask that meant
that scheduling a task set took time proportional to (# tasks)^2.
Author: Sital Kedia <skedia@fb.com>
Closes #11167 from sitalkedia/fix_stuck_driver and squashes the following commits:
3fe1af8 [Sital Kedia] [SPARK-13279] Remove unnecessary duplicate check in addPendingTask function
|
DataTables
Made sure the old tables continue to use the old CSS and the new DataTables use the new CSS. Also fixed it so the Safari Web Inspector doesn't throw errors on the new DataTables pages.
Author: Alex Bozarth <ajbozart@us.ibm.com>
Closes #11038 from ajbozarth/spark13124.
|
`getPersistentRDDs()` is a useful SparkContext API for getting cached RDDs; however, JavaSparkContext does not have this API.
This adds a simple `getPersistentRDDs()` that returns a `java.util.Map<Integer, JavaRDD>` for Java users.
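A hedged usage sketch, assuming the signature given in the description:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.api.java.{JavaRDD, JavaSparkContext}

val jsc = new JavaSparkContext(
  new SparkConf().setAppName("demo").setMaster("local[2]"))
val cached = jsc.parallelize(java.util.Arrays.asList(1, 2, 3)).cache()
cached.count()

// The new API mirrors SparkContext.getPersistentRDDs for Java users.
val persistent: java.util.Map[java.lang.Integer, JavaRDD[_]] =
  jsc.getPersistentRDDs
```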
Author: Junyang <fly.shenjy@gmail.com>
Closes #10978 from flyjy/master.
|
Remove spark.closure.serializer option and use JavaSerializer always
CC andrewor14 rxin. I see there's a discussion in the JIRA, but I just thought I'd offer this for a look at what the change would be.
Author: Sean Owen <sowen@cloudera.com>
Closes #11150 from srowen/SPARK-12414.
|
The right margin of the history page is a little bit off. A simple fix for that issue.
Author: zhuol <zhuol@yahoo-inc.com>
Closes #11029 from zhuoliu/13126.
|
getting set correctly
The column width for the new DataTables now adjusts for the current page rather than being hard-coded for the entire table's data.
Author: Alex Bozarth <ajbozart@us.ibm.com>
Closes #11057 from ajbozarth/spark13163.
|
grained mesos mode.
This is the next iteration of tnachen's previous PR: https://github.com/apache/spark/pull/4027
In that PR, we resolved with andrewor14 and pwendell to implement the Mesos scheduler's support of `spark.executor.cores` to be consistent with YARN and Standalone. This PR implements that resolution.
This PR implements two high-level features. The two features are co-dependent, so they're both implemented here (a sizing sketch follows the list):
- Mesos support for spark.executor.cores
- Multiple executors per slave
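A hedged sketch of the sizing arithmetic behind these features; the numbers are illustrative:

```scala
// With spark.executor.cores set, one slave's offer can host several
// executors instead of a single executor consuming every offered core.
val offerCores = 16
val offerMemMb = 32768
val executorCores = 4     // spark.executor.cores
val executorMemMb = 8192  // per-executor memory
val executorsPerSlave =
  math.min(offerCores / executorCores, offerMemMb / executorMemMb)  // 4
```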
We at Mesosphere have been working with Typesafe on a Spark/Mesos integration test suite: https://github.com/typesafehub/mesos-spark-integration-tests, which passes for this PR.
The contribution is my original work and I license the work to the project under the project's open source license.
Author: Michael Gummelt <mgummelt@mesosphere.io>
Closes #10993 from mgummelt/executor_sizing.
|
Make Logging private[spark]. Pretty much all there is to it.
Author: Sean Owen <sowen@cloudera.com>
Closes #11103 from srowen/SPARK-9307.
|
This PR improves the lookup of BytesToBytesMap by:
1. Generating code to calculate the hash code of the grouping keys.
2. Not using MemoryLocation: fetching the baseObject and offset for key and value directly (removing the indirection).
Author: Davies Liu <davies@databricks.com>
Closes #11010 from davies/gen_map.
|
ShuffleBlockFetcherIterator
Call shuffleMetrics' `incRemoteBytesRead` and `incRemoteBlocksFetched` when polling a FetchResult from `results`, so that shuffleMetrics is always used from a single thread.
Also fix a race condition that could cause a memory leak.
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #11138 from zsxwing/SPARK-13245.
|
Adds the benchmark results as comments.
The codegen version is slower than the interpreted version for the `simple` case because of 3 reasons:
1. The codegen version uses a more complex hash algorithm than the interpreted version, i.e. `Murmur3_x86_32.hashInt` vs [simple multiplication and addition](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/rows.scala#L153).
2. The codegen version writes the hash value to a row first and then reads it out. I tried to create a `GenerateHasher` that can generate code to return the hash value directly and got about a 60% speedup for the `simple` case; is it worth it?
3. The row in the `simple` case has only one int field, so the runtime reflection may be removed thanks to branch prediction, which makes the interpreted version faster.
The `array` case is also slow for similar reasons, e.g. the array elements are all of the same type, so the interpreted version can probably get rid of runtime reflection via branch prediction.
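For reference, the "simple multiplication and addition" scheme looks roughly like this (constants illustrative):

```scala
// Interpreted-mode style hash: fold a multiply-and-add over field hashes,
// much cheaper per int field than Murmur3_x86_32.hashInt.
def simpleHash(fieldHashes: Seq[Int]): Int =
  fieldHashes.foldLeft(37)((h, f) => 31 * h + f)
```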
Author: Wenchen Fan <wenchen@databricks.com>
Closes #10917 from cloud-fan/hash-benchmark.
|
Since Spark requires at least JRE 1.7, it is safe to use the built-in java.nio.file.Files.
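A hedged example of the kind of call this enables; whether this exact call site changed is an assumption:

```scala
import java.nio.file.{Files, Paths}

// Create a symlink natively instead of spawning an external "ln -s".
Files.createSymbolicLink(Paths.get("link-name"), Paths.get("target-file"))
```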
Author: Jakob Odersky <jakob@odersky.com>
Closes #11098 from jodersky/SPARK-13176.
|
Additional changes to #10835, mainly related to style and visibility. This patch also adds back a few deprecated methods for backward compatibility.
Author: Andrew Or <andrew@databricks.com>
Closes #10958 from andrewor14/task-metrics-to-accums-followups.
|
There is a bug when we try to grow the buffer: an OOM is wrongly ignored (the assert is also skipped by the JVM), and when we then try to grow the array again, this triggers spilling, which frees the current page, leaving the record we just inserted invalid.
The root cause is that the JVM has less free memory than the MemoryManager thought, so it OOMs when allocating a page without triggering spilling. We should catch the OOM and acquire memory again to trigger spilling.
Also, we should not grow the array in `insertRecord` of `InMemorySorter` (it was there just for easy testing).
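A hedged sketch (helper names hypothetical) of the catch-and-spill pattern described above:

```scala
// The JVM can have less free memory than the MemoryManager tracks, so an
// allocation may throw even though the accounting says it should succeed.
def growArray(newSize: Long,
              allocate: Long => Array[Long],
              spill: () => Unit): Array[Long] = {
  try {
    allocate(newSize)
  } catch {
    case _: OutOfMemoryError =>
      spill()            // free current pages to give memory back to the JVM
      allocate(newSize)  // retry; the spill should have made room
  }
}
```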
Author: Davies Liu <davies@databricks.com>
Closes #11095 from davies/fix_expand.
|
structures
rxin srowen
I worked out a note message for the rdd.take function; please help review.
If it's fine, I can apply it to all the other functions later.
Author: Tommy YU <tummyyu@163.com>
Closes #10874 from Wenpei/spark-5865-add-warning-for-localdatastructure.
|
Trivial search-and-replace to eliminate deprecation warnings in Scala 2.11.
Also works with 2.10.
Author: Jakob Odersky <jakob@odersky.com>
Closes #11085 from jodersky/SPARK-13171.
|
Fix for [SPARK-13002](https://issues.apache.org/jira/browse/SPARK-13002) about the initial number of executors when running with dynamic allocation on Mesos.
Instead of fixing it just for the Mesos case, the change is made in `ExecutorAllocationManager`, which already drives the number of executors running on Mesos, only not the initial value.
The `None` and `Some(0)` are internal details of the computation of resources to reserve in the Mesos backend scheduler. `executorLimitOption` has to be initialized correctly; otherwise the Mesos backend scheduler will either create too many executors at launch, or not create any executors and be unable to recover from this state.
Removed the 'special case' description in the doc. It was not totally accurate, and is not needed anymore.
This doesn't fix the same problem visible with Spark standalone. There is no straightforward way to send the initial value in standalone mode.
Somebody who knows this part of the YARN support should review this change.
Author: Luc Bourlier <luc.bourlier@typesafe.com>
Closes #11047 from skyluc/issue/initial-dyn-alloc-2.
|
Another trivial deprecation fix for Scala 2.11
Author: Jakob Odersky <jakob@odersky.com>
Closes #11089 from jodersky/SPARK-13208.
|
in the WAITING state
Author: Raafat Akkad <raafat.akkad@gmail.com>
Closes #10959 from RaafatAkkad/master.
|