path: root/core/src
Commit message (Author, Date, Files changed, Lines -deleted/+added)
* [SPARK-13278][CORE] Launcher fails to start with JDK 9 EA (Claes Redestad, 2016-02-14, 1 file, -2/+4)
  See http://openjdk.java.net/jeps/223 for more information about the JDK 9 version string scheme.
  Author: Claes Redestad <claes.redestad@gmail.com>
  Closes #11160 from cl4es/master.
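  JEP 223 replaced the legacy version string ("1.8.0_66") with a new scheme ("9-ea", "9.0.1"), which breaks parsers that assume a leading "1.". A minimal sketch of scheme-tolerant parsing, illustrative only and not the launcher's actual code:
  ```scala
  // Extract the major Java version under both the legacy scheme
  // ("1.8.0_66" -> 8) and the JEP 223 scheme ("9-ea", "9.0.1" -> 9).
  object JavaMajorVersion {
    def of(version: String): Int =
      if (version.startsWith("1.")) {
        version.split("\\.")(1).toInt       // legacy: second component is the major
      } else {
        version.takeWhile(_.isDigit).toInt  // JEP 223: leading number is the major
      }
  }

  // e.g. JavaMajorVersion.of(System.getProperty("java.version"))
  ```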
* [SPARK-13172][CORE][SQL] Stop using RichException.getStackTrace it is deprecated (Sean Owen, 2016-02-13, 3 files, -6/+6)
  Replace `getStackTraceString` with `Utils.exceptionString`.
  Author: Sean Owen <sowen@cloudera.com>
  Closes #11182 from srowen/SPARK-13172.
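  For readers unfamiliar with the helper: `Utils.exceptionString` renders a `Throwable`'s stack trace to a `String`. A self-contained sketch of an equivalent, assuming nothing about Spark's actual implementation:
  ```scala
  import java.io.{PrintWriter, StringWriter}

  // Format a stack trace without the deprecated RichException.getStackTraceString.
  def exceptionString(e: Throwable): String = {
    val sw = new StringWriter()
    e.printStackTrace(new PrintWriter(sw, true))
    sw.toString
  }
  ```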
* [SPARK-13142][WEB UI] Problem accessing Web UI /logPage/ on Microsoft Windows (markpavey, 2016-02-13, 1 file, -2/+2)
  Due to being on a Windows platform I have been unable to run the tests as described in the "Contributing to Spark" instructions. As the change is only to two lines of code in the Web UI, which I have manually built and tested, I am submitting this pull request anyway. I hope this is OK. Is it worth considering also including this fix in any future 1.5.x releases (if any)? I confirm this is my own original work and license it to the Spark project under its open source license.
  Author: markpavey <mark.pavey@thefilter.com>
  Closes #11135 from markpavey/JIRA_SPARK-13142_WindowsWebUILogFix.
* [SPARK-5095] remove flaky test (Michael Gummelt, 2016-02-12, 1 file, -0/+5)
  Overrode the start() method, which was previously starting a thread, causing a race condition. I believe this should fix the flaky test.
  Author: Michael Gummelt <mgummelt@mesosphere.io>
  Closes #11164 from mgummelt/fix_mesos_tests.
* [SPARK-5095] Fix style in mesos coarse grained scheduler code (Michael Gummelt, 2016-02-12, 2 files, -10/+12)
  andrewor14: This addressed your style comments from #10993.
  Author: Michael Gummelt <mgummelt@mesosphere.io>
  Closes #11187 from mgummelt/fix_mesos_style.
* [SPARK-6166] Limit number of in flight outbound requests (Sanket, 2016-02-11, 4 files, -15/+39)
  This JIRA is related to https://github.com/apache/spark/pull/5852. Had to do some minor rework and testing to make sure it works with the current version of Spark.
  Author: Sanket <schintap@untilservice-lm>
  Closes #10838 from redsanket/limit-outbound-connections.
* [SPARK-7889][WEBUI] HistoryServer updates UI for incomplete apps (Steve Loughran, 2016-02-11, 8 files, -59/+1596)
  When the HistoryServer is showing an incomplete app, it needs to check if there is a newer version of the app available. It does this by checking if a version of the app has been loaded with a larger *filesize*. If so, it detaches the current UI, attaches the new one, and redirects back to the same URL to show the new UI.
  https://issues.apache.org/jira/browse/SPARK-7889
  Author: Steve Loughran <stevel@hortonworks.com>
  Author: Imran Rashid <irashid@cloudera.com>
  Closes #11118 from squito/SPARK-7889-alternate.
* Revert "[SPARK-13279] Remove O(n^2) operation from scheduler." (Reynold Xin, 2016-02-11, 1 file, -9/+6)
  This reverts commit 50fa6fd1b365d5db7e2b2c59624a365cef0d1696.
* [SPARK-13279] Remove O(n^2) operation from scheduler. (Sital Kedia, 2016-02-11, 1 file, -6/+9)
  This commit removes an unnecessary duplicate check in addPendingTask that meant that scheduling a task set took time proportional to (# tasks)^2.
  Author: Sital Kedia <skedia@fb.com>
  Closes #11167 from sitalkedia/fix_stuck_driver and squashes the following commits:
  3fe1af8 [Sital Kedia] [SPARK-13279] Remove unnecessary duplicate check in addPendingTask function
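  The quadratic cost came from a containment check before every append. A before/after sketch with hypothetical names (the real code lives in `TaskSetManager`):
  ```scala
  import scala.collection.mutable.ArrayBuffer

  // Before: ArrayBuffer.contains is a linear scan, so inserting n pending
  // tasks costs O(n^2) in total.
  def addPendingTaskOld(pending: ArrayBuffer[Int], taskIndex: Int): Unit = {
    if (!pending.contains(taskIndex)) {
      pending += taskIndex
    }
  }

  // After: drop the duplicate check; each insert is O(1). Duplicates are
  // harmless if already-launched tasks are skipped at dequeue time.
  def addPendingTaskNew(pending: ArrayBuffer[Int], taskIndex: Int): Unit = {
    pending += taskIndex
  }
  ```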
* [SPARK-13124][WEB UI] Fixed CSS and JS issues caused by addition of JQuery DataTables (Alex Bozarth, 2016-02-11, 3 files, -14/+20)
  Made sure the old tables continue to use the old css and the new DataTables use the new css. Also fixed it so the Safari Web Inspector doesn't throw errors when on the new DataTables pages.
  Author: Alex Bozarth <ajbozart@us.ibm.com>
  Closes #11038 from ajbozarth/spark13124.
* [SPARK-13074][CORE] Add JavaSparkContext.getPersistentRDDs method (Junyang, 2016-02-11, 2 files, -0/+22)
  "getPersistentRDDs()" is a useful SparkContext API for getting cached RDDs, but JavaSparkContext does not have it. Add a simple getPersistentRDDs() that returns a java.util.Map<Integer, JavaRDD> for Java users.
  Author: Junyang <fly.shenjy@gmail.com>
  Closes #10978 from flyjy/master.
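  A sketch of how the new method might be used from Scala, assuming it mirrors `SparkContext.getPersistentRDDs` as described above:
  ```scala
  import org.apache.spark.api.java.JavaSparkContext

  val jsc = new JavaSparkContext("local[2]", "getPersistentRDDs-example")
  val rdd = jsc.parallelize(java.util.Arrays.asList(1, 2, 3)).cache()
  rdd.count() // materialize the RDD so it is registered as persistent

  val persistent = jsc.getPersistentRDDs() // java.util.Map[Integer, JavaRDD[_]]
  println(s"cached RDDs: ${persistent.size()}")
  jsc.stop()
  ```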
* [SPARK-12414][CORE] Remove closure serializer (Sean Owen, 2016-02-10, 2 files, -5/+3)
  Remove the spark.closure.serializer option and always use JavaSerializer. CC andrewor14 rxin: I see there's a discussion in the JIRA, but I thought I'd offer this for a look at what the change would be.
  Author: Sean Owen <sowen@cloudera.com>
  Closes #11150 from srowen/SPARK-12414.
* [SPARK-13126] fix the right margin of history page. (zhuol, 2016-02-10, 1 file, -1/+1)
  The right margin of the history page is a little bit off. A simple fix for that issue.
  Author: zhuol <zhuol@yahoo-inc.com>
  Closes #11029 from zhuoliu/13126.
* [SPARK-13163][WEB UI] Column width on new History Server DataTables not getting set correctly (Alex Bozarth, 2016-02-10, 1 file, -0/+1)
  The column width for the new DataTables now adjusts for the current page rather than being hard-coded for the entire table's data.
  Author: Alex Bozarth <ajbozart@us.ibm.com>
  Closes #11057 from ajbozarth/spark13163.
* [SPARK-5095][MESOS] Support launching multiple mesos executors in coarse grained mesos mode. (Michael Gummelt, 2016-02-10, 7 files, -267/+506)
  This is the next iteration of tnachen's previous PR: https://github.com/apache/spark/pull/4027. In that PR, we resolved with andrewor14 and pwendell to implement the Mesos scheduler's support of `spark.executor.cores` to be consistent with YARN and Standalone. This PR implements that resolution. It implements two high-level features, which are co-dependent and so are implemented together:
  - Mesos support for spark.executor.cores
  - Multiple executors per slave
  We at Mesosphere have been working with Typesafe on a Spark/Mesos integration test suite, https://github.com/typesafehub/mesos-spark-integration-tests, which passes for this PR. The contribution is my original work and I license the work to the project under the project's open source license.
  Author: Michael Gummelt <mgummelt@mesosphere.io>
  Closes #10993 from mgummelt/executor_sizing.
* [SPARK-9307][CORE][SPARK] Logging: Make it either stable or private (Sean Owen, 2016-02-10, 1 file, -6/+2)
  Make Logging private[spark]. Pretty much all there is to it.
  Author: Sean Owen <sowen@cloudera.com>
  Closes #11103 from srowen/SPARK-9307.
* [SPARK-12950] [SQL] Improve lookup of BytesToBytesMap in aggregate (Davies Liu, 2016-02-09, 2 files, -76/+96)
  This PR improves the lookup of BytesToBytesMap by:
  1. Generating code to calculate the hash code of the grouping keys.
  2. Not using MemoryLocation; fetching the baseObject and offset for key and value directly (removing the indirection).
  Author: Davies Liu <davies@databricks.com>
  Closes #11010 from davies/gen_map.
* [SPARK-13245][CORE] Call shuffleMetrics methods only in one thread for ShuffleBlockFetcherIterator (Shixiong Zhu, 2016-02-09, 1 file, -11/+27)
  Call shuffleMetrics's incRemoteBytesRead and incRemoteBlocksFetched when polling FetchResult from `results`, so that shuffleMetrics is always used from one thread. Also fix a race condition that could cause a memory leak.
  Author: Shixiong Zhu <shixiong@databricks.com>
  Closes #11138 from zsxwing/SPARK-13245.
* [SPARK-12888] [SQL] [FOLLOW-UP] benchmark the new hash expression (Wenchen Fan, 2016-02-09, 1 file, -2/+2)
  Adds the benchmark results as comments. The codegen version is slower than the interpreted version for the `simple` case because of 3 reasons:
  1. The codegen version uses a more complex hash algorithm than the interpreted version, i.e. `Murmur3_x86_32.hashInt` vs [simple multiplication and addition](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/rows.scala#L153).
  2. The codegen version writes the hash value to a row first and then reads it out. I tried to create a `GenerateHasher` that can generate code to return the hash value directly and got about a 60% speedup for the `simple` case; is it worth it?
  3. The row in the `simple` case only has one int field, so the runtime reflection may be removed because of branch prediction, which makes the interpreted version faster.
  The `array` case is also slow for similar reasons, e.g. array elements are of the same type, so the interpreted version can probably get rid of runtime reflection via branch prediction.
  Author: Wenchen Fan <wenchen@databricks.com>
  Closes #10917 from cloud-fan/hash-benchmark.
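  To make reason 1 concrete, here is the shape of the two hash styles being compared. This is a sketch: the simple form mirrors the multiply-and-add row hash linked above, and the constants are the standard Murmur3_x86_32 mixing constants (final avalanche step omitted):
  ```scala
  import java.lang.Integer.rotateLeft

  // Interpreted-style hash: simple multiplication and addition per field.
  def simpleHash(fields: Array[Int]): Int =
    fields.foldLeft(37)((h, f) => 31 * h + f)

  // Murmur3-style mixing for one int field, as used by the codegen path.
  def murmur3MixInt(input: Int, seed: Int): Int = {
    var k = input * 0xcc9e2d51
    k = rotateLeft(k, 15) * 0x1b873593
    var h = seed ^ k
    h = rotateLeft(h, 13) * 5 + 0xe6546b64
    h
  }
  ```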
* [SPARK-13176][CORE] Use native file linking instead of external process ln (Jakob Odersky, 2016-02-09, 1 file, -19/+8)
  Since Spark requires at least JRE 1.7, it is safe to use the built-in java.nio.file.Files.
  Author: Jakob Odersky <jakob@odersky.com>
  Closes #11098 from jodersky/SPARK-13176.
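  The gist of the change, sketched with the JDK 7+ NIO API (paths are illustrative):
  ```scala
  import java.nio.file.{Files, Paths}

  // Instead of forking an external `ln` process, call the JDK directly.
  val target = Paths.get("/tmp/spark-example/original.txt")
  val link   = Paths.get("/tmp/spark-example/link.txt")
  Files.createSymbolicLink(link, target) // Files.createLink(link, target) for a hard link
  ```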
* [SPARK-10620][SPARK-13054] Minor addendum to #10835 (Andrew Or, 2016-02-08, 16 files, -48/+64)
  Additional changes to #10835, mainly related to style and visibility. This patch also adds back a few deprecated methods for backward compatibility.
  Author: Andrew Or <andrew@databricks.com>
  Closes #10958 from andrewor14/task-metrics-to-accums-followups.
* [SPARK-13210][SQL] catch OOM when allocate memory and expand array (Davies Liu, 2016-02-08, 7 files, -21/+35)
  There is a bug when we try to grow the buffer: an OOM is wrongly ignored (the assert is also skipped by the JVM), then we try to grow the array again; this triggers spilling, which frees the current page, so the record we just inserted becomes invalid. The root cause is that the JVM has less free memory than the MemoryManager thinks, so allocating a page can OOM without triggering spilling. We should catch the OOM and acquire memory again to trigger spilling. Also, we should not grow the array in `insertRecord` of `InMemorySorter` (it was there just for easy testing).
  Author: Davies Liu <davies@databricks.com>
  Closes #11095 from davies/fix_expand.
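  The recovery pattern described above, sketched with hypothetical names rather than the actual sorter code:
  ```scala
  // If the JVM has less free memory than the MemoryManager believes, a page
  // allocation can throw OutOfMemoryError. Catch it, spill to free pages,
  // then retry the allocation.
  def allocateWithSpill[T](size: Long)(allocate: Long => T)(spill: () => Unit): T = {
    try {
      allocate(size)
    } catch {
      case _: OutOfMemoryError =>
        spill()        // release pages back to the memory manager
        allocate(size) // retry; may legitimately fail if memory is truly exhausted
    }
  }
  ```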
* [SPARK-5865][API DOC] Add doc warnings for methods that return local data structures (Tommy YU, 2016-02-06, 4 files, -0/+45)
  rxin srowen: I worked out the note message for the rdd.take function; please help review. If it's fine, I can apply it to all the other functions later.
  Author: Tommy YU <tummyyu@163.com>
  Closes #10874 from Wenpei/spark-5865-add-warning-for-localdatastructure.
* [HOTFIX] fix float part of avgRate (Davies Liu, 2016-02-05, 1 file, -1/+1)
* [SPARK-13171][CORE] Replace future calls with Future (Jakob Odersky, 2016-02-05, 4 files, -17/+17)
  Trivial search-and-replace to eliminate deprecation warnings in Scala 2.11. Also works with 2.10.
  Author: Jakob Odersky <jakob@odersky.com>
  Closes #11085 from jodersky/SPARK-13171.
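  The replacement is one-to-one; a minimal sketch:
  ```scala
  import scala.concurrent.Future
  import scala.concurrent.ExecutionContext.Implicits.global

  // Deprecated in Scala 2.11:  val f = future { 40 + 2 }
  // Replacement (also fine on 2.10):
  val f = Future { 40 + 2 }
  ```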
* [SPARK-13002][MESOS] Send initial request of executors for dyn allocation (Luc Bourlier, 2016-02-05, 1 file, -3/+10)
  Fix for [SPARK-13002](https://issues.apache.org/jira/browse/SPARK-13002) about the initial number of executors when running with dynamic allocation on Mesos. Instead of fixing it just for the Mesos case, the change is made in `ExecutorAllocationManager`, which already drives the number of executors running on Mesos, just not the initial value. The `None` and `Some(0)` are internal details of the computation of resources to reserve in the Mesos backend scheduler. `executorLimitOption` has to be initialized correctly, otherwise the Mesos backend scheduler will either create too many executors at launch, or not create any executors and be unable to recover from this state. Removed the 'special case' description in the doc; it was not totally accurate and is not needed anymore. This doesn't fix the same problem visible with Spark standalone; there is no straightforward way to send the initial value in standalone mode. Somebody who knows this part of the YARN support should review this change.
  Author: Luc Bourlier <luc.bourlier@typesafe.com>
  Closes #11047 from skyluc/issue/initial-dyn-alloc-2.
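  A sketch of the configuration this fix makes effective on Mesos (values are illustrative):
  ```scala
  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.dynamicAllocation.initialExecutors", "4") // now honored at launch
    .set("spark.dynamicAllocation.minExecutors", "2")
    .set("spark.dynamicAllocation.maxExecutors", "16")
  ```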
* [SPARK-13208][CORE] Replace use of Pairs with Tuple2s (Jakob Odersky, 2016-02-04, 2 files, -3/+3)
  Another trivial deprecation fix for Scala 2.11.
  Author: Jakob Odersky <jakob@odersky.com>
  Closes #11089 from jodersky/SPARK-13208.
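  The mechanical change, sketched:
  ```scala
  // Deprecated in Scala 2.11 (and removed in 2.12):  val p = Pair(1, "a")
  // Replacement:
  val p: (Int, String) = Tuple2(1, "a") // or simply (1, "a")
  ```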
* [SPARK-13052] waitingApps metric doesn't show the number of apps currently in the WAITING state (Raafat Akkad, 2016-02-04, 2 files, -2/+2)
  Author: Raafat Akkad <raafat.akkad@gmail.com>
  Closes #10959 from RaafatAkkad/master.
* [HOTFIX] Fix style violation caused by c756bda (Andrew Or, 2016-02-04, 1 file, -2/+3)
* [SPARK-12330][MESOS][HOTFIX] Rename timeout config (Andrew Or, 2016-02-04, 1 file, -2/+2)
  The config already describes time and accepts a general format that is not restricted to ms. This commit renames the internal config to use a format that's consistent in Spark.
* [SPARK-13053][TEST] Unignore tests in InternalAccumulatorSuite (Andrew Or, 2016-02-04, 2 files, -78/+102)
  These were ignored because they are incorrectly written; they don't actually trigger stage retries, which is what the tests are testing. These tests are now rewritten to induce stage retries through fetch failures.
  Note: there were 2 tests before and now there's only 1. What happened? It turns out that the case where we only resubmit a subset of the original missing partitions is very difficult to simulate in tests without potentially introducing flakiness. This is because the `DAGScheduler` removes all map outputs associated with a given executor when this happens, we would need multiple executors to trigger this case, and sometimes the scheduler still removes map outputs from all executors.
  Author: Andrew Or <andrew@databricks.com>
  Closes #10969 from andrewor14/unignore-accum-test.
* [SPARK-13162] Standalone mode does not respect initial executors (Andrew Or, 2016-02-04, 5 files, -6/+34)
  Currently the Master would always set an application's initial executor limit to infinity. If the user specified `spark.dynamicAllocation.initialExecutors`, the config would not take effect. This is similar to #11047 but for standalone mode.
  Author: Andrew Or <andrew@databricks.com>
  Closes #11054 from andrewor14/standalone-da-initial.
* [SPARK-13164][CORE] Replace deprecated synchronized buffer in core (Holden Karau, 2016-02-04, 4 files, -39/+40)
  Building with Scala 2.11 results in the warning: "trait SynchronizedBuffer in package mutable is deprecated: Synchronization via traits is deprecated as it is inherently unreliable. Consider java.util.concurrent.ConcurrentLinkedQueue as an alternative." Investigation shows we are already using ConcurrentLinkedQueue in other locations, so switch our uses of SynchronizedBuffer to ConcurrentLinkedQueue.
  Author: Holden Karau <holden@us.ibm.com>
  Closes #11059 from holdenk/SPARK-13164-replace-deprecated-synchronized-buffer-in-core.
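  The replacement pattern, sketched with an illustrative event buffer:
  ```scala
  import java.util.concurrent.ConcurrentLinkedQueue
  import scala.collection.JavaConverters._

  // Instead of the deprecated `new ArrayBuffer[String] with SynchronizedBuffer[String]`:
  val events = new ConcurrentLinkedQueue[String]()
  events.add("taskStart")
  events.add("taskEnd")

  // Iteration is weakly consistent, i.e. safe under concurrent modification.
  events.asScala.foreach(println)
  ```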
* [SPARK-12330][MESOS] Fix mesos coarse mode cleanup (Charles Allen, 2016-02-04, 2 files, -2/+45)
  In the current implementation the mesos coarse scheduler does not wait for the mesos tasks to complete before ending the driver. This causes a race where the task has to finish cleaning up before the mesos driver terminates it with a SIGINT (and SIGKILL after 3 seconds if the SIGINT doesn't work). This PR causes the mesos coarse scheduler to wait for the mesos tasks to finish (with a timeout defined by `spark.mesos.coarse.shutdown.ms`).
  This PR also fixes a regression caused by [SPARK-10987] whereby submitting a shutdown causes a race between the local shutdown procedure and the notification of the scheduler driver disconnection. If the scheduler driver disconnection wins the race, the coarse executor incorrectly exits with status 1 (instead of the proper status 0).
  With this patch the mesos coarse scheduler terminates properly, the executors clean up, and the tasks are reported as `FINISHED` in the Mesos console (as opposed to `KILLED` in < 1.6 or `FAILED` in 1.6 and later).
  Author: Charles Allen <charles@allen-net.com>
  Closes #10319 from drcrallen/SPARK-12330.
* [SPARK-13113] [CORE] Remove unnecessary bit operation when decoding page number (Liang-Chi Hsieh, 2016-02-03, 1 file, -1/+1)
  JIRA: https://issues.apache.org/jira/browse/SPARK-13113
  As we shift bits right, it looks like the bitwise AND operation is unnecessary.
  Author: Liang-Chi Hsieh <viirya@gmail.com>
  Closes #11002 from viirya/improve-decodepagenumber.
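  A sketch of the simplification. The 13-bit-page / 51-bit-offset layout below is an assumption about the encoding for illustration, not quoted from the patch:
  ```scala
  val PAGE_NUMBER_BITS = 13
  val OFFSET_BITS = 64 - PAGE_NUMBER_BITS // 51
  val MASK_UPPER_13_BITS = ((1L << PAGE_NUMBER_BITS) - 1) << OFFSET_BITS

  // Before: mask the upper 13 bits, then shift them down.
  def decodePageNumberOld(address: Long): Int =
    ((address & MASK_UPPER_13_BITS) >>> OFFSET_BITS).toInt

  // After: >>> zero-fills from the left, so the mask is redundant.
  def decodePageNumberNew(address: Long): Int =
    (address >>> OFFSET_BITS).toInt
  ```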
* [SPARK-13152][CORE] Fix task metrics deprecation warning (Holden Karau, 2016-02-03, 5 files, -8/+13)
  Make an internal non-deprecated version of incBytesRead and incRecordsRead so we don't have unnecessary deprecation warnings in our build.
  Right now incBytesRead and incRecordsRead are marked as deprecated and for internal use only. We should make private[spark] versions which are not deprecated and switch to those internally, so as to not clutter up the warning messages when building.
  cc andrewor14 who did the initial deprecation
  Author: Holden Karau <holden@us.ibm.com>
  Closes #11056 from holdenk/SPARK-13152-fix-task-metrics-deprecation-warnings.
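  The pattern, sketched with a hypothetical class (a real version must live under the `org.apache.spark` package for `private[spark]` to resolve):
  ```scala
  package org.apache.spark

  // Keep the deprecated public method for compatibility, but give internal
  // call sites a non-deprecated private[spark] twin so the build stays clean.
  class MetricsSketch {
    private var _bytesRead: Long = 0L
    def bytesRead: Long = _bytesRead

    @deprecated("use internalIncBytesRead", "2.0.0")
    def incBytesRead(v: Long): Unit = internalIncBytesRead(v)

    private[spark] def internalIncBytesRead(v: Long): Unit = { _bytesRead += v }
  }
  ```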
* [SPARK-13131] [SQL] Use best and average time in benchmark (Davies Liu, 2016-02-03, 1 file, -14/+24)
  Best time is more stable than average time; also added a column for nanoseconds per row (which could be used to estimate the contribution of each component in a query). Having best time and average time together gives more information (we can see a kind of variance). Rate, time per row, and relative are all calculated using best time. The result looks like this:
  ```
  Intel(R) Core(TM) i7-4558U CPU 2.80GHz
  rang/filter/sum:                   Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
  -------------------------------------------------------------------------------------------
  rang/filter/sum codegen=false          14332 / 16646         36.0          27.8       1.0X
  rang/filter/sum codegen=true             845 /   940        620.0           1.6      17.0X
  ```
  Author: Davies Liu <davies@databricks.com>
  Closes #11018 from davies/gen_bench.
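  A sketch of the measurement loop implied by the table: collect per-iteration times, report best and average, and derive the rate and per-row time from the best time, since it is the more stable statistic:
  ```scala
  def benchmark(name: String, rows: Long, iters: Int)(body: => Unit): Unit = {
    val timesNs = (1 to iters).map { _ =>
      val start = System.nanoTime()
      body
      System.nanoTime() - start
    }
    val bestMs   = timesNs.min / 1e6
    val avgMs    = timesNs.sum / iters.toDouble / 1e6
    val perRowNs = timesNs.min.toDouble / rows
    val rateMps  = rows / (timesNs.min / 1e9) / 1e6
    println(f"$name%-30s $bestMs%8.0f / $avgMs%-8.0f $rateMps%8.1f $perRowNs%10.1f")
  }
  ```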
* [SPARK-3611][WEB UI] Show number of cores for each executor in application web UI (Alex Bozarth, 2016-02-03, 4 files, -2/+12)
  Added a Cores column in the Executors UI.
  Author: Alex Bozarth <ajbozart@us.ibm.com>
  Closes #11039 from ajbozarth/spark3611.
* [SPARK-7997][CORE] Add rpcEnv.awaitTermination() back to SparkEnv (Shixiong Zhu, 2016-02-02, 1 file, -0/+1)
  `rpcEnv.awaitTermination()` was not added in #10854 because some Streaming Python tests hung forever. This patch fixes the hang and adds rpcEnv.awaitTermination() back to SparkEnv. Previously, the Streaming Kafka Python tests shut down the ZooKeeper server before stopping StreamingContext; when stopping StreamingContext, KafkaReceiver could then hang due to https://issues.apache.org/jira/browse/KAFKA-601, so some threads of RpcEnv's Dispatcher could not exit and rpcEnv.awaitTermination hung. The patch just changes the shutdown order to fix it.
  Author: Shixiong Zhu <shixiong@databricks.com>
  Closes #11031 from zsxwing/awaitTermination.
* [SPARK-13122] Fix race condition in MemoryStore.unrollSafely() (Adam Budde, 2016-02-02, 1 file, -5/+9)
  https://issues.apache.org/jira/browse/SPARK-13122
  A race condition can occur in MemoryStore's unrollSafely() method if two threads that return the same value for currentTaskAttemptId() execute this method concurrently. This change makes the operation of reading the initial amount of unroll memory used, performing the unroll, and updating the associated memory maps atomic in order to avoid this race condition.
  The initial proposed fix wraps all of unrollSafely() in a memoryManager.synchronized { } block. A cleaner approach might be to introduce a mechanism that synchronizes based on task attempt ID. An alternative option might be to track unroll/pending unroll memory based on block ID rather than task attempt ID.
  Author: Adam Budde <budde@amazon.com>
  Closes #11012 from budde/master.
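  The shape of the committed fix, sketched with hypothetical bookkeeping: hold the memory manager lock across the read-unroll-update sequence so two tasks reporting the same attempt ID cannot interleave:
  ```scala
  import scala.collection.mutable

  // Hypothetical fields, not the actual MemoryStore internals.
  def reserveUnrollMemory(
      memoryManager: AnyRef,
      unrollMemoryMap: mutable.Map[Long, Long],
      taskAttemptId: Long,
      needed: Long): Unit = {
    memoryManager.synchronized {
      val current = unrollMemoryMap.getOrElse(taskAttemptId, 0L) // read
      // ... unroll accounting happens atomically with the update ...
      unrollMemoryMap(taskAttemptId) = current + needed          // update
    }
  }
  ```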
* [SPARK-12790][CORE] Remove HistoryServer old multiple files format (felixcheung, 2016-02-01, 19 files, -229/+17)
  Removed the isLegacyLogDirectory code path and updated tests. cc andrewor14
  Author: felixcheung <felixcheung_m@hotmail.com>
  Closes #10860 from felixcheung/historyserverformat.
* [SPARK-12637][CORE] Print stage info of finished stages properly (Sean Owen, 2016-02-01, 1 file, -1/+12)
  Improve printing of StageInfo in onStageCompleted. See also https://github.com/apache/spark/pull/10585.
  Author: Sean Owen <sowen@cloudera.com>
  Closes #10922 from srowen/SPARK-12637.
* [SPARK-12979][MESOS] Don’t resolve paths on the local file system in Mesos scheduler (Iulian Dragos, 2016-02-01, 3 files, -3/+3)
  The driver filesystem is likely different from where the executors will run, so resolving paths (and symlinks, etc.) will lead to invalid paths on executors.
  Author: Iulian Dragos <jaguarul@gmail.com>
  Closes #10923 from dragos/issue/canonical-paths.
* [SPARK-12265][MESOS] Spark calls System.exit inside driver instead of throwing exception (Nilanjan Raychaudhuri, 2016-02-01, 3 files, -4/+19)
  This takes over #10729 and makes sure that `spark-shell` fails with a proper error message. There is a slight behavioral change: before this change `spark-shell` would exit, while now the REPL is still there, but `sc` and `sqlContext` are not defined and the error is visible to the user.
  Author: Nilanjan Raychaudhuri <nraychaudhuri@gmail.com>
  Author: Iulian Dragos <jaguarul@gmail.com>
  Closes #10921 from dragos/pr/10729.
* [SPARK-12463][SPARK-12464][SPARK-12465][SPARK-10647][MESOS] Fix zookeeper dir with mesos conf and add docs. (Timothy Chen, 2016-02-01, 2 files, -5/+5)
  Fix the zookeeper dir configuration used in cluster mode, and also add documentation around these settings.
  Author: Timothy Chen <tnachen@gmail.com>
  Closes #10057 from tnachen/fix_mesos_dir.
* [SPARK-6847][CORE][STREAMING] Fix stack overflow issue when updateStateByKey is followed by a checkpointed dstream (Shixiong Zhu, 2016-02-01, 2 files, -0/+40)
  Add a local property to indicate whether to checkpoint all RDDs that are marked with the checkpoint flag, and enable it in Streaming.
  Author: Shixiong Zhu <shixiong@databricks.com>
  Closes #10934 from zsxwing/recursive-checkpoint.
* [SPARK-13071] Coalescing HadoopRDD overwrites existing input metrics (Andrew Or, 2016-01-29, 2 files, -2/+12)
  This issue is causing tests to fail consistently in master with Hadoop 2.6 / 2.7. This is because for Hadoop 2.5+ we overwrite existing values of `InputMetrics#bytesRead` in each call to `HadoopRDD#compute`. In the case of coalesce, e.g.
  ```
  sc.textFile(..., 4).coalesce(2).count()
  ```
  we will call `compute` multiple times in the same task, overwriting `bytesRead` values from previous calls to `compute`. For a regression test, see `InputOutputMetricsSuite.input metrics for old hadoop with coalesce`. I did not add a new regression test because it's impossible without significant refactoring; there's a lot of existing duplicate code in this corner of Spark. This was caused by #10835.
  Author: Andrew Or <andrew@databricks.com>
  Closes #10973 from andrewor14/fix-input-metrics-coalesce.
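  A sketch of the fix's mechanics with hypothetical names: each `compute()` call should add its delta on top of the bytes already read by earlier calls in the same task, rather than overwriting them:
  ```scala
  // Hadoop supplies a callback reporting bytes read by *this* record reader
  // only; capture the running total first so later updates add to it.
  class InputMetricsSketch {
    @volatile private var bytesRead: Long = 0L

    // Called at the start of each compute() in the task.
    def makeUpdater(bytesReadCallback: () => Long): () => Unit = {
      val existing = bytesRead // total from previous compute() calls
      () => bytesRead = existing + bytesReadCallback()
    }

    def totalBytesRead: Long = bytesRead
  }
  ```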
* [SPARK-13088] Fix DAG viz in latest version of chrome (Andrew Or, 2016-01-29, 1 file, -6/+7)
  Apparently chrome removed `SVGElement.prototype.getTransformToElement`, which is used by our JS library dagre-d3 when creating edges. The real diff can be found here: https://github.com/andrewor14/dagre-d3/commit/7d6c0002e4c74b82a02c5917876576f71e215590, which is taken from the fix in the main repo: https://github.com/cpettitt/dagre-d3/commit/1ef067f1c6ad2e0980f6f0ca471bce998784b7b2
  Upstream issue: https://github.com/cpettitt/dagre-d3/issues/202
  Author: Andrew Or <andrew@databricks.com>
  Closes #10986 from andrewor14/fix-dag-viz.
* [SPARK-13096][TEST] Fix flaky verifyPeakExecutionMemorySet (Andrew Or, 2016-01-29, 1 file, -0/+2)
  Previously we would assert things before all events are guaranteed to have been processed. To fix this, just block until all events are actually processed, i.e. until the listener queue is empty.
  https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7/79/testReport/junit/org.apache.spark.util.collection/ExternalAppendOnlyMapSuite/spilling/
  Author: Andrew Or <andrew@databricks.com>
  Closes #10990 from andrewor14/accum-suite-less-flaky.
* [SPARK-13055] SQLHistoryListener throws ClassCastException (Andrew Or, 2016-01-29, 9 files, -35/+37)
  This is an existing issue uncovered recently by #10835. The exception occurred because the `SQLHistoryListener` gets all sorts of accumulators, not just the ones that represent SQL metrics. For example, the listener gets the `internal.metrics.shuffleRead.remoteBlocksFetched`, which is an Int, then it proceeds to cast the Int to a Long, which fails. The fix is to mark accumulators representing SQL metrics using some internal metadata. Then we can identify which ones are SQL metrics and only process those in the `SQLHistoryListener`.
  Author: Andrew Or <andrew@databricks.com>
  Closes #10971 from andrewor14/fix-sql-history.
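  The filtering idea, sketched with hypothetical field names: tag SQL-metric accumulators with internal metadata so the listener processes only those, instead of blindly casting every accumulator's value:
  ```scala
  // Hypothetical shape of accumulator updates seen by the listener.
  case class AccumUpdate(name: String, value: Any, isSqlMetric: Boolean)

  // Only SQL metrics are known to carry Long values; everything else
  // (e.g. internal.metrics.shuffleRead.remoteBlocksFetched, an Int) is skipped.
  def sqlMetricValues(updates: Seq[AccumUpdate]): Seq[(String, Long)] =
    updates.collect { case AccumUpdate(name, v: Long, true) => (name, v) }
  ```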