Commit message | Author | Age | Files | Lines

* Preparing Spark release v1.2.0-rc1 (Patrick Wendell, 2014-11-28; 29 files, -29/+29)

* [SPARK-4584] [yarn] Remove security manager from Yarn AM. (Marcelo Vanzin, 2014-11-28; 1 file, -46/+14)

  The security manager adds a lot of overhead to the runtime of the app, and causes a severe performance regression. Even stubbing out all unneeded methods (all except checkExit()) does not help. So, instead, penalize users who do an explicit System.exit() by leaving them in "undefined behavior" territory: if they do that, the Yarn backend won't be able to report the final app status to the RM. The result is that the final status of the application might not match the user's expectations.

  One side effect of the change is that users who do an explicit System.exit() will lose the AM retry functionality. Since there is no way to know whether the exit was due to success or failure, the AM currently errs on the side of treating it as a successful exit.

  Author: Marcelo Vanzin <vanzin@cloudera.com>

  Closes #3484 from vanzin/SPARK-4584 and squashes the following commits:

  21f2502 [Marcelo Vanzin] Do not retry apps that use System.exit().
  4198b3b [Marcelo Vanzin] [SPARK-4584] [yarn] Remove security manager from Yarn AM.

  (cherry picked from commit 915f8eeb3a493a0bb4b8d05d795ddd21f373d2ff)
  Signed-off-by: Patrick Wendell <pwendell@gmail.com>
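
  For context, a minimal Scala sketch of the exit-trapping SecurityManager pattern being removed; the class and names here are illustrative, not the actual Spark code:

  ```scala
  import java.security.Permission

  // Illustrative only: every permission check in the JVM is routed through
  // checkPermission, which is the overhead the commit describes.
  class ExitTrappingSecurityManager(onExit: Int => Unit) extends SecurityManager {
    override def checkExit(status: Int): Unit = onExit(status)
    override def checkPermission(perm: Permission): Unit = ()  // allow everything else
  }

  object InstallExample {
    def main(args: Array[String]): Unit = {
      System.setSecurityManager(
        new ExitTrappingSecurityManager(code => Console.err.println(s"exit($code)")))
      System.exit(0)  // checkExit runs before the JVM terminates
    }
  }
  ```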

* [SPARK-4193][BUILD] Disable doclint in Java 8 to prevent build errors. (Takuya UESHIN, 2014-11-28; 7 files, -6/+35)

  Author: Takuya UESHIN <ueshin@happy-camper.st>

  Closes #3058 from ueshin/issues/SPARK-4193 and squashes the following commits:

  e096bb1 [Takuya UESHIN] Add a plugin declaration to pluginManagement.
  6762ec2 [Takuya UESHIN] Fix usage of -Xdoclint javadoc option.
  fdb280a [Takuya UESHIN] Fix Javadoc errors.
  4745f3c [Takuya UESHIN] Merge branch 'master' into issues/SPARK-4193
  923e2f0 [Takuya UESHIN] Use doclint option `-missing` instead of `none`.
  30d6718 [Takuya UESHIN] Fix Javadoc errors.
  b548017 [Takuya UESHIN] Disable doclint in Java 8 to prevent from build error.

  (cherry picked from commit e464f0ac2d7210a4bf715478885fe7a8d397fe89)
  Signed-off-by: Patrick Wendell <pwendell@gmail.com>

* [SPARK-4645][SQL] Disables asynchronous execution in Hive 0.13.1 HiveThriftServer2 (Cheng Lian, 2014-11-28; 1 file, -100/+39)

  This PR disables HiveThriftServer2 asynchronous execution by setting the `runInBackground` argument in `ExecuteStatementOperation` to `false`, and reverting `SparkExecuteStatementOperation.run` in the Hive 13 shim to the Hive 12 version. This change makes Simba ODBC driver v1.0.0.1000 work.

  Author: Cheng Lian <lian@databricks.com>

  Closes #3506 from liancheng/disable-async-exec and squashes the following commits:

  593804d [Cheng Lian] Disables asynchronous execution in Hive 0.13.1 HiveThriftServer2

* [SPARK-4308][SQL] Sets SQL operation state to ERROR when exception is thrown (Cheng Lian, 2014-11-28; 3 files, -29/+21)

  In `HiveThriftServer2`, when an exception is thrown during a SQL execution, the SQL operation state should be set to `ERROR`, but it currently remains `RUNNING`. This affects the result of the `GetOperationStatus` Thrift API.

  Author: Cheng Lian <lian@databricks.com>

  Closes #3175 from liancheng/fix-op-state and squashes the following commits:

  6d4c1fe [Cheng Lian] Sets SQL operation state to ERROR when exception is thrown
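
  The shape of the fix, as a hedged standalone sketch with hypothetical names (the real change lives in the HiveThriftServer2 operation classes):

  ```scala
  // Hypothetical stand-in for the operation lifecycle: the point is that the
  // failure path must move the state off Running.
  object OperationStateSketch {
    sealed trait State
    case object Running extends State
    case object Finished extends State
    case object Error extends State

    var state: State = Running

    def run(execute: () => Unit): Unit =
      try {
        execute()
        state = Finished
      } catch {
        case e: Exception =>
          state = Error  // previously missing: state stayed Running
          throw e
      }
  }
  ```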

* [SPARK-4619][Storage] Delete redundant time suffix (maji2014, 2014-11-28; 1 file, -1/+1)

  The time suffix already exists in Utils.getUsedTimeMs(startTime), so there is no need to append it again; this commit deletes the duplicate.

  Author: maji2014 <maji3@asiainfo.com>

  Closes #3475 from maji2014/SPARK-4619 and squashes the following commits:

  df0da4e [maji2014] delete redundant time suffix

  (cherry picked from commit ceb628197099e6c598cde1564ed9c1c3681ea955)
  Signed-off-by: Reynold Xin <rxin@databricks.com>

* [SPARK-4613][Core] Java API for JdbcRDD (Cheng Lian, 2014-11-27; 3 files, -5/+204)

  This PR introduces a set of Java APIs for using `JdbcRDD`:

  1. Trait (interface) `JdbcRDD.ConnectionFactory`: equivalent to the `getConnection: () => Connection` parameter in the `JdbcRDD` constructor.
  2. Two overloaded versions of `JdbcRDD.create`: used to create a `JavaRDD` that wraps a `JdbcRDD`.

  Author: Cheng Lian <lian@databricks.com>

  Closes #3478 from liancheng/japi-jdbc-rdd and squashes the following commits:

  9a54625 [Cheng Lian] Only shutdowns a single DB rather than the whole Derby driver
  d4cedc5 [Cheng Lian] Moves Java JdbcRDD test case to a separate test suite
  ffcdf2e [Cheng Lian] Java API for JdbcRDD

  (cherry picked from commit 120a350240f58196eafcb038ca3a353636d89239)
  Signed-off-by: Matei Zaharia <matei@databricks.com>
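
  A hedged usage sketch of the new factory, written in Scala for brevity; the signature follows the commit description, so treat the exact parameter order as an assumption:

  ```scala
  import java.sql.{Connection, DriverManager, ResultSet}
  import org.apache.spark.api.java.JavaSparkContext
  import org.apache.spark.api.java.function.{Function => JFunction}
  import org.apache.spark.rdd.JdbcRDD

  object JavaJdbcRddExample {
    def main(args: Array[String]): Unit = {
      val jsc = new JavaSparkContext("local", "jdbc-example")

      // Serializable factory replacing the Scala-only () => Connection parameter.
      val connectionFactory = new JdbcRDD.ConnectionFactory {
        override def getConnection: Connection =
          DriverManager.getConnection("jdbc:derby:memory:demo;create=true")
      }

      val names = JdbcRDD.create(
        jsc,
        connectionFactory,
        "SELECT ID, NAME FROM PEOPLE WHERE ID >= ? AND ID <= ?",
        1L, 100L, // bound into the two '?' placeholders
        3,        // number of partitions
        new JFunction[ResultSet, String] {
          override def call(rs: ResultSet): String = rs.getString("NAME")
        })

      println(names.count())
    }
  }
  ```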

* [SPARK-4626] Kill a task only if the executorId is (still) registered with the scheduler (roxchkplusony, 2014-11-27; 1 file, -1/+7)

  Author: roxchkplusony <roxchkplusony@gmail.com>

  Closes #3483 from roxchkplusony/bugfix/4626 and squashes the following commits:

  aba9184 [roxchkplusony] replace warning message per review
  5e7fdea [roxchkplusony] [SPARK-4626] Kill a task only if the executorId is (still) registered with the scheduler

  (cherry picked from commit 84376d31392858f7df215ddb3f05419181152e68)
  Signed-off-by: Reynold Xin <rxin@databricks.com>

* [Release] Automate generation of contributors list (Andrew Or, 2014-11-26; 2 files, -0/+330)

  This commit provides a script that computes the contributors list by linking the GitHub commits with JIRA issues. Automatically translating GitHub usernames remains a TODO at this point.

* [SPARK-732][SPARK-3628][CORE][RESUBMIT] Eliminate duplicate update on accumulator (CodingCat, 2014-11-26; 4 files, -30/+67)

  https://issues.apache.org/jira/browse/SPARK-3628

  In the current implementation, the accumulator is updated for every successfully finished task, even if the task is from a resubmitted stage, which makes the accumulator counter-intuitive.

  This patch changes the way the DAGScheduler updates the accumulator. The DAGScheduler maintains a hash table mapping each stage id to the received <accumulator_id, value> pairs. Only when the stage becomes independent (no job needs it any more) do we accumulate the values of those pairs. When a task finishes, we check whether the hash table already contains that stage id, and save the (accumulator_id, value) pair only when the task is the first finished task of a new stage or the stage is running for its first attempt.

  Author: CodingCat <zhunansjtu@gmail.com>

  Closes #2524 from CodingCat/SPARK-732-1 and squashes the following commits:

  701a1e8 [CodingCat] roll back change on Accumulator.scala
  1433e6f [CodingCat] make MIMA happy
  b233737 [CodingCat] address Matei's comments
  02261b8 [CodingCat] rollback some changes
  6b0aff9 [CodingCat] update document
  2b2e8cf [CodingCat] updateAccumulator
  83b75f8 [CodingCat] style fix
  84570d2 [CodingCat] re-enable the bad accumulator guard
  1e9e14d [CodingCat] add NPE guard
  21b6840 [CodingCat] simplify the patch
  88d1f03 [CodingCat] fix rebase error
  f74266b [CodingCat] add test case for resubmitted result stage
  5cf586f [CodingCat] de-duplicate on task level
  138f9b3 [CodingCat] make MIMA happy
  67593d2 [CodingCat] make if allowing duplicate update as an option of accumulator

  (cherry picked from commit 5af53ada65f62e6b5987eada288fb48e9211ef9d)
  Signed-off-by: Matei Zaharia <matei@databricks.com>
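
  A standalone sketch of the dedup idea, first-completion-wins accumulation; this is illustrative, not the DAGScheduler code:

  ```scala
  import scala.collection.mutable

  // Accumulate a task's update only the first time a given (stage, partition)
  // completes; duplicate completions from resubmitted stages are ignored.
  class AccumUpdateDedup {
    private val seen = mutable.HashSet.empty[(Int, Int)]
    private var total = 0L

    def onTaskFinished(stageId: Int, partitionId: Int, update: Long): Unit =
      if (seen.add((stageId, partitionId))) total += update

    def value: Long = total
  }
  ```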

* [BRANCH-1.2][SPARK-4583][MLLIB] LogLoss for GradientBoostedTrees fix + doc updates (Joseph K. Bradley, 2014-11-26; 6 files, -70/+147)

  We reverted #3439 in branch-1.2 due to a missing `import o.a.s.SparkContext._`, which is no longer needed in master (#3262). This PR adds #3439 back to branch-1.2 with correct imports. GitHub is out-of-sync now; the real changes are the last two commits.

  Author: Joseph K. Bradley <joseph@databricks.com>
  Author: Xiangrui Meng <meng@databricks.com>

  Closes #3474 from mengxr/SPARK-4583-1.2 and squashes the following commits:

  aca2abb [Xiangrui Meng] add import o.a.s.SparkContext._ for v1.2
  6b5564a [Joseph K. Bradley] [SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates

* [BRANCH-1.2][SPARK-4614][MLLIB] Slight API changes in Matrix and Matrices (Xiangrui Meng, 2014-11-26; 2 files, -11/+59)

  This is #3468 for branch-1.2, with the same content except for the MiMa excludes.

  Author: Xiangrui Meng <meng@databricks.com>

  Closes #3482 from mengxr/SPARK-4614-1.2 and squashes the following commits:

  ea4f08d [Xiangrui Meng] hide transposeMultiply; add rng to rand and randn; add unit tests

* [BRANCH-1.2][SPARK-4604][MLLIB] make MatrixFactorizationModel public (Xiangrui Meng, 2014-11-26; 2 files, -2/+81)

  We reverted #3459 in branch-1.2 due to a missing `import o.a.s.SparkContext._`, which is no longer needed in master (#3262). This PR adds #3459 back to branch-1.2 with correct imports. GitHub is out-of-sync now; the real changes are the last two commits.

  Author: Xiangrui Meng <meng@databricks.com>

  Closes #3473 from mengxr/SPARK-4604-1.2 and squashes the following commits:

  a7638a5 [Xiangrui Meng] add import o.a.s.SparkContext._ for v1.2
  b749000 [Xiangrui Meng] [SPARK-4604][MLLIB] make MatrixFactorizationModel public

* Removing confusing TripletFields (Joseph E. Gonzalez, 2014-11-26; 4 files, -33/+8)

  After additional discussion with rxin, I think having all the possible `TripletField` options is confusing. This pull request reduces the triplet fields to:

  ```java
  /**
   * None of the triplet fields are exposed.
   */
  public static final TripletFields None = new TripletFields(false, false, false);

  /**
   * Expose only the edge field and not the source or destination field.
   */
  public static final TripletFields EdgeOnly = new TripletFields(false, false, true);

  /**
   * Expose the source and edge fields but not the destination field. (Same as Src)
   */
  public static final TripletFields Src = new TripletFields(true, false, true);

  /**
   * Expose the destination and edge fields but not the source field. (Same as Dst)
   */
  public static final TripletFields Dst = new TripletFields(false, true, true);

  /**
   * Expose all the fields (source, edge, and destination).
   */
  public static final TripletFields All = new TripletFields(true, true, true);
  ```

  Author: Joseph E. Gonzalez <joseph.e.gonzalez@gmail.com>

  Closes #3472 from jegonzal/SimplifyTripletFields and squashes the following commits:

  91796b5 [Joseph E. Gonzalez] removing confusing triplet fields

  (cherry picked from commit 288ce583b05004a8c71dcd836fab23caff5d4ba7)
  Signed-off-by: Reynold Xin <rxin@databricks.com>

* [SPARK-4612] Reduce task latency and increase scheduling throughput by making configuration initialization lazy (Tathagata Das, 2014-11-25; 1 file, -1/+1)

  https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L337 creates a configuration object for every task that is launched, even if there is no new dependent file/JAR to update. This is a heavyweight creation that should be avoided if there is no new file/JAR to update. This PR makes that creation lazy.

  A quick local test in the spark-perf scheduling throughput tests gives the following numbers in local standalone scheduler mode. 1 job with 10000 tasks: before 7.8395 seconds, after 2.6415 seconds = 3x increase in task scheduling throughput.

  pwendell JoshRosen

  Author: Tathagata Das <tathagata.das1565@gmail.com>

  Closes #3463 from tdas/lazy-config and squashes the following commits:

  c791c1e [Tathagata Das] Reduce task latency by making configuration initialization lazy

  (cherry picked from commit e7f4d2534bb3361ec4b7af0d42bc798a7a425226)
  Signed-off-by: Reynold Xin <rxin@databricks.com>
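
  The general pattern the commit applies, sketched with illustrative names (`buildExpensiveConf` stands in for the real per-task configuration construction):

  ```scala
  // The heavyweight configuration is wrapped in a lazy val so it is built at
  // most once, and only on the code path that actually fetches dependencies.
  class DependencyFetcher(newFilesOrJars: Seq[String]) {
    private lazy val conf = buildExpensiveConf()  // never built if never touched

    def updateDependencies(): Unit =
      if (newFilesOrJars.nonEmpty) {
        newFilesOrJars.foreach(f => fetch(f, conf))  // forces conf lazily here
      }

    private def buildExpensiveConf(): java.util.Properties = new java.util.Properties()
    private def fetch(file: String, c: java.util.Properties): Unit =
      println(s"fetching $file")
  }
  ```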
* Revert "[SPARK-4604][MLLIB] make MatrixFactorizationModel public"Xiangrui Meng2014-11-252-81/+3
| | | | This reverts commit 2756d0de91d996f80c0b0883cad1d2fab336ed84.
* Revert "[SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates"Patrick Wendell2014-11-266-146/+72
| | | | This reverts commit 6880b467f66a4906161cbc343e70d975056a4f5f.
* Revert "Preparing Spark release v1.2.0-rc1"Patrick Wendell2014-11-2629-29/+29
| | | | This reverts commit cc2c05e4ee81d2f34873a2ebb9a5272867cb65c2.
* Revert "Preparing development version 1.2.1-SNAPSHOT"Patrick Wendell2014-11-2629-30/+30
| | | | This reverts commit 380eba5f49eca1dbd4084e6c84e19866fffd4efa.

* [SPARK-4516] Avoid allocating Netty PooledByteBufAllocators unnecessarily (Aaron Davidson, 2014-11-26; 2 files, -10/+8)

  It turns out we were allocating an allocator pool for every TransportClient (which means that the number increases with the number of nodes in the cluster), when really we should just reuse one for all clients.

  This patch, as expected, greatly decreases off-heap memory allocation, and appears to make allocation proportional only to the number of cores.

  Author: Aaron Davidson <aaron@databricks.com>

  Closes #3465 from aarondav/fewer-pools and squashes the following commits:

  36c49da [Aaron Davidson] [SPARK-4516] Avoid allocating unnecessarily Netty PooledByteBufAllocators

  (cherry picked from commit 346bc17a2ec8fc9e6eaff90733aa1e8b6b46883e)
  Signed-off-by: Patrick Wendell <pwendell@gmail.com>
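
  The shape of the fix, sketched against Netty's public API; the shared object below is illustrative, and Spark's actual pool may be configured differently:

  ```scala
  import io.netty.buffer.PooledByteBufAllocator

  // One process-wide pooled allocator shared by all clients, instead of a new
  // pool per TransportClient; arena count then scales with cores, not clients.
  object SharedPooledAllocator {
    lazy val allocator: PooledByteBufAllocator = PooledByteBufAllocator.DEFAULT
  }

  object AllocExample {
    def main(args: Array[String]): Unit = {
      val buf = SharedPooledAllocator.allocator.directBuffer(1024)
      buf.release()  // pooled buffers must be released back to the pool
    }
  }
  ```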

* Preparing development version 1.2.1-SNAPSHOT (Patrick Wendell, 2014-11-26; 29 files, -30/+30)

* Preparing Spark release v1.2.0-rc1 (Patrick Wendell, 2014-11-26; 29 files, -29/+29)

* HOTFIX: Updating additional version data (Patrick Wendell, 2014-11-26; 2 files, -2/+5)
* Revert "Preparing Spark release v1.2.0-rc1"Patrick Wendell2014-11-2629-29/+29
| | | | This reverts commit 5247dd859b95a440baa562b9827bdeb26aa6530e.
* Revert "Preparing development version 1.2.1-SNAPSHOT"Patrick Wendell2014-11-2629-30/+30
| | | | This reverts commit 79df6b43ae762263a8120f423ddb4a0811dd4b6f.

* Preparing development version 1.2.1-SNAPSHOT (Patrick Wendell, 2014-11-26; 29 files, -30/+30)

* Preparing Spark release v1.2.0-rc1 (Patrick Wendell, 2014-11-26; 29 files, -29/+29)
* Revert "Preparing Spark release v1.2.0-rc1"Patrick Wendell2014-11-2629-29/+29
| | | | This reverts commit db7f4a898af22a02b36428507f8ef2b429d78dc1.
* Revert "Preparing development version 1.2.1-SNAPSHOT"Patrick Wendell2014-11-2629-30/+30
| | | | This reverts commit d7b1ecb25676d228deb6fe05efdb4e2ab9c3e30b.

* Preparing development version 1.2.1-SNAPSHOT (Ubuntu, 2014-11-26; 29 files, -30/+30)

* Preparing Spark release v1.2.0-rc1 (Ubuntu, 2014-11-26; 29 files, -29/+29)
* Revert "Preparing Spark release v1.2.0-snapshot1"Patrick Wendell2014-11-2630-30/+30
| | | | This reverts commit 38c1fbd9694430cefd962c90bc36b0d108c6124b.
* Revert "Preparing development version 1.2.1-SNAPSHOT"Patrick Wendell2014-11-2630-30/+30
| | | | This reverts commit d7ac6013483e83caff8ea54c228f37aeca159db8.

* [SPARK-4516] Cap default number of Netty threads at 8 (Aaron Davidson, 2014-11-25; 1 file, -7/+37)

  In practice, only 2-4 cores should be required to transfer roughly 10 Gb/s, and each core that we use will have an initial overhead of roughly 32 MB of off-heap memory, which comes at a premium. Thus, this value should still retain maximum throughput while reducing wasted off-heap memory allocation. It can be overridden by setting the number of serverThreads and clientThreads manually in Spark's configuration.

  Author: Aaron Davidson <aaron@databricks.com>

  Closes #3469 from aarondav/fewer-pools2 and squashes the following commits:

  087c59f [Aaron Davidson] [SPARK-4516] Cap default number of Netty threads at 8

  (cherry picked from commit f5f2d27385c243959f03a9d78a149d5f405b2f50)
  Signed-off-by: Patrick Wendell <pwendell@gmail.com>
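
  The sizing rule described above, as a standalone function; the cap of 8 comes from the commit, while the function name is illustrative:

  ```scala
  object NettyThreadDefaults {
    // Use the configured core count when given, otherwise all available
    // processors, but never more than 8 threads by default.
    def defaultNumNettyThreads(numUsableCores: Int): Int = {
      val available =
        if (numUsableCores > 0) numUsableCores
        else Runtime.getRuntime.availableProcessors()
      math.min(available, 8)
    }
  }
  ```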

* [SPARK-4604][MLLIB] make MatrixFactorizationModel public (Xiangrui Meng, 2014-11-25; 2 files, -3/+81)

  Users can now construct an MF model directly. I added a note about the performance.

  Author: Xiangrui Meng <meng@databricks.com>

  Closes #3459 from mengxr/SPARK-4604 and squashes the following commits:

  f64bcd3 [Xiangrui Meng] organize imports
  ed08214 [Xiangrui Meng] check preconditions and unit tests
  a624c12 [Xiangrui Meng] make MatrixFactorizationModel public

  (cherry picked from commit b5fb1410c5eed1156decb4f9fcc22436a658ce4d)
  Signed-off-by: Xiangrui Meng <meng@databricks.com>
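
  With the class public, a model can be built directly from precomputed factors. A hedged sketch, assuming the constructor takes the rank plus (id, factor-array) RDDs:

  ```scala
  import org.apache.spark.SparkContext
  import org.apache.spark.mllib.recommendation.MatrixFactorizationModel

  object DirectMfModel {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext("local", "mf-example")
      // Factors are RDDs of (id, factorArray) pairs with length == rank.
      val rank = 2
      val userFeatures = sc.parallelize(Seq(1 -> Array(0.1, 0.2), 2 -> Array(0.3, 0.4)))
      val productFeatures = sc.parallelize(Seq(10 -> Array(0.5, 0.6)))

      val model = new MatrixFactorizationModel(rank, userFeatures, productFeatures)
      println(model.predict(1, 10))  // dot product of user 1 and product 10 factors
    }
  }
  ```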

* [HOTFIX]: Adding back without-hive dist (Patrick Wendell, 2014-11-25; 1 file, -0/+1)

* [SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates (Joseph K. Bradley, 2014-11-25; 6 files, -72/+146)

  Currently, the LogLoss used by GradientBoostedTrees has two issues:

  * the gradient (and therefore loss) does not match that used by Friedman (1999)
  * the error computation uses 0/1 accuracy, not log loss

  This PR updates LogLoss. It also adds some doc for boosting and forests.

  I tested it on sample data and made sure the log loss is monotonically decreasing with each boosting iteration.

  CC: mengxr manishamde codedeft

  Author: Joseph K. Bradley <joseph@databricks.com>

  Closes #3439 from jkbradley/gbt-loss-fix and squashes the following commits:

  cfec17e [Joseph K. Bradley] removed forgotten temp comments
  a27eb6d [Joseph K. Bradley] corrections to last log loss commit
  ed5da2c [Joseph K. Bradley] updated LogLoss (boosting) for numerical stability
  5e52bff [Joseph K. Bradley] * Removed the 1/2 from SquaredError. This also required updating the test suite since it effectively doubles the gradient and loss. * Added doc for developers within RandomForest. * Small cleanup in test suite (generating data only once)
  e57897a [Joseph K. Bradley] Fixed LogLoss for GradientBoostedTrees, and updated doc for losses, forests, and boosting

  (cherry picked from commit c251fd7405db57d3ab2686c38712601fd8f13ccd)
  Signed-off-by: Xiangrui Meng <meng@databricks.com>
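
  For reference, a hedged sketch of a Friedman-style log loss for labels y in {-1, +1} and margin F(x); the exact constants in the MLlib implementation may differ:

  ```scala
  import scala.math.{exp, log1p}

  object LogLossSketch {
    // Loss: 2 * log(1 + exp(-2 * y * F(x))), written with log1p for stability.
    def logLoss(y: Double, margin: Double): Double =
      2.0 * log1p(exp(-2.0 * y * margin))

    // Gradient of the loss with respect to the margin F(x).
    def logLossGradient(y: Double, margin: Double): Double =
      -4.0 * y / (1.0 + exp(2.0 * y * margin))
  }
  ```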

* [Spark-4509] Revert EC2 tag-based cluster membership patch (Xiangrui Meng, 2014-11-25; 2 files, -69/+28)

  This PR reverts changes related to tag-based cluster membership. As discussed in SPARK-3332, we didn't figure out a safe strategy to use tags to determine cluster membership, because tagging is not atomic. The following changes are reverted:

  SPARK-2333: 94053a7b766788bb62e2dbbf352ccbcc75f71fc0
  SPARK-3213: 7faf755ae4f0cf510048e432340260a6e609066d
  SPARK-3608: 78d4220fa0bf2f9ee663e34bbf3544a5313b02f0

  I tested launch, login, and destroy. It is easy to check the diff by comparing it to Josh's patch for branch-1.1: https://github.com/apache/spark/pull/2225/files

  JoshRosen I sent the PR to master. It might be easier for us to keep master and branch-1.2 the same at this time. We can always re-apply the patch once we figure out a stable solution.

  Author: Xiangrui Meng <meng@databricks.com>

  Closes #3453 from mengxr/SPARK-4509 and squashes the following commits:

  f0b708b [Xiangrui Meng] revert 94053a7b766788bb62e2dbbf352ccbcc75f71fc0
  4298ea5 [Xiangrui Meng] revert 7faf755ae4f0cf510048e432340260a6e609066d
  35963a1 [Xiangrui Meng] Revert "SPARK-3608 Break if the instance tag naming succeeds"

  (cherry picked from commit 7eba0fbe456c451122d7a2353ff0beca00f15223)
  Signed-off-by: Andrew Or <andrew@databricks.com>

* Fix SPARK-4471: blockManagerIdFromJson function throws exception while BlockManagerId is null (hushan[胡珊], 2014-11-25; 2 files, -3/+16)

  Fix [SPARK-4471](https://issues.apache.org/jira/browse/SPARK-4471): the blockManagerIdFromJson function throws an exception when BlockManagerId is null in MetadataFetchFailedException.

  Author: hushan[胡珊] <hushan@xiaomi.com>

  Closes #3340 from suyanNone/fix-blockmanagerId-jnothing-2 and squashes the following commits:

  159f9a3 [hushan[胡珊]] Refine test code for blockmanager is null
  4380d73 [hushan[胡珊]] remove useless blank line
  3ccf651 [hushan[胡珊]] Fix SPARK-4471: blockManagerIdFromJson function throws exception while metadata fetch failed

  (cherry picked from commit 9bdf5da59036c0b052df756fc4a28d64677072e7)
  Signed-off-by: Andrew Or <andrew@databricks.com>

* [SPARK-4546] Improve HistoryServer first time user experience (Andrew Or, 2014-11-25; 4 files, -22/+40)

  The documentation points the user to run the following:

  ```
  sbin/start-history-server.sh
  ```

  The first thing this does is throw an exception complaining that a log directory is not specified, and the exception message itself does not say anything about what to set. Instead we should have a default and a landing page with a better message. The new default log directory is `file:/tmp/spark-events`.

  This is what it looks like as of this PR:

  ![after](https://issues.apache.org/jira/secure/attachment/12682985/after.png)

  Author: Andrew Or <andrew@databricks.com>

  Closes #3411 from andrewor14/minor-history-improvements and squashes the following commits:

  f33d6b3 [Andrew Or] Point user to set config if default log dir does not exist
  fc4c17a [Andrew Or] Improve HistoryServer UX

  (cherry picked from commit 9afcbe494a3535a9bf7958429b72e989972f82d9)
  Signed-off-by: Andrew Or <andrew@databricks.com>

* [SPARK-4592] Avoid duplicate worker registrations in standalone mode (Andrew Or, 2014-11-25; 2 files, -7/+47)

  **Summary.** On failover, the Master may receive duplicate registrations from the same worker, causing the worker to exit. This is caused by commit https://github.com/apache/spark/commit/4afe9a4852ebeb4cc77322a14225cd3dec165f3f, which adds logic for the worker to re-register with the master in case of failures. However, the following race condition may occur:

  1. Master A fails and Worker attempts to reconnect to all masters
  2. Master B takes over and notifies Worker
  3. Worker responds by registering with Master B
  4. Meanwhile, Worker's previous reconnection attempt reaches Master B, causing the same Worker to register with Master B twice

  **Fix.** Instead of attempting to register with all known masters, the worker should re-register with only the one that it has been communicating with. This is safe because the fact that a failover has occurred means the old master must have died. Then, when the worker is finally notified of a new master, it gives up on the old one in favor of the new one.

  **Caveat.** Even this fix is subject to more obscure race conditions. For instance, if Master B fails and Master A recovers immediately, then Master A may still observe duplicate worker registrations. However, this and other potential race conditions summarized in [SPARK-4592](https://issues.apache.org/jira/browse/SPARK-4592) are much, much less likely than the one described above, which is deterministically reproducible.

  Author: Andrew Or <andrew@databricks.com>

  Closes #3447 from andrewor14/standalone-failover and squashes the following commits:

  0d9716c [Andrew Or] Move re-registration logic to actor for thread-safety
  79286dc [Andrew Or] Preserve old behavior for initial retries
  83b321c [Andrew Or] Tweak wording
  1fce6a9 [Andrew Or] Active master actor could be null in the beginning
  b6f269e [Andrew Or] Avoid duplicate worker registrations

  (cherry picked from commit 1b2ab1cd1b7cab9076f3c511188a610eda935701)
  Signed-off-by: Andrew Or <andrew@databricks.com>

* [HOTFIX] Fixing broken build due to missing imports. (Tathagata Das, 2014-11-25; 1 file, -0/+1)

* [SPARK-4196][SPARK-4602][Streaming] Fix serialization issue in PairDStreamFunctions.saveAsNewAPIHadoopFiles (Tathagata Das, 2014-11-25; 2 files, -16/+70)

  Solves two JIRAs in one shot:

  - Makes the ForeachDStream created by saveAsNewAPIHadoopFiles serializable for checkpoints
  - Makes the default configuration object used by saveAsNewAPIHadoopFiles be Spark's hadoop configuration

  Author: Tathagata Das <tathagata.das1565@gmail.com>

  Closes #3457 from tdas/savefiles-fix and squashes the following commits:

  bb4729a [Tathagata Das] Same treatment for saveAsHadoopFiles
  b382ea9 [Tathagata Das] Fix serialization issue in PairDStreamFunctions.saveAsNewAPIHadoopFiles.

  (cherry picked from commit 8838ad7c135a585cde015dc38b5cb23314502dd9)
  Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
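
  The serialization half of the fix relies on a standard trick: Hadoop's Configuration is not java.io.Serializable, so it is carried inside a serializable wrapper rather than captured directly by the closure. A hand-rolled sketch (Spark has its own equivalent class):

  ```scala
  import java.io.{ObjectInputStream, ObjectOutputStream}
  import org.apache.hadoop.conf.Configuration

  // Configuration is a Hadoop Writable, so it can write and read itself
  // through Java serialization streams.
  class SerializableConfiguration(@transient var value: Configuration)
      extends Serializable {
    private def writeObject(out: ObjectOutputStream): Unit = {
      out.defaultWriteObject()
      value.write(out)
    }
    private def readObject(in: ObjectInputStream): Unit = {
      in.defaultReadObject()
      value = new Configuration(false)
      value.readFields(in)
    }
  }
  ```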

* [SPARK-4581][MLlib] Refactorize StandardScaler to improve the transformation performance (DB Tsai, 2014-11-25; 1 file, -20/+50)

  The following optimizations are done to improve the StandardScaler model transformation performance:

  1) Convert the Breeze dense vector to a primitive vector to reduce the overhead.
  2) Since the mean can potentially be a sparse vector, we explicitly convert it to a dense primitive vector.
  3) Keep a local reference to the `shift` and `factor` arrays so the JVM can locate the value with one operation call.
  4) In the pattern matching part, we use the MLlib SparseVector/DenseVector instead of Breeze's vectors to make the codebase cleaner.

  Benchmark with the mnist8m dataset:

  Before:
  DenseVector withMean and withStd: 50.97secs
  DenseVector withMean and withoutStd: 42.11secs
  DenseVector withoutMean and withStd: 8.75secs
  SparseVector withoutMean and withStd: 5.437secs

  With this PR:
  DenseVector withMean and withStd: 5.76secs
  DenseVector withMean and withoutStd: 5.28secs
  DenseVector withoutMean and withStd: 5.30secs
  SparseVector withoutMean and withStd: 1.27secs

  Note that without the local reference copies of the `factor` and `shift` arrays, the runtime is almost three times slower:

  DenseVector withMean and withStd: 18.15secs
  DenseVector withMean and withoutStd: 18.05secs
  DenseVector withoutMean and withStd: 18.54secs
  SparseVector withoutMean and withStd: 2.01secs

  The following code,

  ```scala
  while (i < size) {
    values(i) = (values(i) - shift(i)) * factor(i)
    i += 1
  }
  ```

  will generate the bytecode

  ```
  L13
   LINENUMBER 106 L13
   FRAME FULL [org/apache/spark/mllib/feature/StandardScalerModel org/apache/spark/mllib/linalg/Vector org/apache/spark/mllib/linalg/Vector org/apache/spark/mllib/linalg/DenseVector T [D I I] []
   ILOAD 7
   ILOAD 6
   IF_ICMPGE L14
  L15
   LINENUMBER 107 L15
   ALOAD 5
   ILOAD 7
   ALOAD 5
   ILOAD 7
   DALOAD
   ALOAD 0
   INVOKESPECIAL org/apache/spark/mllib/feature/StandardScalerModel.shift ()[D
   ILOAD 7
   DALOAD
   DSUB
   ALOAD 0
   INVOKESPECIAL org/apache/spark/mllib/feature/StandardScalerModel.factor ()[D
   ILOAD 7
   DALOAD
   DMUL
   DASTORE
  L16
   LINENUMBER 108 L16
   ILOAD 7
   ICONST_1
   IADD
   ISTORE 7
   GOTO L13
  ```

  , while with the local reference of the `shift` and `factor` arrays, the bytecode will be

  ```
  L14
   LINENUMBER 107 L14
   ALOAD 0
   INVOKESPECIAL org/apache/spark/mllib/feature/StandardScalerModel.factor ()[D
   ASTORE 9
  L15
   LINENUMBER 108 L15
   FRAME FULL [org/apache/spark/mllib/feature/StandardScalerModel org/apache/spark/mllib/linalg/Vector [D org/apache/spark/mllib/linalg/Vector org/apache/spark/mllib/linalg/DenseVector T [D I I [D] []
   ILOAD 8
   ILOAD 7
   IF_ICMPGE L16
  L17
   LINENUMBER 109 L17
   ALOAD 6
   ILOAD 8
   ALOAD 6
   ILOAD 8
   DALOAD
   ALOAD 2
   ILOAD 8
   DALOAD
   DSUB
   ALOAD 9
   ILOAD 8
   DALOAD
   DMUL
   DASTORE
  L18
   LINENUMBER 110 L18
   ILOAD 8
   ICONST_1
   IADD
   ISTORE 8
   GOTO L15
  ```

  You can see that with the local reference, both of the arrays will be on the stack, so the JVM can access the values without calling `INVOKESPECIAL`.

  Author: DB Tsai <dbtsai@alpinenow.com>

  Closes #3435 from dbtsai/standardscaler and squashes the following commits:

  85885a9 [DB Tsai] revert to have lazy in shift array.
  daf2b06 [DB Tsai] Address the feedback
  cdb5cef [DB Tsai] small change
  9c51eef [DB Tsai] style
  fc795e4 [DB Tsai] update
  5bffd3d [DB Tsai] first commit

  (cherry picked from commit bf1a6aaac577757a293a573fe8eae9669697310a)
  Signed-off-by: Xiangrui Meng <meng@databricks.com>

* [SPARK-4601][Streaming] Set correct call site for streaming jobs so that it is displayed correctly on the Spark UI (Tathagata Das, 2014-11-25; 2 files, -1/+6)

  When running the NetworkWordCount, the description of the word count jobs is set as "getCallsite at DStream:xxx". It should instead be set to the line of the streaming application containing the output operation that led to the job being created. This happens because the call site is incorrectly set in the thread launching the jobs. This PR fixes that.

  Author: Tathagata Das <tathagata.das1565@gmail.com>

  Closes #3455 from tdas/streaming-callsite-fix and squashes the following commits:

  69fc26f [Tathagata Das] Set correct call site for streaming jobs so that it is displayed correctly on the Spark UI

  (cherry picked from commit 69cd53eae205eb10d52eaf38466db58a23b6ae81)
  Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>

* [SPARK-4344][DOCS] adding documentation on spark.yarn.user.classpath.first (arahuja, 2014-11-25; 1 file, -0/+1)

  The documentation for the two parameters is the same, with a pointer from the standalone parameter to the YARN parameter.

  Author: arahuja <aahuja11@gmail.com>

  Closes #3209 from arahuja/yarn-classpath-first-param and squashes the following commits:

  51cb9b2 [arahuja] [SPARK-4344][DOCS] adding documentation for YARN on userClassPathFirst

  (cherry picked from commit d240760191f692ee7b88dfc82f06a31a340a88a2)
  Signed-off-by: Thomas Graves <tgraves@apache.org>

* [SPARK-4381][Streaming] Add warning log when user sets spark.master to local in Spark Streaming and no job is executed (jerryshao, 2014-11-25; 1 file, -0/+5)

  Author: jerryshao <saisai.shao@intel.com>

  Closes #3244 from jerryshao/SPARK-4381 and squashes the following commits:

  d2486c7 [jerryshao] Improve the warning log
  d726e85 [jerryshao] Add local[1] to the filter condition
  eca428b [jerryshao] Add warning log

  (cherry picked from commit fef27b29431c2adadc17580f26c23afa6a3bd1d2)
  Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
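
  A sketch of the kind of check the commit adds, with illustrative names and message wording (the squashed commits indicate the filter matches both local and local[1]):

  ```scala
  object LocalModeCheck {
    // Receivers each occupy a core, so "local" or "local[1]" leaves no core
    // free for processing received data.
    def warnIfUnderprovisioned(master: String, hasReceivers: Boolean): Unit =
      if (hasReceivers && (master == "local" || master == "local[1]")) {
        println("WARN: spark.master should be local[n], n > 1, when using " +
          "receivers in local mode; otherwise no jobs will be executed.")
      }
  }
  ```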

* [SPARK-4535][Streaming] Fix the errors in comments (q00251598, 2014-11-25; 6 files, -9/+9)

  Change `NetworkInputDStream` to `ReceiverInputDStream` and `ReceiverInputTracker` to `ReceiverTracker`.

  Author: q00251598 <qiyadong@huawei.com>

  Closes #3400 from watermen/fix-comments and squashes the following commits:

  75d795c [q00251598] change 'NetworkInputDStream' to 'ReceiverInputDStream' && change 'ReceiverInputTracker' to 'ReceiverTracker'

  (cherry picked from commit a51118a34a4617c07373480c4b021e53124c3c00)
  Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>

  Conflicts:
  	examples/src/main/scala/org/apache/spark/examples/streaming/StatefulNetworkWordCount.scala

* [SPARK-4526][MLLIB] GradientDescent gets a wrong gradient value according to the gradient formula (GuoQiang Li, 2014-11-25; 1 file, -19/+26)

  This is caused by the miniBatchSize parameter: the number of rows returned by `RDD.sample` is not fixed.

  cc mengxr

  Author: GuoQiang Li <witgo@qq.com>

  Closes #3399 from witgo/GradientDescent and squashes the following commits:

  13cb228 [GuoQiang Li] review commit
  668ab66 [GuoQiang Li] Double to Long
  b6aa11a [GuoQiang Li] Check miniBatchSize is greater than 0
  0b5c3e3 [GuoQiang Li] Minor fix
  12e7424 [GuoQiang Li] GradientDescent get a wrong gradient value according to the gradient formula, which is caused by the miniBatchSize parameter.

  (cherry picked from commit f515f9432b05f7e090b651c5536aa706d1cde487)
  Signed-off-by: Xiangrui Meng <meng@databricks.com>
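
  A hedged sketch of the described fix: since RDD.sample returns a random number of rows, the gradient average must divide by the actual sampled count rather than the expected miniBatchFraction * n. Names here are illustrative:

  ```scala
  import org.apache.spark.rdd.RDD

  object MiniBatchSketch {
    // Sum gradients and count the rows actually sampled, then average by
    // that count; guard against an empty mini-batch.
    def miniBatchGradient(data: RDD[Double], fraction: Double, seed: Long)
                         (grad: Double => Double): Option[Double] = {
      val sampled = data.sample(withReplacement = false, fraction, seed)
      val (sum, count) = sampled.aggregate((0.0, 0L))(
        seqOp = { case ((g, c), x) => (g + grad(x), c + 1L) },
        combOp = { case ((g1, c1), (g2, c2)) => (g1 + g2, c1 + c2) })
      if (count > 0) Some(sum / count) else None
    }
  }
  ```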

* [SPARK-4596][MLLib] Refactorize Normalizer to make code cleaner (DB Tsai, 2014-11-25; 1 file, -10/+19)

  In this refactoring, the performance is slightly increased by removing the overhead of the Breeze vector. The bottleneck is still in the Breeze norm, which is implemented via activeIterator; this inefficiency will be addressed in the next PR. At least, this PR makes the code more consistent in the codebase.

  Author: DB Tsai <dbtsai@alpinenow.com>

  Closes #3446 from dbtsai/normalizer and squashes the following commits:

  e20a2b9 [DB Tsai] first commit

  (cherry picked from commit 89f912264603741c7d980135c26102d63e11791f)
  Signed-off-by: Xiangrui Meng <meng@databricks.com>