...
* [SPARK-4611][MLlib] Implement the efficient vector norm (DB Tsai, 2014-12-02, 4 files, -6/+79)
  The vector norm in breeze is implemented by `activeIterator`, which is known to be very slow. In this PR, an efficient vector norm is implemented, and with this API, `Normalizer` and `k-means` see a big performance improvement. Here is the benchmark against the mnist8m dataset:

  a) `Normalizer`
     Before:        DenseVector: 68.25 secs    SparseVector: 17.01 secs
     With this PR:  DenseVector: 12.71 secs    SparseVector:  2.73 secs
  b) `k-means`
     Before:        DenseVector: 83.46 secs    SparseVector: 61.60 secs
     With this PR:  DenseVector: 70.04 secs    SparseVector: 59.05 secs

  Author: DB Tsai <dbtsai@alpinenow.com>
  Closes #3462 from dbtsai/norm and squashes the following commits:
  63c7165 [DB Tsai] typo
  0c3637f [DB Tsai] add import org.apache.spark.SparkContext._ back
  6fa616c [DB Tsai] address feedback
  9b7cb56 [DB Tsai] move norm to static method
  0b632e6 [DB Tsai] kmeans
  dbed124 [DB Tsai] style
  c1a877c [DB Tsai] first commit
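  The change replaces breeze's iterator-based norm with a direct loop over the vector's value array. A minimal sketch of the idea (not the actual MLlib code):

  ```scala
  // Compute the L2 norm with a tight while loop over the backing value array.
  // For a sparse vector only the stored values contribute to the norm, so
  // iterating the values array alone suffices, with no iterator allocation.
  def fastL2Norm(values: Array[Double]): Double = {
    var sum = 0.0
    var i = 0
    while (i < values.length) {
      sum += values(i) * values(i)
      i += 1
    }
    math.sqrt(sum)
  }
  ```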
* MAINTENANCE: Automated closing of pull requests. (Patrick Wendell, 2014-12-01, 0 files, -0/+0)
  This commit exists to close the following pull requests on Github:
  Closes #1612 (close requested by 'marmbrus')
  Closes #2723 (close requested by 'marmbrus')
  Closes #1737 (close requested by 'marmbrus')
  Closes #2252 (close requested by 'marmbrus')
  Closes #2029 (close requested by 'marmbrus')
  Closes #2386 (close requested by 'marmbrus')
  Closes #2997 (close requested by 'marmbrus')
* [SPARK-4268][SQL] Use #::: to get benefit from Stream in SqlLexical.allCaseVersions (zsxwing, 2014-12-01, 1 file, -2/+2)
  In addition, `s.isEmpty` is used to eliminate the string comparison.

  Author: zsxwing <zsxwing@gmail.com>
  Closes #3132 from zsxwing/SPARK-4268 and squashes the following commits:
  358e235 [zsxwing] Improvement of allCaseVersions
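  A sketch of what the method looks like after the change (reconstructed from the description, not copied from the source): `#:::` concatenates Streams lazily, so case variants are generated on demand rather than eagerly.

  ```scala
  // Generate all upper/lower-case variations of a keyword as a lazy Stream.
  def allCaseVersions(s: String, prefix: String = ""): Stream[String] = {
    if (s.isEmpty) {
      Stream(prefix)
    } else {
      allCaseVersions(s.tail, prefix + s.head.toLower) #:::
        allCaseVersions(s.tail, prefix + s.head.toUpper)
    }
  }
  ```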
* [SPARK-4529][SQL] Support view with column alias (Daoyuan Wang, 2014-12-01, 2 files, -3/+3)
  Support view definitions like `CREATE VIEW view3(valoo) TBLPROPERTIES ("fear" = "factor") AS SELECT upper(value) FROM src WHERE key=86;`, where `valoo` is the alias of `upper(value)`. This is the missing part of SPARK-4239, needed for full view support.

  Author: Daoyuan Wang <daoyuan.wang@intel.com>
  Closes #3396 from adrian-wang/viewcolumn and squashes the following commits:
  4d001d0 [Daoyuan Wang] support view with column alias
* [SQL][DOC] Date type in SQL programming guide (Daoyuan Wang, 2014-12-01, 1 file, -0/+23)
  Author: Daoyuan Wang <daoyuan.wang@intel.com>
  Closes #3535 from adrian-wang/datedoc and squashes the following commits:
  18ff1ed [Daoyuan Wang] [DOC] Date type
* [SQL] Minor fix for doc and comment (wangfei, 2014-12-01, 3 files, -5/+7)
  Author: wangfei <wangfei1@huawei.com>
  Closes #3533 from scwf/sql-doc1 and squashes the following commits:
  962910b [wangfei] doc and comment fix
* [SPARK-4658][SQL] Code documentation issue in DDL of datasource API (ravipesala, 2014-12-01, 2 files, -3/+3)
  Author: ravipesala <ravindra.pesala@huawei.com>
  Closes #3516 from ravipesala/ddl_doc and squashes the following commits:
  d101fdf [ravipesala] Style issues fixed
  d2238cd [ravipesala] Corrected documentation
* [SPARK-4650][SQL] Support multiple columns in the countDistinct function, like count(distinct c1, c2, ...) in Spark SQL (ravipesala, 2014-12-01, 2 files, -1/+9)
  Author: ravipesala <ravindra.pesala@huawei.com>
  Author: Michael Armbrust <michael@databricks.com>
  Closes #3511 from ravipesala/countdistinct and squashes the following commits:
  cc4dbb1 [ravipesala] style
  070e12a [ravipesala] Supporting multi column support in count(distinct c1,c2..) in Spark SQL
* [SPARK-4358][SQL] Let BigDecimal do the type compatibility checking (Liang-Chi Hsieh, 2014-12-01, 1 file, -8/+3)
  Remove the hardcoded max and min values for each type, and let BigDecimal check type compatibility instead.

  Author: Liang-Chi Hsieh <viirya@gmail.com>
  Closes #3208 from viirya/more_numericLit and squashes the following commits:
  e9834b4 [Liang-Chi Hsieh] Remove byte and short types for number literal.
  1bd1825 [Liang-Chi Hsieh] Fix Indentation and make the modification clearer.
  cf1a997 [Liang-Chi Hsieh] Modified for comment to add a rule of analysis that adds a cast.
  91fe489 [Liang-Chi Hsieh] add Byte and Short.
  1bdc69d [Liang-Chi Hsieh] Let BigDecimal do checking type compatibility.
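  A standalone illustration of the approach (not the Catalyst source): instead of comparing a literal against hand-written per-type bounds, ask BigDecimal itself whether the value fits the target type.

  ```scala
  val v = BigDecimal("300")
  println(v.isValidByte)  // false: 300 overflows Byte
  println(v.isValidShort) // true
  println(v.isValidInt)   // true
  println(v.isValidLong)  // true
  ```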
* [SQL] Add @group tab in limit() and count() (Jacky Li, 2014-12-01, 1 file, -0/+4)
  The group tab is missing for scaladoc.

  Author: Jacky Li <jacky.likun@gmail.com>
  Closes #3458 from jackylk/patch-7 and squashes the following commits:
  0121a70 [Jacky Li] add @group tab in limit() and count()
* [SPARK-4258][SQL][DOC] Documents spark.sql.parquet.filterPushdown (Cheng Lian, 2014-12-01, 1 file, -6/+16)
  Documents `spark.sql.parquet.filterPushdown`, explaining why it is turned off by default and when it is safe to turn on.

  Author: Cheng Lian <lian@databricks.com>
  Closes #3440 from liancheng/parquet-filter-pushdown-doc and squashes the following commits:
  2104311 [Cheng Lian] Documents spark.sql.parquet.filterPushdown
* Documentation: add description for repartitionAndSortWithinPartitions (Madhu Siddalingaiah, 2014-12-01, 1 file, -0/+6)
  Author: Madhu Siddalingaiah <madhu@madhu.com>
  Closes #3390 from msiddalingaiah/master and squashes the following commits:
  cbccbfe [Madhu Siddalingaiah] Documentation: replace <b> with <code> (again)
  332f7a2 [Madhu Siddalingaiah] Documentation: replace <b> with <code>
  cd2b05a [Madhu Siddalingaiah] Merge remote-tracking branch 'upstream/master'
  0fc12d7 [Madhu Siddalingaiah] Documentation: add description for repartitionAndSortWithinPartitions
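  A usage sketch of the operator being documented (assumes `sc` is in scope; in Spark 1.x the pair-RDD implicits may also require `import org.apache.spark.SparkContext._`). It repartitions by key and sorts within each partition in a single shuffle, which is more efficient than calling repartition() and then sorting inside each partition.

  ```scala
  import org.apache.spark.HashPartitioner

  val pairs = sc.parallelize(Seq((3, "c"), (1, "a"), (2, "b"), (4, "d")))
  val sorted = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2))
  // Print each partition's contents; keys are sorted within every partition.
  sorted.glom().collect().foreach(part => println(part.mkString(", ")))
  ```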
* [SPARK-4661][Core] Minor code and docs cleanup (zsxwing, 2014-12-01, 3 files, -3/+2)
  Author: zsxwing <zsxwing@gmail.com>
  Closes #3521 from zsxwing/SPARK-4661 and squashes the following commits:
  03cbe3f [zsxwing] Minor code and docs cleanup
* [SPARK-4664][Core] Throw an exception when spark.akka.frameSize > 2047 (zsxwing, 2014-12-01, 1 file, -1/+8)
  If `spark.akka.frameSize` > 2047, it will overflow and become negative. `maxFrameSizeBytes` should have an assertion to warn people.

  Author: zsxwing <zsxwing@gmail.com>
  Closes #3527 from zsxwing/SPARK-4664 and squashes the following commits:
  0089c7a [zsxwing] Throw an exception when spark.akka.frameSize > 2047
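  Why 2047 is the limit: the frame size is configured in MB and converted to bytes with Int arithmetic, and 2048 MB is exactly one past Int.MaxValue bytes, as this small demonstration shows:

  ```scala
  val mb = 1024 * 1024
  println(2047 * mb) // 2146435072: still a valid positive Int
  println(2048 * mb) // -2147483648: the multiplication wraps around to negative
  ```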
* SPARK-2192 [BUILD] Examples Data Not in Binary Distribution (Sean Owen, 2014-12-01, 1 file, -0/+3)
  Simply add data/ to distributions. This adds about 291 KB (compressed) to the tarball, FYI.

  Author: Sean Owen <sowen@cloudera.com>
  Closes #3480 from srowen/SPARK-2192 and squashes the following commits:
  47688f1 [Sean Owen] Add data/ to distributions
* Fix wrong file name pattern in .gitignore (Kousuke Saruta, 2014-12-01, 1 file, -1/+1)
  In .gitignore there is an entry for spark-*-bin.tar.gz, but considering make-distribution.sh, the name pattern should be spark-*-bin-*.tgz. This change is really small, so I didn't open an issue in JIRA; if one is needed, please let me know.

  Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
  Closes #3529 from sarutak/fix-wrong-tgz-pattern and squashes the following commits:
  de3c70a [Kousuke Saruta] Fixed wrong file name pattern in .gitignore
* [SPARK-4632] version update (Prabeesh K, 2014-11-30, 1 file, -2/+2)
  Author: Prabeesh K <prabsmails@gmail.com>
  Closes #3495 from prabeesh/master and squashes the following commits:
  ab03d50 [Prabeesh K] Update pom.xml
  8c6437e [Prabeesh K] Revert
  e10b40a [Prabeesh K] version update
  dbac9eb [Prabeesh K] Revert
  ec0b1c3 [Prabeesh K] [SPARK-4632] version update
  a835505 [Prabeesh K] [SPARK-4632] version update
  831391b [Prabeesh K] [SPARK-4632] version update
* MAINTENANCE: Automated closing of pull requests. (Patrick Wendell, 2014-11-30, 0 files, -0/+0)
  This commit exists to close the following pull requests on Github:
  Closes #2915 (close requested by 'JoshRosen')
  Closes #3140 (close requested by 'JoshRosen')
  Closes #3366 (close requested by 'JoshRosen')
* [DOC] Fixes formatting typo in SQL programming guide (Cheng Lian, 2014-11-30, 1 file, -2/+0)
  Author: Cheng Lian <lian@databricks.com>
  Closes #3498 from liancheng/fix-sql-doc-typo and squashes the following commits:
  865ecd7 [Cheng Lian] Fixes formatting typo in SQL programming guide
* [SPARK-4656][Doc] Typo in Programming Guide markdown (lewuathe, 2014-11-30, 1 file, -1/+1)
  Fixes a grammatical error in the Programming Guide document.

  Author: lewuathe <lewuathe@me.com>
  Closes #3412 from Lewuathe/typo-programming-guide and squashes the following commits:
  a3e2f00 [lewuathe] Typo in Programming Guide markdown
* [SPARK-4623] Add some error information when using spark-sql in yarn-cluster mode (carlmartin, 2014-11-30, 2 files, -0/+11)
  If spark-sql is used in yarn-cluster mode, print an error message, just as the spark shell does in yarn-cluster mode.

  Author: carlmartin <carlmartinmax@gmail.com>
  Author: huangzhaowei <carlmartinmax@gmail.com>
  Closes #3479 from SaintBacchus/sparkSqlShell and squashes the following commits:
  35829a9 [carlmartin] improve the description of comment
  e6c1eb7 [carlmartin] add a comment in bin/spark-sql to remind user who wants to change the class
  f1c5c8d [carlmartin] Merge branch 'master' into sparkSqlShell
  8e112c5 [huangzhaowei] singular form
  ec957bc [carlmartin] Add the some error infomation if using spark-sql in yarn-cluster mode
  7bcecc2 [carlmartin] Merge branch 'master' of https://github.com/apache/spark into codereview
  4fad75a [carlmartin] Add the Error infomation using spark-sql in yarn-cluster mode
* SPARK-2143 [WEB UI] Add Spark version to UI footer (Sean Owen, 2014-11-30, 1 file, -0/+10)
  This PR adds the Spark version number to the UI footer; this is how it looks:
  ![screen shot 2014-11-21 at 22 58 40](https://cloud.githubusercontent.com/assets/822522/5157738/f4822094-7316-11e4-98f1-333a535fdcfa.png)

  Author: Sean Owen <sowen@cloudera.com>
  Closes #3410 from srowen/SPARK-2143 and squashes the following commits:
  e9b3a7a [Sean Owen] Add Spark version to footer
* [DOCS][BUILD] Add instruction to use change-version-to-2.11.sh in 'Building for Scala 2.11' (Takuya UESHIN, 2014-11-30, 1 file, -0/+1)
  To build with Scala 2.11, we have to execute `change-version-to-2.11.sh` before Maven runs; otherwise inter-module dependencies are broken.

  Author: Takuya UESHIN <ueshin@happy-camper.st>
  Closes #3361 from ueshin/docs/building-spark_2.11 and squashes the following commits:
  1d29126 [Takuya UESHIN] Add instruction to use change-version-to-2.11.sh in 'Building for Scala 2.11'.
* SPARK-4507: PR merge script should support closing multiple JIRA tickets (Takayuki Hasegawa, 2014-11-29, 1 file, -7/+11)
  This will fix SPARK-4507. For pull requests that reference multiple JIRAs in their titles, it would be helpful if the PR merge script offered to close all of them.

  Author: Takayuki Hasegawa <takayuki.hasegawa0311@gmail.com>
  Closes #3428 from hase1031/SPARK-4507 and squashes the following commits:
  bf6d64b [Takayuki Hasegawa] SPARK-4507: try to resolve issue when no JIRAs in title
  401224c [Takayuki Hasegawa] SPARK-4507: moved codes as before
  ce89021 [Takayuki Hasegawa] SPARK-4507: PR merge script should support closing multiple JIRA tickets
* [SPARK-4505][Core] Add a ClassTag parameter to CompactBuffer[T] (zsxwing, 2014-11-29, 1 file, -8/+10)
  Added a ClassTag parameter to CompactBuffer, so CompactBuffer[T] can create primitive arrays for primitive types. This reduces memory usage for primitive types significantly, at only a minor performance cost. Here is my test code:

  ```Scala
  // Call org.apache.spark.util.SizeEstimator.estimate
  def estimateSize(obj: AnyRef): Long = {
    val c = Class.forName("org.apache.spark.util.SizeEstimator$")
    val f = c.getField("MODULE$")
    val o = f.get(c)
    val m = c.getMethod("estimate", classOf[Object])
    m.setAccessible(true)
    m.invoke(o, obj).asInstanceOf[Long]
  }

  sc.parallelize(1 to 10000).groupBy(_ => 1).foreach {
    case (k, v) => println(v.getClass() + " size: " + estimateSize(v))
  }
  ```

  Using the previous CompactBuffer, this output:

  ```
  class org.apache.spark.util.collection.CompactBuffer size: 313358
  ```

  Using the new CompactBuffer, this output:

  ```
  class org.apache.spark.util.collection.CompactBuffer size: 65712
  ```

  In this case, the new `CompactBuffer` used only 20% of the memory of the previous one. It's really helpful for `groupByKey` when using a primitive value.

  Author: zsxwing <zsxwing@gmail.com>
  Closes #3378 from zsxwing/SPARK-4505 and squashes the following commits:
  4abdbba [zsxwing] Add a ClassTag parameter to reduce the memory usage of CompactBuffer[T] when T is a primitive type
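  A minimal sketch of the mechanism (not the actual CompactBuffer source): with a ClassTag in scope, `new Array[T](n)` allocates a primitive array (e.g. int[]) when T is a primitive type, instead of an array of boxed objects.

  ```scala
  import scala.reflect.ClassTag

  class GrowableBuffer[T: ClassTag] {
    private var elems = new Array[T](8)  // int[] for T = Int, not Object[]
    private var size = 0
    def +=(value: T): this.type = {
      if (size == elems.length) {
        // Grow by doubling, copying the old contents over.
        val bigger = new Array[T](elems.length * 2)
        Array.copy(elems, 0, bigger, 0, size)
        elems = bigger
      }
      elems(size) = value
      size += 1
      this
    }
  }
  ```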
* [SPARK-4057] Use -agentlib instead of -Xdebug in sbt-launch-lib.bash for debugging (Kousuke Saruta, 2014-11-29, 1 file, -1/+1)
  In sbt-launch-lib.bash, the -Xdebug option is used for debugging. We should use the -agentlib option for Java 6+.

  Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
  Closes #2904 from sarutak/SPARK-4057 and squashes the following commits:
  39b5320 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-4057
  26b4af8 [Kousuke Saruta] Improved java option for debugging
* Include the key name when failing on an invalid value. (Stephen Haberman, 2014-11-29, 1 file, -1/+1)
  Admittedly a really small tweak.

  Author: Stephen Haberman <stephen@exigencecorp.com>
  Closes #3514 from stephenh/include-key-name-in-npe and squashes the following commits:
  937740a [Stephen Haberman] Include the key name when failing on an invalid value.
* [SPARK-3398] [SPARK-4325] [EC2] Use EC2 status checks. (Nicholas Chammas, 2014-11-29, 1 file, -12/+36)
  This PR re-introduces [0e648bc](https://github.com/apache/spark/commit/0e648bc2bedcbeb55fce5efac04f6dbad9f063b4) from PR #2339, which somehow never made it into the codebase. Additionally, it removes a now-unnecessary linear backoff on the SSH checks, since we block on EC2 status checks before testing SSH.

  Author: Nicholas Chammas <nicholas.chammas@gmail.com>
  Closes #3195 from nchammas/remove-ec2-ssh-backoff and squashes the following commits:
  efb29e1 [Nicholas Chammas] Revert "Remove linear backoff."
  ef3ca99 [Nicholas Chammas] reuse conn
  adb4eaa [Nicholas Chammas] Remove linear backoff.
  55caa24 [Nicholas Chammas] Check EC2 status checks before SSH.
* MAINTENANCE: Automated closing of pull requests. (Patrick Wendell, 2014-11-29, 0 files, -0/+0)
  This commit exists to close the following pull requests on Github:
  Closes #3451 (close requested by 'pwendell')
  Closes #1310 (close requested by 'pwendell')
  Closes #3207 (close requested by 'JoshRosen')
* [SPARK-4597] Use proper exception and reset variable in Utils.createTempDir() (Liang-Chi Hsieh, 2014-11-28, 1 file, -1/+1)
  `File.exists()` and `File.mkdirs()` only throw `SecurityException`, not `IOException`. Also, when an exception is thrown, `dir` should be reset as well.

  Author: Liang-Chi Hsieh <viirya@gmail.com>
  Closes #3449 from viirya/fix_createtempdir and squashes the following commits:
  36cacbd [Liang-Chi Hsieh] Use proper exception and reset variable.
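  A hedged sketch of the retry pattern being fixed (reconstructed, not the actual Utils source): catch SecurityException, which is what File.exists()/File.mkdirs() can actually throw, and reset `dir` so the next attempt starts from a fresh candidate path.

  ```scala
  import java.io.{File, IOException}
  import java.util.UUID

  def createTempDir(root: String, maxAttempts: Int = 10): File = {
    var attempts = 0
    var dir: File = null
    while (dir == null) {
      attempts += 1
      if (attempts > maxAttempts) {
        throw new IOException(s"Failed to create a temp directory under $root after $maxAttempts attempts")
      }
      try {
        dir = new File(root, "spark-" + UUID.randomUUID.toString)
        if (dir.exists() || !dir.mkdirs()) dir = null  // collision or creation failure: retry
      } catch {
        case _: SecurityException => dir = null  // reset so the loop tries a new name
      }
    }
    dir
  }
  ```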
* SPARK-1450 [EC2] Specify the default zone in the EC2 script help (Sean Owen, 2014-11-28, 1 file, -1/+1)
  This looks like a one-liner, so I took a shot at it. There can be no fixed default availability zone, since the names differ per region. But the default behavior can be documented:

  ```
  if opts.zone == "":
      opts.zone = random.choice(conn.get_all_zones()).name
  ```

  Author: Sean Owen <sowen@cloudera.com>
  Closes #3454 from srowen/SPARK-1450 and squashes the following commits:
  9193cf3 [Sean Owen] Document that --zone defaults to a single random zone
* [SPARK-4584] [yarn] Remove security manager from Yarn AM. (Marcelo Vanzin, 2014-11-28, 1 file, -46/+14)
  The security manager adds a lot of overhead to the runtime of the app and causes a severe performance regression. Even stubbing out all unneeded methods (all except checkExit()) does not help. So, instead, penalize users who do an explicit System.exit() by leaving them in "undefined behavior" territory: if they do that, the Yarn backend won't be able to report the final app status to the RM, so the final status of the application might not match the user's expectations.

  One side effect of the change is that users who do an explicit System.exit() will lose the AM retry functionality. Since there is no way to know whether the exit was a success or a failure, the AM currently errs on the side of treating it as a successful exit.

  Author: Marcelo Vanzin <vanzin@cloudera.com>
  Closes #3484 from vanzin/SPARK-4584 and squashes the following commits:
  21f2502 [Marcelo Vanzin] Do not retry apps that use System.exit().
  4198b3b [Marcelo Vanzin] [SPARK-4584] [yarn] Remove security manager from Yarn AM.
* [SPARK-4193][BUILD] Disable doclint in Java 8 to prevent build errors. (Takuya UESHIN, 2014-11-28, 7 files, -6/+35)
  Author: Takuya UESHIN <ueshin@happy-camper.st>
  Closes #3058 from ueshin/issues/SPARK-4193 and squashes the following commits:
  e096bb1 [Takuya UESHIN] Add a plugin declaration to pluginManagement.
  6762ec2 [Takuya UESHIN] Fix usage of -Xdoclint javadoc option.
  fdb280a [Takuya UESHIN] Fix Javadoc errors.
  4745f3c [Takuya UESHIN] Merge branch 'master' into issues/SPARK-4193
  923e2f0 [Takuya UESHIN] Use doclint option `-missing` instead of `none`.
  30d6718 [Takuya UESHIN] Fix Javadoc errors.
  b548017 [Takuya UESHIN] Disable doclint in Java 8 to prevent from build error.
* [SPARK-4643] [Build] Remove unneeded staging repositories from build (Daoyuan Wang, 2014-11-28, 1 file, -24/+0)
  The old location will return a 404.

  Author: Daoyuan Wang <daoyuan.wang@intel.com>
  Closes #3504 from adrian-wang/repo and squashes the following commits:
  f604e05 [Daoyuan Wang] already in maven central, remove at all
  f494fac [Daoyuan Wang] spark staging repo outdated
* Delete unnecessary function (KaiXinXiaoLei, 2014-11-28, 1 file, -7/+0)
  When building Spark with sbt, the function runAlternateBoot in sbt/sbt-launch-lib.bash is not used, and it is not called from Spark code either, so I think this function is unnecessary. The sbt.boot.properties option can be configured on the command line when building Spark, e.g.: sbt/sbt assembly -Dsbt.boot.properties=$bootpropsfile. The file from https://github.com/sbt/sbt-launcher-package has changed, and runAlternateBoot has been deleted in the upstream project, so I think the Spark project should delete this function from sbt/sbt-launch-lib.bash as well. Thanks.

  Author: KaiXinXiaoLei <huleilei1@huawei.com>
  Closes #3224 from KaiXinXiaoLei/deleteFunction and squashes the following commits:
  e8eac49 [KaiXinXiaoLei] Delete blank lines.
  efe36d4 [KaiXinXiaoLei] Delete unnecessary function
* [SPARK-4645][SQL] Disables asynchronous execution in Hive 0.13.1 HiveThriftServer2 (Cheng Lian, 2014-11-28, 1 file, -100/+39)
  This PR disables HiveThriftServer2 asynchronous execution by setting the `runInBackground` argument in `ExecuteStatementOperation` to `false`, and reverting `SparkExecuteStatementOperation.run` in the Hive 13 shim to the Hive 12 version. This change makes the Simba ODBC driver v1.0.0.1000 work.

  Author: Cheng Lian <lian@databricks.com>
  Closes #3506 from liancheng/disable-async-exec and squashes the following commits:
  593804d [Cheng Lian] Disables asynchronous execution in Hive 0.13.1 HiveThriftServer2
* [SPARK-4619][Storage] Delete redundant time suffix (maji2014, 2014-11-28, 1 file, -1/+1)
  A time suffix already exists in Utils.getUsedTimeMs(startTime), so there is no need to append it again; delete it.

  Author: maji2014 <maji3@asiainfo.com>
  Closes #3475 from maji2014/SPARK-4619 and squashes the following commits:
  df0da4e [maji2014] delete redundant time suffix
* [SPARK-4613][Core] Java API for JdbcRDD (Cheng Lian, 2014-11-27, 3 files, -5/+204)
  This PR introduces a set of Java APIs for using `JdbcRDD`:

  1. Trait (interface) `JdbcRDD.ConnectionFactory`: equivalent to the `getConnection: () => Connection` parameter in the `JdbcRDD` constructor.
  2. Two overloaded versions of `JdbcRDD.create`: used to create a `JavaRDD` that wraps a `JdbcRDD`.

  Author: Cheng Lian <lian@databricks.com>
  Closes #3478 from liancheng/japi-jdbc-rdd and squashes the following commits:
  9a54625 [Cheng Lian] Only shutdowns a single DB rather than the whole Derby driver
  d4cedc5 [Cheng Lian] Moves Java JdbcRDD test case to a separate test suite
  ffcdf2e [Cheng Lian] Java API for JdbcRDD
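  For context, a usage sketch of the existing Scala `JdbcRDD` that the new Java API wraps (connection string and query are hypothetical; `sc` is assumed in scope). The SQL must contain two `?` placeholders that Spark binds to each partition's bounds.

  ```scala
  import java.sql.DriverManager
  import org.apache.spark.rdd.JdbcRDD

  val rdd = new JdbcRDD(
    sc,
    () => DriverManager.getConnection("jdbc:derby:memory:testdb"),  // getConnection
    "SELECT id, name FROM people WHERE id >= ? AND id <= ?",        // ?'s bind partition bounds
    1, 100, 3,                                                      // lowerBound, upperBound, numPartitions
    rs => (rs.getInt(1), rs.getString(2)))                          // mapRow
  ```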
* [SPARK-4626] Kill a task only if the executorId is (still) registered with the scheduler (roxchkplusony, 2014-11-27, 1 file, -1/+7)
  Author: roxchkplusony <roxchkplusony@gmail.com>
  Closes #3483 from roxchkplusony/bugfix/4626 and squashes the following commits:
  aba9184 [roxchkplusony] replace warning message per review
  5e7fdea [roxchkplusony] [SPARK-4626] Kill a task only if the executorId is (still) registered with the scheduler
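  A self-contained sketch of the guard described in the title (all names hypothetical, not the actual scheduler code): only act on a kill request when the target executor is still registered; otherwise warn instead of failing.

  ```scala
  val registeredExecutors = Map("exec-1" -> "host-a")  // executorId -> host

  def killTask(taskId: Long, executorId: String): Unit =
    registeredExecutors.get(executorId) match {
      case Some(host) =>
        println(s"Sending kill for task $taskId to executor $executorId on $host")
      case None =>
        println(s"Ignoring kill for task $taskId: executor $executorId is not registered")
    }
  ```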
* SPARK-4170 [CORE] Closure problems when running Scala app that "extends App" (Sean Owen, 2014-11-27, 3 files, -34/+44)
  Warn against subclassing scala.App, and remove one instance of this in the examples.

  Author: Sean Owen <sowen@cloudera.com>
  Closes #3497 from srowen/SPARK-4170 and squashes the following commits:
  4a6131f [Sean Owen] Restore multiline string formatting
  a8ca895 [Sean Owen] Warn against subclassing scala.App, and remove one instance of this in examples
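  Why the warning matters: scala.App relies on DelayedInit, so vals defined in the object body may not yet be initialized when a closure shipped to an executor reads them (it can observe null or 0 instead). A plain main method, as in this sketch, avoids the problem:

  ```scala
  object MyApp {
    def main(args: Array[String]): Unit = {
      val multiplier = 3  // a local val is safely captured by closures
      // val result = sc.parallelize(1 to 10).map(_ * multiplier).sum()
    }
  }
  ```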
* [Release] Automate generation of contributors list (Andrew Or, 2014-11-26, 2 files, -0/+330)
  This commit provides a script that computes the contributors list by linking the GitHub commits with JIRA issues. Automatically translating GitHub usernames remains a TODO at this point.
* [SPARK-732][SPARK-3628][CORE][RESUBMIT] Eliminate duplicate updates on accumulators (CodingCat, 2014-11-26, 4 files, -30/+67)
  https://issues.apache.org/jira/browse/SPARK-3628

  In the current implementation, the accumulator is updated for every successfully finished task, even if the task is from a resubmitted stage, which makes the accumulator counter-intuitive.

  In this patch, I changed the way the DAGScheduler updates the accumulator: the DAGScheduler maintains a hash table mapping the stage id to the received <accumulator_id, value> pairs. Only when the stage becomes independent (no job needs it any more) do we accumulate the values of those pairs; when a task finishes, we check whether the hash table contains that stage id, and save the <accumulator_id, value> pair only when the task is the first finished task of a new stage or the stage is running for the first attempt.

  Author: CodingCat <zhunansjtu@gmail.com>
  Closes #2524 from CodingCat/SPARK-732-1 and squashes the following commits:
  701a1e8 [CodingCat] roll back change on Accumulator.scala
  1433e6f [CodingCat] make MIMA happy
  b233737 [CodingCat] address Matei's comments
  02261b8 [CodingCat] rollback some changes
  6b0aff9 [CodingCat] update document
  2b2e8cf [CodingCat] updateAccumulator
  83b75f8 [CodingCat] style fix
  84570d2 [CodingCat] re-enable the bad accumulator guard
  1e9e14d [CodingCat] add NPE guard
  21b6840 [CodingCat] simplify the patch
  88d1f03 [CodingCat] fix rebase error
  f74266b [CodingCat] add test case for resubmitted result stage
  5cf586f [CodingCat] de-duplicate on task level
  138f9b3 [CodingCat] make MIMA happy
  67593d2 [CodingCat] make if allowing duplicate update as an option of accumulator
* [SPARK-4614][MLLIB] Slight API changes in Matrix and Matrices (Xiangrui Meng, 2014-11-26, 3 files, -11/+65)
  Before we have a full picture of the operators we want to add, it might be safer to hide `Matrix.transposeMultiply` in 1.2.0. Another change is to `Matrix.randn` and `Matrix.rand`, both of which should take a `Random` implementation; otherwise they are very likely to produce inconsistent RDDs. I also added some unit tests for the matrix factory methods. All APIs are new in 1.2, so there are no incompatible changes. brkyvz

  Author: Xiangrui Meng <meng@databricks.com>
  Closes #3468 from mengxr/SPARK-4614 and squashes the following commits:
  3b0e4e2 [Xiangrui Meng] add mima excludes
  6bfd8a4 [Xiangrui Meng] hide transposeMultiply; add rng to rand and randn; add unit tests
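  A usage sketch reflecting the change described above (signatures assumed from the commit's description): the factory methods take an explicit java.util.Random, so a fixed seed gives reproducible matrices and re-evaluation cannot silently produce different values.

  ```scala
  import java.util.Random
  import org.apache.spark.mllib.linalg.Matrices

  val rng = new Random(42L)
  val u = Matrices.rand(3, 2, rng)   // 3x2 matrix with U(0,1) entries
  val n = Matrices.randn(3, 2, rng)  // 3x2 matrix with N(0,1) entries
  ```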
* Removing confusing TripletFields (Joseph E. Gonzalez, 2014-11-26, 4 files, -33/+8)
  After additional discussion with rxin, I think having all the possible `TripletFields` options is confusing. This pull request reduces the triplet fields to:

  ```java
  /**
   * None of the triplet fields are exposed.
   */
  public static final TripletFields None = new TripletFields(false, false, false);

  /**
   * Expose only the edge field and not the source or destination field.
   */
  public static final TripletFields EdgeOnly = new TripletFields(false, false, true);

  /**
   * Expose the source and edge fields but not the destination field. (Same as Src)
   */
  public static final TripletFields Src = new TripletFields(true, false, true);

  /**
   * Expose the destination and edge fields but not the source field. (Same as Dst)
   */
  public static final TripletFields Dst = new TripletFields(false, true, true);

  /**
   * Expose all the fields (source, edge, and destination).
   */
  public static final TripletFields All = new TripletFields(true, true, true);
  ```

  Author: Joseph E. Gonzalez <joseph.e.gonzalez@gmail.com>
  Closes #3472 from jegonzal/SimplifyTripletFields and squashes the following commits:
  91796b5 [Joseph E. Gonzalez] removing confusing triplet fields
* [SPARK-4612] Reduce task latency and increase scheduling throughput by making configuration initialization lazy (Tathagata Das, 2014-11-25, 1 file, -1/+1)
  https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L337 creates a configuration object for every task that is launched, even if there is no new dependent file/JAR to update. This is a heavyweight creation that should be avoided if there is no new file/JAR to update, so this PR makes that creation lazy.

  A quick local test with the spark-perf scheduling throughput tests gives the following numbers in local standalone scheduler mode, for 1 job with 10000 tasks: before 7.8395 seconds, after 2.6415 seconds = 3x increase in task scheduling throughput. pwendell JoshRosen

  Author: Tathagata Das <tathagata.das1565@gmail.com>
  Closes #3463 from tdas/lazy-config and squashes the following commits:
  c791c1e [Tathagata Das] Reduce task latency by making configuration initialization lazy
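  A sketch of the pattern (names hypothetical, not the Executor source): a Scala lazy val defers the heavyweight configuration construction until a task actually needs it, rather than building one for every launched task.

  ```scala
  class TaskRunnerSketch(newFiles: Map[String, Long]) {
    lazy val config: Map[String, String] = {
      println("building configuration")  // runs at most once, on first access
      Map("fs.defaultFS" -> "hdfs://namenode:8020")
    }

    def updateDependencies(): Unit = {
      if (newFiles.nonEmpty) {
        val c = config  // configuration materialized only when there is work to do
        // ... download newFiles using c ...
      }
    }
  }
  ```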
* [SPARK-4516] Avoid allocating Netty PooledByteBufAllocators unnecessarily (Aaron Davidson, 2014-11-26, 2 files, -10/+8)
  It turns out we are allocating an allocator pool for every TransportClient (which means the number increases with the number of nodes in the cluster), when really we should just reuse one for all clients. This patch, as expected, greatly decreases off-heap memory allocation, and appears to make allocation proportional only to the number of cores.

  Author: Aaron Davidson <aaron@databricks.com>
  Closes #3465 from aarondav/fewer-pools and squashes the following commits:
  36c49da [Aaron Davidson] [SPARK-4516] Avoid allocating unnecessarily Netty PooledByteBufAllocators
* [SPARK-4516] Cap default number of Netty threads at 8 (Aaron Davidson, 2014-11-25, 1 file, -7/+37)
  In practice, only 2-4 cores should be required to transfer roughly 10 Gb/s, and each core we use has an initial overhead of roughly 32 MB of off-heap memory, which comes at a premium. Thus, this value should still retain maximum throughput while reducing wasted off-heap memory allocation. It can be overridden by setting the number of serverThreads and clientThreads manually in Spark's configuration.

  Author: Aaron Davidson <aaron@databricks.com>
  Closes #3469 from aarondav/fewer-pools2 and squashes the following commits:
  087c59f [Aaron Davidson] [SPARK-4516] Cap default number of Netty threads at 8
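  A sketch of the default described above (helper name hypothetical): use the number of available cores, but never more than 8 threads, unless the user has explicitly configured a thread count.

  ```scala
  def defaultNumThreads(configuredThreads: Int): Int = {
    val maxDefault = 8
    if (configuredThreads > 0) configuredThreads  // explicit setting wins
    else math.min(Runtime.getRuntime.availableProcessors(), maxDefault)
  }
  ```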
* [SPARK-4604][MLLIB] Make MatrixFactorizationModel public (Xiangrui Meng, 2014-11-25, 2 files, -3/+81)
  Users can now construct an MF model directly. I added a note about the performance.

  Author: Xiangrui Meng <meng@databricks.com>
  Closes #3459 from mengxr/SPARK-4604 and squashes the following commits:
  f64bcd3 [Xiangrui Meng] organize imports
  ed08214 [Xiangrui Meng] check preconditions and unit tests
  a624c12 [Xiangrui Meng] make MatrixFactorizationModel public
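  A hedged sketch of direct construction now that the class is public, assuming `userFeatures` and `productFeatures` are RDDs of (id, featureArray) pairs whose arrays all have length `rank` (`sc` assumed in scope; data values are made up):

  ```scala
  import org.apache.spark.mllib.recommendation.MatrixFactorizationModel
  import org.apache.spark.rdd.RDD

  val rank = 10
  val userFeatures: RDD[(Int, Array[Double])] =
    sc.parallelize(Seq((1, Array.fill(rank)(0.1))))
  val productFeatures: RDD[(Int, Array[Double])] =
    sc.parallelize(Seq((100, Array.fill(rank)(0.2))))

  val model = new MatrixFactorizationModel(rank, userFeatures, productFeatures)
  val score = model.predict(1, 100)  // dot product of the two feature vectors
  ```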
* [HOTFIX]: Adding back without-hive dist (Patrick Wendell, 2014-11-25, 1 file, -0/+1)
* [SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates (Joseph K. Bradley, 2014-11-25, 6 files, -72/+146)
  Currently, the LogLoss used by GradientBoostedTrees has two issues:
  * the gradient (and therefore the loss) does not match that used by Friedman (1999)
  * the error computation uses 0/1 accuracy, not log loss

  This PR updates LogLoss. It also adds some doc for boosting and forests. I tested it on sample data and made sure the log loss is monotonically decreasing with each boosting iteration.

  CC: mengxr manishamde codedeft

  Author: Joseph K. Bradley <joseph@databricks.com>
  Closes #3439 from jkbradley/gbt-loss-fix and squashes the following commits:
  cfec17e [Joseph K. Bradley] removed forgotten temp comments
  a27eb6d [Joseph K. Bradley] corrections to last log loss commit
  ed5da2c [Joseph K. Bradley] updated LogLoss (boosting) for numerical stability
  5e52bff [Joseph K. Bradley] * Removed the 1/2 from SquaredError. This also required updating the test suite since it effectively doubles the gradient and loss. * Added doc for developers within RandomForest. * Small cleanup in test suite (generating data only once)
  e57897a [Joseph K. Bradley] Fixed LogLoss for GradientBoostedTrees, and updated doc for losses, forests, and boosting
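  A hedged sketch of the corrected loss for labels encoded as +1/-1 (reconstructed from Friedman 1999 and the commit's note about numerical stability; not copied from the MLlib source):

  ```scala
  // Log loss: 2 * log(1 + exp(-2 * y * F(x))), using log1p for stability.
  def logLoss(label: Double, prediction: Double): Double =
    2.0 * math.log1p(math.exp(-2.0 * label * prediction))

  // Its gradient with respect to the prediction; the negative gradient is the
  // per-iteration fitting target in gradient boosting.
  def gradient(label: Double, prediction: Double): Double =
    -4.0 * label / (1.0 + math.exp(2.0 * label * prediction))
  ```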