Commit message  Author  Age  Files  Lines
* [SPARK-4422][MLLIB] In some cases, Vectors.fromBreeze gets wrong results.GuoQiang Li2014-11-162-1/+8
| | | | | | | | | | | | | | | cc mengxr Author: GuoQiang Li <witgo@qq.com> Closes #3281 from witgo/SPARK-4422 and squashes the following commits: 5f1fa5e [GuoQiang Li] import order 50783bd [GuoQiang Li] review commits 7a10123 [GuoQiang Li] In some cases, Vectors.fromBreeze get wrong results. (cherry picked from commit 5168c6ca9f0008027d688661bae57c28cf386b54) Signed-off-by: Xiangrui Meng <meng@databricks.com>
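A minimal sketch of the kind of fix this describes, assuming the wrong results come from Breeze vectors that are views over a larger backing array (non-zero offset, stride other than 1, or a logical length shorter than the array); such vectors must be copied rather than sharing their `data` array:

```scala
import breeze.linalg.{DenseVector => BDV}
import org.apache.spark.mllib.linalg.{DenseVector, Vector}

// Hedged sketch, not necessarily the exact patch: only reuse the backing array
// when it lines up exactly with the logical vector; otherwise copy via toArray,
// which honors offset and stride.
def fromBreezeDense(v: BDV[Double]): Vector = {
  if (v.offset == 0 && v.stride == 1 && v.length == v.data.length) {
    new DenseVector(v.data)    // safe to share the backing array
  } else {
    new DenseVector(v.toArray) // copy out only the logical elements
  }
}
```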
* Preparing development version 1.2.1-SNAPSHOTUbuntu2014-11-1730-61/+61
|
* Preparing Spark release v1.2.0-snapshot0Ubuntu2014-11-1730-30/+30
|
* Revert "[SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, ↵Michael Armbrust2014-11-164-142/+115
| | | | | | | | | | | | | and fixes for complex types" Author: Michael Armbrust <michael@databricks.com> Closes #3292 from marmbrus/revert4309 and squashes the following commits: 808e96e [Michael Armbrust] Revert "[SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types" (cherry picked from commit 45ce3273cb618d14ec4d20c4c95699634b951086) Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes ↵Cheng Lian2014-11-164-115/+142
| | | | | | | | | | | | | | | | | | | | | | for complex types SPARK-4407 was detected while working on SPARK-4309. Merged these two into a single PR since 1.2.0 RC is approaching. Author: Cheng Lian <lian@databricks.com> Closes #3178 from liancheng/date-for-thriftserver and squashes the following commits: 6f71d0b [Cheng Lian] Makes toHiveString static 26fa955 [Cheng Lian] Fixes complex type support in Hive 0.13.1 shim a92882a [Cheng Lian] Updates HiveShim for 0.13.1 73f442b [Cheng Lian] Adds Date support for HiveThriftServer2 (Hive 0.12.0) (cherry picked from commit cb6bd83a91d9b4a227dc6467255231869c1820e2) Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4393] Fix memory leak in ConnectionManager ACK timeout TimerTasks; ↵Josh Rosen2014-11-161-12/+35
| | | | | | | | | | | | | | | | | | | | | | | use HashedWheelTimer This patch is intended to fix a subtle memory leak in ConnectionManager's ACK timeout TimerTasks: in the old code, each TimerTask held a reference to the message being sent and a cancelled TimerTask won't necessarily be garbage-collected until it's scheduled to run, so this caused huge buildups of messages that weren't garbage collected until their timeouts expired, leading to OOMs. This patch addresses this problem by capturing only the message ID in the TimerTask instead of the whole message, and by keeping a WeakReference to the promise in the TimerTask. I've also modified this code to use Netty's HashedWheelTimer, whose performance characteristics should be better for this use-case. Thanks to cristianopris for narrowing down this issue! Author: Josh Rosen <joshrosen@databricks.com> Closes #3259 from JoshRosen/connection-manager-timeout-bugfix and squashes the following commits: afcc8d6 [Josh Rosen] Address rxin's review feedback. 2a2e92d [Josh Rosen] Keep only WeakReference to promise in TimerTask; 0f0913b [Josh Rosen] Spelling fix: timout => timeout 3200c33 [Josh Rosen] Use Netty HashedWheelTimer f847dd4 [Josh Rosen] Don't capture entire message in ACK timeout task. (cherry picked from commit 7850e0c707affd5eafd570fb43716753396cf479) Signed-off-by: Reynold Xin <rxin@databricks.com>
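The pattern described above can be sketched as follows (illustrative names, not the actual ConnectionManager code): the timeout task captures only the message id and a `WeakReference` to the promise, so a cancelled or late task cannot keep the full message alive, and tasks are scheduled on Netty's `HashedWheelTimer`:

```scala
import java.lang.ref.WeakReference
import java.util.concurrent.TimeUnit
import io.netty.util.{HashedWheelTimer, Timeout, TimerTask}
import scala.concurrent.Promise

val ackTimer = new HashedWheelTimer()

// Schedule an ACK timeout without capturing the message itself.
def scheduleAckTimeout(messageId: Int, promise: Promise[Unit], timeoutMs: Long): Timeout = {
  val promiseRef = new WeakReference(promise)
  ackTimer.newTimeout(new TimerTask {
    def run(timeout: Timeout): Unit = {
      // If the promise is still reachable, fail it; otherwise do nothing.
      Option(promiseRef.get).foreach { p =>
        p.tryFailure(new java.io.IOException(s"ACK for message $messageId timed out"))
      }
    }
  }, timeoutMs, TimeUnit.MILLISECONDS)
}
```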
* [SPARK-4426][SQL][Minor] The symbol of BitwiseOr is wrong, should not be '&'Kousuke Saruta2014-11-151-1/+1
| | | | | | | | | | | | | The symbol of BitwiseOr is defined as '&' but I think it's wrong. It should be '|'. Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp> Closes #3284 from sarutak/bitwise-or-symbol-fix and squashes the following commits: aff4be5 [Kousuke Saruta] Fixed symbol of BitwiseOr (cherry picked from commit 84468b2e2031d646dcf035cb18947170ba326ccd) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-4419] Upgrade snappy-java to 1.1.1.6Josh Rosen2014-11-151-1/+1
| | | | | | | | | | | | | | | This upgrades snappy-java to 1.1.1.6, which includes a patch that improves error messages when attempting to deserialize empty inputs using SnappyInputStream (see xerial/snappy-java#89). We previously tried to upgrade to 1.1.1.5 in #2911 but reverted that patch after discovering a memory leak in snappy-java. This leak should have been fixed in 1.1.1.6, though (see xerial/snappy-java#92). Author: Josh Rosen <joshrosen@databricks.com> Closes #3287 from JoshRosen/SPARK-4419 and squashes the following commits: 5d6f4cc [Josh Rosen] [SPARK-4419] Upgrade snappy-java to 1.1.1.6. (cherry picked from commit 7d8e152eecc7e822b7b1e40b791267a8911e01cf) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-2321] Several progress API improvements / refactoringsJosh Rosen2014-11-147-172/+269
| | | | | | | | | | | | | | | | | | | | | | | This PR refactors / extends the status API introduced in #2696. - Change StatusAPI from a mixin trait to a class. Before, the new status API methods were directly accessible through SparkContext, whereas now they're accessed through a `sc.statusAPI` field. As long as we were going to add these methods directly to SparkContext, the mixin trait seemed like a good idea, but this might be simpler to reason about and may avoid pitfalls that I've run into while attempting to refactor other parts of SparkContext to use mixins (see #3071, for example). - Change the name from SparkStatusAPI to SparkStatusTracker. - Make `getJobIdsForGroup(null)` return ids for jobs that aren't associated with any job group. - Add `getActiveStageIds()` and `getActiveJobIds()` methods that return the ids of whatever's currently active in this SparkContext. This should simplify davies's progress bar code. Author: Josh Rosen <joshrosen@databricks.com> Closes #3197 from JoshRosen/progress-api-improvements and squashes the following commits: 30b0afa [Josh Rosen] Rename SparkStatusAPI to SparkStatusTracker. d1b08d8 [Josh Rosen] Add missing newlines 2cc7353 [Josh Rosen] Add missing file. d5eab1f [Josh Rosen] Add getActive[Stage|Job]Ids() methods. a227984 [Josh Rosen] getJobIdsForGroup(null) should return jobs for default group c47e294 [Josh Rosen] Remove StatusAPI mixin trait. (cherry picked from commit 40eb8b6ef3a67e36d0d9492c044981a1da76351d) Signed-off-by: Reynold Xin <rxin@databricks.com>
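A hedged usage sketch of the tracker described above, assuming a live `SparkContext` named `sc` (the accessor name on SparkContext may differ slightly between this commit and the released API):

```scala
// Poll high-level progress information without touching the listener bus.
val tracker = sc.statusTracker
val activeJobIds = tracker.getActiveJobIds()          // jobs currently running
val activeStageIds = tracker.getActiveStageIds()      // stages currently running
val ungroupedJobIds = tracker.getJobIdsForGroup(null) // jobs not in any job group
```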
* Added contains(key) to Metadatakai2014-11-142-0/+16
| | | | | | | | | | | | | | Add contains(key) to org.apache.spark.sql.catalyst.util.Metadata to test the existence of a key. Otherwise, the Metadata class's get methods may throw a NoSuchElementException if the key does not exist. Test cases are added to MetadataSuite as well. Author: kai <kaizeng@eecs.berkeley.edu> Closes #3273 from kai-zeng/metadata-fix and squashes the following commits: 74b3d03 [kai] Added contains(key) to Metadata (cherry picked from commit cbddac23696d89b672dce380cc7360a873e27b3b) Signed-off-by: Reynold Xin <rxin@databricks.com>
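A small usage sketch of the new method, assuming an existing `Metadata` instance named `metadata` (the key name is purely illustrative):

```scala
// Guard the getter so it cannot throw NoSuchElementException for absent keys.
if (metadata.contains("maxLength")) {
  val maxLength = metadata.getLong("maxLength")
  println(s"maxLength = $maxLength")
}
```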
* [SPARK-4260] HttpBroadcast should set connection timeout.Kousuke Saruta2014-11-141-0/+2
| | | | | | | | | | | | | HttpBroadcast sets a read timeout but doesn't set a connection timeout. Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp> Closes #3122 from sarutak/httpbroadcast-timeout and squashes the following commits: c7f3a56 [Kousuke Saruta] Added Connection timeout for Http Connection to HttpBroadcast.scala (cherry picked from commit 60969b0336930449a826821a48f83f65337e8856) Signed-off-by: Reynold Xin <rxin@databricks.com>
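An illustrative sketch of the change, assuming the broadcast block is fetched over a plain `java.net.URLConnection` (the URL and timeout values are placeholders):

```scala
import java.net.URL

val connection = new URL("http://driver-host:8080/broadcast_0").openConnection()
connection.setConnectTimeout(60000) // the previously missing connection timeout
connection.setReadTimeout(60000)    // the read timeout that was already set
```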
* [SPARK-4363][Doc] Update the Broadcast examplezsxwing2014-11-142-2/+2
| | | | | | | | | | | Author: zsxwing <zsxwing@gmail.com> Closes #3226 from zsxwing/SPARK-4363 and squashes the following commits: 8109914 [zsxwing] Update the Broadcast example (cherry picked from commit 861223ee5bea8e434a9ebb0d53f436ce23809f9c) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-4379][Core] Change Exception to SparkException in checkpointzsxwing2014-11-141-1/+1
| | | | | | | | | | | | | It's better to change to SparkException. However, it's a breaking change since it will change the exception type. Author: zsxwing <zsxwing@gmail.com> Closes #3241 from zsxwing/SPARK-4379 and squashes the following commits: 409f3af [zsxwing] Change Exception to SparkException in checkpoint (cherry picked from commit dba14058230194122a715c219e35ab8eaa786321) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-4415] [PySpark] JVM should exit after Python exitDavies Liu2014-11-144-9/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When JVM is started in a Python process, it should exit once the stdin is closed. test: add spark.driver.memory in conf/spark-defaults.conf ``` daviesdm:~/work/spark$ cat conf/spark-defaults.conf spark.driver.memory 8g daviesdm:~/work/spark$ bin/pyspark >>> quit daviesdm:~/work/spark$ jps 4931 Jps 286 daviesdm:~/work/spark$ python wc.py 943738 0.719928026199 daviesdm:~/work/spark$ jps 286 4990 Jps ``` Author: Davies Liu <davies@databricks.com> Closes #3274 from davies/exit and squashes the following commits: df0e524 [Davies Liu] address comments ce8599c [Davies Liu] address comments 050651f [Davies Liu] JVM should exit after Python exit (cherry picked from commit 7fe08b43c78bf9e8515f671e72aa03a83ea782f8) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4404]SparkSubmitDriverBootstrapper should stop after its SparkSubmit ↵WangTao2014-11-141-0/+10
| | | | | | | | | | | | | | | | | | | | | | sub-process ends https://issues.apache.org/jira/browse/SPARK-4404 When we have spark.driver.extra* or spark.driver.memory in SPARK_SUBMIT_PROPERTIES_FILE, spark-class will use SparkSubmitDriverBootstrapper to launch the driver. If we get the process id of SparkSubmitDriverBootstrapper and want to kill it while it is running, we expect its SparkSubmit sub-process to stop as well. Author: WangTao <barneystinson@aliyun.com> Author: WangTaoTheTonic <barneystinson@aliyun.com> Closes #3266 from WangTaoTheTonic/killsubmit and squashes the following commits: e03eba5 [WangTaoTheTonic] add comments 57b5ca1 [WangTao] SparkSubmitDriverBootstrapper should stop after its SparkSubmit sub-process ends (cherry picked from commit 303a4e4d23e5cd93b541480cf88d5badb9cf9622) Signed-off-by: Andrew Or <andrew@databricks.com>
* SPARK-4214. With dynamic allocation, avoid outstanding requests for more...Sandy Ryza2014-11-142-9/+94
| | | | | | | | | | | | | | | | | | | ... executors than pending tasks need. WIP. Still need to add and fix tests. Author: Sandy Ryza <sandy@cloudera.com> Closes #3204 from sryza/sandy-spark-4214 and squashes the following commits: 35cf0e0 [Sandy Ryza] Add comment 13b53df [Sandy Ryza] Review feedback 067465f [Sandy Ryza] Whitespace fix 6ae080c [Sandy Ryza] Add tests and get num pending tasks from ExecutorAllocationListener 531e2b6 [Sandy Ryza] SPARK-4214. With dynamic allocation, avoid outstanding requests for more executors than pending tasks need. (cherry picked from commit ad42b283246b93654c5fd731cd618fee74d8c4da) Signed-off-by: Andrew Or <andrew@databricks.com>
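A sketch of the cap this describes, with illustrative names: the number of executors requested is bounded by what the queued and running tasks could actually use.

```scala
// Ceiling division: enough executors to run every pending and running task,
// and no more than that.
def maxExecutorsNeeded(pendingTasks: Int, runningTasks: Int, tasksPerExecutor: Int): Int =
  (pendingTasks + runningTasks + tasksPerExecutor - 1) / tasksPerExecutor
```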
* [SPARK-4412][SQL] Fix Spark's control of Parquet logging.Jim Carroll2014-11-141-0/+15
| | | | | | | | | | | | | | | | | The Spark ParquetRelation.scala code makes the assumption that the parquet.Log class has already been loaded. If ParquetRelation.enableLogForwarding executes prior to the parquet.Log class being loaded then the code in enableLogForwarding has no effect. ParquetRelation.scala attempts to override the parquet logger but, at least currently (and if your application simply reads a parquet file before it does anything else with Parquet), the parquet.Log class hasn't been loaded yet. Therefore the code in ParquetRelation.enableLogForwarding has no effect. If you look at the code in parquet.Log there's a static initializer that needs to be called prior to enableLogForwarding, or whatever enableLogForwarding does gets undone by this static initializer. The "fix" would be to force the static initializer to get called in parquet.Log as part of enableLogForwarding. Author: Jim Carroll <jim@dontcallme.com> Closes #3271 from jimfcarroll/parquet-logging and squashes the following commits: 37bdff7 [Jim Carroll] Fix Spark's control of Parquet logging. (cherry picked from commit 37482ce5a7b875f17d32a5e8c561cc8e9772c9b3) Signed-off-by: Michael Armbrust <michael@databricks.com>
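A sketch of the technique, not the exact patch: referencing the class forces its static initializer to run before the JUL loggers it installs are reconfigured.

```scala
import java.util.logging.Logger

def enableLogForwarding(): Unit = {
  Class.forName("parquet.Log") // trigger parquet.Log's static initializer first
  val parquetLogger = Logger.getLogger("parquet")
  parquetLogger.getHandlers.foreach(parquetLogger.removeHandler)
  parquetLogger.setUseParentHandlers(true)
}
```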
* [SPARK-4365][SQL] Remove unnecessary filter call on records returned from ↵Yash Datta2014-11-141-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | parquet library Since the parquet library has been updated, we no longer need to filter the records returned from the parquet library for null records, as the library now skips those (from parquet-hadoop/src/main/java/parquet/hadoop/InternalParquetRecordReader.java):
```java
public boolean nextKeyValue() throws IOException, InterruptedException {
  boolean recordFound = false;
  while (!recordFound) {
    // no more records left
    if (current >= total) { return false; }
    try {
      checkRead();
      currentValue = recordReader.read();
      current++;
      if (recordReader.shouldSkipCurrentRecord()) {
        // this record is being filtered via the filter2 package
        if (DEBUG) LOG.debug("skipping record");
        continue;
      }
      if (currentValue == null) {
        // only happens with FilteredRecordReader at end of block
        current = totalCountLoadedSoFar;
        if (DEBUG) LOG.debug("filtered record reader reached end of block");
        continue;
      }
      recordFound = true;
      if (DEBUG) LOG.debug("read value: " + currentValue);
    } catch (RuntimeException e) {
      throw new ParquetDecodingException(format("Can not read value at %d in block %d in file %s", current, currentBlock, file), e);
    }
  }
  return true;
}
```
Author: Yash Datta <Yash.Datta@guavus.com> Closes #3229 from saucam/remove_filter and squashes the following commits: 8909ae9 [Yash Datta] SPARK-4365: Remove unnecessary filter call on records returned from parquet library (cherry picked from commit 63ca3af66f9680fd12adee82fb4d342caae5cea4) Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4386] Improve performance when writing Parquet files.Jim Carroll2014-11-141-6/+8
| | | | | | | | | | | | | | | If you profile the writing of a Parquet file, the single worst time consuming call inside of org.apache.spark.sql.parquet.MutableRowWriteSupport.write is actually in the scala.collection.AbstractSequence.size call. This is because the size call actually ends up COUNTING the elements in a scala.collection.LinearSeqOptimized.length ("optimized?"). This doesn't need to be done. "size" is called repeatedly where needed rather than called once at the top of the method and stored in a 'val'. Author: Jim Carroll <jim@dontcallme.com> Closes #3254 from jimfcarroll/parquet-perf and squashes the following commits: 30cc0b5 [Jim Carroll] Improve performance when writing Parquet files. (cherry picked from commit f76b9683706232c3d4e8e6e61627b8188dcb79dc) Signed-off-by: Michael Armbrust <michael@databricks.com>
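An illustration of the hoisting this describes (`writeField` is a hypothetical helper): on a linear `Seq`, `size` walks the whole collection, so it should be computed once rather than on every loop test.

```scala
def writeField(value: Any): Unit = () // placeholder so the sketch is self-contained

def writeRow(row: Seq[Any]): Unit = {
  val size = row.size  // computed once
  var i = 0
  while (i < size) {   // instead of re-evaluating `row.size` on each iteration
    writeField(row(i))
    i += 1
  }
}
```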
* [SPARK-4322][SQL] Enables struct fields as sub expressions of grouping fieldsCheng Lian2014-11-143-20/+34
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | While resolving struct fields, the resulting `GetField` expression is wrapped with an `Alias` to make it a named expression. Assume `a` is a struct instance with a field `b`; then `"a.b"` will be resolved as `Alias(GetField(a, "b"), "b")`. Thus, for the following SQL query: ```sql SELECT a.b + 1 FROM t GROUP BY a.b + 1 ``` the grouping expression is ```scala Add(GetField(a, "b"), Literal(1, IntegerType)) ``` while the aggregation expression is ```scala Add(Alias(GetField(a, "b"), "b"), Literal(1, IntegerType)) ``` This mismatch makes the above SQL query fail during both the analysis and execution phases. This PR fixes the issue by removing the alias when substituting aggregation expressions. Author: Cheng Lian <lian@databricks.com> Closes #3248 from liancheng/spark-4322 and squashes the following commits: 23a46ea [Cheng Lian] Code simplification dd20a79 [Cheng Lian] Should only trim aliases around `GetField`s 7f46532 [Cheng Lian] Enables struct fields as sub expressions of grouping fields (cherry picked from commit 0c7b66bd449093bb5d2dafaf91d54e63e601e320) Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Don't shuffle code generated rowsMichael Armbrust2014-11-142-2/+9
| | | | | | | | | | | | | When sort based shuffle and code gen are on we were trying to ship the code generated rows during a shuffle. This doesn't work because the classes don't exist on the other side. Instead we now copy into a generic row before shipping. Author: Michael Armbrust <michael@databricks.com> Closes #3263 from marmbrus/aggCodeGen and squashes the following commits: f6ba8cf [Michael Armbrust] fix and test (cherry picked from commit 4b4b50c9e596673c1534df97effad50d107a8007) Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Minor cleanup of comments, errors and override.Michael Armbrust2014-11-143-10/+12
| | | | | | | | | | | | | | Author: Michael Armbrust <michael@databricks.com> Closes #3257 from marmbrus/minorCleanup and squashes the following commits: d8b5abc [Michael Armbrust] Use interpolation. 2fdf903 [Michael Armbrust] Better error message when coalesce can't be resolved. f9fa6cf [Michael Armbrust] Methods in a final class do not also need to be final, use override. 199fd98 [Michael Armbrust] Fix typo (cherry picked from commit f805025e8efe9cd522e8875141ec27df8d16bbe0) Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4391][SQL] Configure parquet filters using SQLConfMichael Armbrust2014-11-145-11/+21
| | | | | | | | | | | | | | | | | This is more uniform with the rest of SQL configuration and allows it to be turned on and off without restarting the SparkContext. In this PR I also turn off filter pushdown by default due to a number of outstanding issues (in particular SPARK-4258). When those are fixed we should turn it back on by default. Author: Michael Armbrust <michael@databricks.com> Closes #3258 from marmbrus/parquetFilters and squashes the following commits: 5655bfe [Michael Armbrust] Remove extra line. 15e9a98 [Michael Armbrust] Enable filters for tests 75afd39 [Michael Armbrust] Fix comments 78fa02d [Michael Armbrust] off by default e7f9e16 [Michael Armbrust] First draft of correctly configuring parquet filter pushdown (cherry picked from commit e47c38763914aaf89a7a851c5f41b7549a75615b) Signed-off-by: Michael Armbrust <michael@databricks.com>
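A hedged usage sketch, assuming the configuration key introduced here is `spark.sql.parquet.filterPushdown` (check SQLConf for the exact name) and an existing `sqlContext`; the point is that the setting can be flipped at runtime without restarting the SparkContext:

```scala
sqlContext.setConf("spark.sql.parquet.filterPushdown", "true")
// ... run Parquet queries with pushdown enabled ...
sqlContext.setConf("spark.sql.parquet.filterPushdown", "false")
```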
* [SPARK-4390][SQL] Handle NaN cast to decimal correctlyMichael Armbrust2014-11-143-1/+9
| | | | | | | | | | | | Author: Michael Armbrust <michael@databricks.com> Closes #3256 from marmbrus/NanDecimal and squashes the following commits: 4c3ba46 [Michael Armbrust] fix style d360f83 [Michael Armbrust] Handle NaN cast to decimal (cherry picked from commit a0300ea32a9d92bd51c72930bc3979087b0082b2) Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4062][Streaming]Add ReliableKafkaReceiver in Spark Streaming Kafka ↵jerryshao2014-11-1410-143/+651
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | connector Add ReliableKafkaReceiver in Kafka connector to prevent data loss if WAL in Spark Streaming is enabled. Details and design doc can be seen in [SPARK-4062](https://issues.apache.org/jira/browse/SPARK-4062). Author: jerryshao <saisai.shao@intel.com> Author: Tathagata Das <tathagata.das1565@gmail.com> Author: Saisai Shao <saisai.shao@intel.com> Closes #2991 from jerryshao/kafka-refactor and squashes the following commits: 5461f1c [Saisai Shao] Merge pull request #8 from tdas/kafka-refactor3 eae4ad6 [Tathagata Das] Refectored KafkaStreamSuiteBased to eliminate KafkaTestUtils and made Java more robust. fab14c7 [Tathagata Das] minor update. 149948b [Tathagata Das] Fixed mistake 14630aa [Tathagata Das] Minor updates. d9a452c [Tathagata Das] Minor updates. ec2e95e [Tathagata Das] Removed the receiver's locks and essentially reverted to Saisai's original design. 2a20a01 [jerryshao] Address some comments 9f636b3 [Saisai Shao] Merge pull request #5 from tdas/kafka-refactor b2b2f84 [Tathagata Das] Refactored Kafka receiver logic and Kafka testsuites e501b3c [jerryshao] Add Mima excludes b798535 [jerryshao] Fix the missed issue e5e21c1 [jerryshao] Change to while loop ea873e4 [jerryshao] Further address the comments 98f3d07 [jerryshao] Fix comment style 4854ee9 [jerryshao] Address all the comments 96c7a1d [jerryshao] Update the ReliableKafkaReceiver unit test 8135d31 [jerryshao] Fix flaky test a949741 [jerryshao] Address the comments 16bfe78 [jerryshao] Change the ordering of imports 0894aef [jerryshao] Add some comments 77c3e50 [jerryshao] Code refactor and add some unit tests dd9aeeb [jerryshao] Initial commit for reliable Kafka receiver (cherry picked from commit 5930f64bf0d2516304b21bd49eac361a54caabdd) Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
* [SPARK-4333][SQL] Correctly log number of iterations in RuleExecutorDoingDone92014-11-141-1/+2
| | | | | | | | | | | | | | When the iteration loop of RuleExecutor breaks early, the number of iterations logged should be (iteration - 1), not (iteration), because the log reads "Fixed point reached for batch ${batch.name} after 3 iterations." when it really did only 2 iterations! Author: DoingDone9 <799203320@qq.com> Closes #3180 from DoingDone9/issue_01 and squashes the following commits: 571e2ed [DoingDone9] Update RuleExecutor.scala 46514b6 [DoingDone9] When iterator of RuleExecutor breaks, the num of iterator should be iteration - 1 not iteration. (cherry picked from commit 0cbdb01e1c817e71c4f80de05c4e5bb11510b368) Signed-off-by: Michael Armbrust <michael@databricks.com>
* SPARK-4375. no longer require -Pscala-2.10Sandy Ryza2014-11-147-171/+54
| | | | | | | | | | | | | | | It seems like the winds might have moved away from this approach, but wanted to post the PR anyway because I got it working and to show what it would look like. Author: Sandy Ryza <sandy@cloudera.com> Closes #3239 from sryza/sandy-spark-4375 and squashes the following commits: 0ffbe95 [Sandy Ryza] Enable -Dscala-2.11 in sbt cd42d94 [Sandy Ryza] Update doc f6644c3 [Sandy Ryza] SPARK-4375 take 2 (cherry picked from commit f5f757e4ed80759dc5668c63d5663651689f8da8) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* [SPARK-4245][SQL] Fix containsNull of the result ArrayType of CreateArray ↵Takuya UESHIN2014-11-145-2/+106
| | | | | | | | | | | | | | | | | | | | | | expression. The `containsNull` of the result `ArrayType` of `CreateArray` should be `true` only if the list of children is empty or there exists a nullable child. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #3110 from ueshin/issues/SPARK-4245 and squashes the following commits: 6f64746 [Takuya UESHIN] Move equalsIgnoreNullability method into DataType. 5a90e02 [Takuya UESHIN] Refine InsertIntoHiveType and add some comments. cbecba8 [Takuya UESHIN] Fix a test title. 884ec37 [Takuya UESHIN] Merge branch 'master' into issues/SPARK-4245 3c5274b [Takuya UESHIN] Add tests to insert data of types ArrayType / MapType / StructType with nullability is false into Hive table. 41a94a9 [Takuya UESHIN] Replace InsertIntoTable with InsertIntoHiveTable if data types ignoring nullability are same. 43e6ef5 [Takuya UESHIN] Fix containsNull for empty array. 778e997 [Takuya UESHIN] Fix containsNull of the result ArrayType of CreateArray expression. (cherry picked from commit bbd8f5bee81d5788c356977c173dd1edc42c77a3) Signed-off-by: Michael Armbrust <michael@databricks.com>
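The rule can be sketched as follows, with a minimal stand-in for Catalyst's expression type (names simplified):

```scala
trait Expr { def nullable: Boolean }

// containsNull for CreateArray's result ArrayType: true when there are no
// children, or when at least one child may produce null.
def createArrayContainsNull(children: Seq[Expr]): Boolean =
  children.isEmpty || children.exists(_.nullable)
```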
* [SPARK-4239] [SQL] support view in HiveQlDaoyuan Wang2014-11-1442-17/+5098
| | | | | | | | | | | | | | | | | | | | | | | | | | | Currently we still do not support views like CREATE VIEW view3(valoo) TBLPROPERTIES ("fear" = "factor") AS SELECT upper(value) FROM src WHERE key=86; because the text stored in the metastore for this view is like select \`_c0\` as \`valoo\` from (select upper(\`src\`.\`value\`) from \`default\`.\`src\` where ...) \`view3\` and Catalyst cannot resolve \`_c0\` for this query. For views without a column-name definition in parentheses, it works fine. Author: Daoyuan Wang <daoyuan.wang@intel.com> Closes #3131 from adrian-wang/view and squashes the following commits: 8a56fd6 [Daoyuan Wang] michael's comments e46c056 [Daoyuan Wang] add some golden file 079290a [Daoyuan Wang] remove useless import 88afcad [Daoyuan Wang] support view in HiveQl (cherry picked from commit ade72c436276237f305d6a6aa4b594d43bcc4743) Signed-off-by: Michael Armbrust <michael@databricks.com>
* Update failed assert text to match code in SizeEstimatorSuiteJeff Hammerbacher2014-11-141-1/+1
| | | | | | | | | | | Author: Jeff Hammerbacher <jeff.hammerbacher@gmail.com> Closes #3242 from hammer/patch-1 and squashes the following commits: f88d635 [Jeff Hammerbacher] Update failed assert text to match code in SizeEstimatorSuite (cherry picked from commit c258db9ed4104b6eefe9f55f3e3959a3c46c2900) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4313][WebUI][Yarn] Fix link issue of the executor thread dump page in ↵zsxwing2014-11-143-3/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | yarn-cluster mode In yarn-cluster mode, the Web UI is running behind a yarn proxy server. Some features(or bugs?) of yarn proxy server will break the links for thread dump. 1. Yarn proxy server will do http redirect internally, so if opening `http://example.com:8088/cluster/app/application_1415344371838_0012/executors`, it will fetch `http://example.com:8088/cluster/app/application_1415344371838_0012/executors/` and return the content but won't change the link in the browser. Then when a user clicks `Thread Dump`, it will jump to `http://example.com:8088/proxy/application_1415344371838_0012/threadDump/?executorId=2`. This is a wrong link. The correct link should be `http://example.com:8088/proxy/application_1415344371838_0012/executors/threadDump/?executorId=2`. Adding "/" to the tab links will fix it. 2. Yarn proxy server has a bug about the URL encode/decode. When a user accesses `http://example.com:8088/proxy/application_1415344371838_0006/executors/threadDump/?executorId=%3Cdriver%3E`, the yarn proxy server will require `http://example.com:36429/executors/threadDump/?executorId=%25253Cdriver%25253E`. But Spark web server expects `http://example.com:36429/executors/threadDump/?executorId=%3Cdriver%3E`. Related to [YARN-2844](https://issues.apache.org/jira/browse/YARN-2844). For now, it's a tricky approach to bypass the yarn bug. ![threaddump](https://cloud.githubusercontent.com/assets/1000778/4972567/d1ccba64-68ad-11e4-983e-257530cef35a.png) Author: zsxwing <zsxwing@gmail.com> Closes #3183 from zsxwing/SPARK-4313 and squashes the following commits: 3379ca8 [zsxwing] Encode the executor id in the thread dump link and update the comment abfa063 [zsxwing] Fix link issue of the executor thread dump page in yarn-cluster mode (cherry picked from commit 156cf3333dcd93304eb5240f5a6466a3a0311957) Signed-off-by: Andrew Or <andrew@databricks.com>
* SPARK-3663 Document SPARK_LOG_DIR and SPARK_PID_DIRAndrew Ash2014-11-141-1/+8
| | | | | | | | | | | | | | | These descriptions are from the header of spark-daemon.sh Author: Andrew Ash <andrew@andrewash.com> Closes #2518 from ash211/SPARK-3663 and squashes the following commits: 058b257 [Andrew Ash] Complete hanging clause in SPARK_PID_DIR description a17cb4b [Andrew Ash] Update docs for default locations per SPARK-4110 af89096 [Andrew Ash] SPARK-3663 Document SPARK_LOG_DIR and SPARK_PID_DIR (cherry picked from commit 5c265ccde0c5594899ec61f9c1ea100ddff52da7) Signed-off-by: Andrew Or <andrew@databricks.com>
* [Spark Core] SPARK-4380 Edit spilling log from MB to BHong Shen2014-11-141-2/+3
| | | | | | | | | | | | | | | | | https://issues.apache.org/jira/browse/SPARK-4380 Author: Hong Shen <hongshen@tencent.com> Closes #3243 from shenh062326/spark_change and squashes the following commits: 4653378 [Hong Shen] Edit spilling log from MB to B 21ee960 [Hong Shen] Edit spilling log from MB to B e9145e8 [Hong Shen] Edit spilling log from MB to B da761c2 [Hong Shen] Edit spilling log from MB to B 946351c [Hong Shen] Edit spilling log from MB to B (cherry picked from commit 0c56a039a9c5b871422f0fc55ff4394bc077fb34) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4398][PySpark] specialize sc.parallelize(xrange)Xiangrui Meng2014-11-141-4/+21
| | | | | | | | | | | | | | | | `sc.parallelize(range(1 << 20), 1).count()` may take 15 seconds to finish and the rdd object stores the entire list, making task size very large. This PR adds a specialized version for xrange. JoshRosen davies Author: Xiangrui Meng <meng@databricks.com> Closes #3264 from mengxr/SPARK-4398 and squashes the following commits: 8953c41 [Xiangrui Meng] follow davies' suggestion cbd58e3 [Xiangrui Meng] specialize sc.parallelize(xrange) (cherry picked from commit abd581752f9314791a688690c07ad1bb68cc09fe) Signed-off-by: Xiangrui Meng <meng@databricks.com>
* Revert "[SPARK-2703][Core]Make Tachyon related unit tests execute without ↵Patrick Wendell2014-11-143-18/+2
| | | | | | deploying a Tachyon system locally." This reverts commit c127ff8c87fc4f3aa6f09697928832dc6d37cc0f.
* [SPARK-4394][SQL] Data Sources API ImprovementsMichael Armbrust2014-11-149-15/+32
| | | | | | | | | | | | | | | | | | This PR adds two features to the data sources API: - Support for pushing down `IN` filters - The ability for relations to optionally provide information about their `sizeInBytes`. Author: Michael Armbrust <michael@databricks.com> Closes #3260 from marmbrus/sourcesImprovements and squashes the following commits: 9a5e171 [Michael Armbrust] Use method instead of configuration directly 99c0e6b [Michael Armbrust] Add support for sizeInBytes. 416f167 [Michael Armbrust] Support for IN in data sources API. 2a04ab3 [Michael Armbrust] Simplify implementation of InSet. (cherry picked from commit 77e845ca7726ffee2d6f8e33ea56ec005dde3874) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-4310][WebUI] Sort 'Submitted' column in Stage page by timezsxwing2014-11-131-1/+3
| | | | | | | | | | | Author: zsxwing <zsxwing@gmail.com> Closes #3179 from zsxwing/SPARK-4310 and squashes the following commits: b0d29f5 [zsxwing] Sort 'Submitted' column in Stage page by time (cherry picked from commit 825709a0b8f9b4bfb2718ecca8efc32be96c5a57) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4372][MLLIB] Make LR and SVM's default parameters consistent in Scala ↵Xiangrui Meng2014-11-1310-79/+95
| | | | | | | | | | | | | | | | | | | | | and Python The current default regParam is 1.0 and regType is claimed to be none in Python (but actually it is l2), while regParam = 0.0 and regType is L2 in Scala. We should make the default values consistent. This PR sets the default regType to L2 and regParam to 0.01. Note that the default regParam value in LIBLINEAR (and hence scikit-learn) is 1.0. However, we use average loss instead of total loss in our formulation. Hence regParam=1.0 is definitely too heavy. In LinearRegression, we set regParam=0.0 and regType=None, because we have separate classes for Lasso and Ridge, both of which use regParam=0.01 as the default. davies atalwalkar Author: Xiangrui Meng <meng@databricks.com> Closes #3232 from mengxr/SPARK-4372 and squashes the following commits: 9979837 [Xiangrui Meng] update Ridge/Lasso to use default regParam 0.01 cast input arguments d3ba096 [Xiangrui Meng] change 'none' back to None 1909a6e [Xiangrui Meng] change default regParam to 0.01 and regType to L2 in LR and SVM (cherry picked from commit 32218307edc6de2b08d5f7a0db6d566081d27197) Signed-off-by: Xiangrui Meng <meng@databricks.com>
* [SPARK-4326] fix unidocXiangrui Meng2014-11-136-4/+6
| | | | | | | | | | | | | | | | | | There are two issues: 1. specifying guava 11.0.2 will cause hashInt not found in unidoc (any reason to force the version here?) 2. unidoc doesn't recognize static class defined in a base class aarondav srowen vanzin Author: Xiangrui Meng <meng@databricks.com> Closes #3253 from mengxr/SPARK-4326 and squashes the following commits: 53967bf [Xiangrui Meng] fix unidoc (cherry picked from commit 4b0c1edfdf457cde0e39083c47961184059efded) Signed-off-by: Aaron Davidson <aaron@databricks.com>
* [HOT FIX] make-distribution.sh fails if Yarn shuffle jar DNEAndrew Or2014-11-131-1/+3
| | | | | | | | | | | | | This is introduced in #3147 and is failing builds without the `-Pyarn` profile. Author: Andrew Or <andrew@databricks.com> Closes #3250 from andrewor14/fix-yarn-shuffle-build and squashes the following commits: 42b3d37 [Andrew Or] Do not fail fast if Yarn shuffle jar does not exist (cherry picked from commit a0fa1ba704355a82e168aa9c16ecfed30128ade0) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4378][MLLIB] make ALS more Java-friendlyXiangrui Meng2014-11-133-53/+53
| | | | | | | | | | | | | | Add Java-friendly version of `run` and `predict`, and use bulk prediction in Java unit tests. The user guide update will come later (though we may not save many lines of code there). srowen Author: Xiangrui Meng <meng@databricks.com> Closes #3240 from mengxr/SPARK-4378 and squashes the following commits: 6581503 [Xiangrui Meng] check number of predictions 6c8bbd1 [Xiangrui Meng] make ALS more Java-friendly (cherry picked from commit ca26a212fda39a15fde09dfdb2fbe69580a717f6) Signed-off-by: Xiangrui Meng <meng@databricks.com>
* [SPARK-4348] [PySpark] [MLlib] rename random.py to rand.pyDavies Liu2014-11-136-20/+38
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This PR renames random.py to rand.py to avoid the side effects of a conflict with the random module, but still keeps the same interface as before. ``` >>> from pyspark.mllib.random import RandomRDDs ``` ``` $ pydoc pyspark.mllib.random Help on module random in pyspark.mllib: NAME random - Python package for random data generation. FILE /Users/davies/work/spark/python/pyspark/mllib/rand.py CLASSES __builtin__.object pyspark.mllib.random.RandomRDDs class RandomRDDs(__builtin__.object) | Generator methods for creating RDDs comprised of i.i.d samples from | some distribution. | | Static methods defined here: | | normalRDD(sc, size, numPartitions=None, seed=None) ``` cc mengxr reference link: http://xion.org.pl/2012/05/06/hacking-python-imports/ Author: Davies Liu <davies@databricks.com> Closes #3216 from davies/random and squashes the following commits: 7ac4e8b [Davies Liu] rename random.py to rand.py (cherry picked from commit ce0333f9a008348692bb9a200449d2d992e7825e) Signed-off-by: Xiangrui Meng <meng@databricks.com>
* [SPARK-4256] Make Binary Evaluation Metrics functions defined in cases where ↵Andrew Bullen2014-11-122-27/+113
| | | | | | | | | | | | | | | | | | | there are 0 positive or 0 negative examples. Author: Andrew Bullen <andrew.bullen@workday.com> Closes #3118 from abull/master and squashes the following commits: c2bf2b1 [Andrew Bullen] [SPARK-4256] Update Code formatting for BinaryClassificationMetricsSpec 36b0533 [Andrew Bullen] [SYMAN-4256] Extract BinaryClassificationMetricsSuite assertions into private method 4d2f79a [Andrew Bullen] [SPARK-4256] Refactor classification metrics tests - extract comparison functions in test f411e70 [Andrew Bullen] [SPARK-4256] Define precision as 1.0 when there are no positive examples; update code formatting per pull request comments d9a09ef [Andrew Bullen] Make Binary Evaluation Metrics functions defined in cases where there are 0 positive or 0 negative examples. (cherry picked from commit 484fecbf1402c25f310be0b0a5ec15c11cbd65c3) Signed-off-by: Xiangrui Meng <meng@databricks.com>
* [SPARK-4370] [Core] Limit number of Netty cores based on executor sizeAaron Davidson2014-11-1217-64/+104
| | | | | | | | | | | | Author: Aaron Davidson <aaron@databricks.com> Closes #3155 from aarondav/conf and squashes the following commits: 7045e77 [Aaron Davidson] Add mesos comment 4770f6e [Aaron Davidson] [SPARK-4370] [Core] Limit number of Netty cores based on executor size (cherry picked from commit b9e1c2eb9b6f7fb609718ef20048a8da452d881b) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-4373][MLLIB] fix MLlib maven testsXiangrui Meng2014-11-1236-82/+108
| | | | | | | | | | | | | | We want to make sure there is at most one spark context inside the same jvm. JoshRosen Author: Xiangrui Meng <meng@databricks.com> Closes #3235 from mengxr/SPARK-4373 and squashes the following commits: 6574b69 [Xiangrui Meng] rename LocalSparkContext to MLlibTestSparkContext 913d48d [Xiangrui Meng] make sure there is at most one spark context inside the same jvm (cherry picked from commit 23f5bdf06a388e08ea5a69e848f0ecd5165aa481) Signed-off-by: Josh Rosen <joshrosen@databricks.com>
* [Release] Bring audit scripts up-to-dateAndrew Or2014-11-124-286/+75
| | | | | | | | This involves a few main changes: - Log all output message to the log file. Previously the log file was not useful because it did not indicate progress. - Remove hive-site.xml in sbt_hive_app to avoid interference - Add the appropriate repositories for new dependencies
* [SPARK-2672] support compressed file in wholeTextFileDavies Liu2014-11-123-13/+103
| | | | | | | | | | | | | | | | The wholeTextFile() method cannot read compressed files; it should be able to, just like textFile(). Author: Davies Liu <davies@databricks.com> Closes #3005 from davies/whole and squashes the following commits: a43fcfb [Davies Liu] remove semicolon c83571a [Davies Liu] remove = if return type is Unit 83c844f [Davies Liu] Merge branch 'master' of github.com:apache/spark into whole 22e8b3e [Davies Liu] support compressed file in wholeTextFile (cherry picked from commit d7d54a44e3ada0e50febe64e9b037dc2c8f6ff61) Signed-off-by: Josh Rosen <joshrosen@databricks.com>
* [SPARK-4369] [MLLib] fix TreeModel.predict() with RDDDavies Liu2014-11-122-12/+26
| | | | | | | | | | | | | | | Fix TreeModel.predict() with RDD, added tests for it. (Also checked that other models don't have this issue) Author: Davies Liu <davies@databricks.com> Closes #3230 from davies/predict and squashes the following commits: 81172aa [Davies Liu] fix predict (cherry picked from commit bd86118c4e980f94916f892c76fb808fd4c8bd85) Signed-off-by: Xiangrui Meng <meng@databricks.com>
* [SPARK-3666] Extract interfaces for EdgeRDD and VertexRDDAnkur Dave2014-11-124-244/+386
| | | | | | | | | | | | | | | | | | | | This discourages users from calling the VertexRDD and EdgeRDD constructor and makes it easier for future changes to ensure backward compatibility. Author: Ankur Dave <ankurdave@gmail.com> Closes #2530 from ankurdave/SPARK-3666 and squashes the following commits: d681f45 [Ankur Dave] Define getPartitions and compute in abstract class for MIMA 1472390 [Ankur Dave] Merge remote-tracking branch 'apache-spark/master' into SPARK-3666 24201d4 [Ankur Dave] Merge remote-tracking branch 'apache-spark/master' into SPARK-3666 cbe15f2 [Ankur Dave] Remove specialized annotation from VertexRDD and EdgeRDD 931b587 [Ankur Dave] Use abstract class instead of trait for binary compatibility 9ba4ec4 [Ankur Dave] Mark (Vertex|Edge)RDDImpl constructors package-private 620e603 [Ankur Dave] Extract VertexRDD interface and move implementation to VertexRDDImpl 55b6398 [Ankur Dave] Extract EdgeRDD interface and move implementation to EdgeRDDImpl (cherry picked from commit a5ef58113667ff73562ce6db381cff96a0b354b0) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [Release] Correct make-distribution.sh log pathAndrew Or2014-11-121-1/+1
|