path: root/sql
* [SPARK-4244][SQL] Support Hive Generic UDFs with constant object inspector parameters
  Cheng Hao, 2014-11-20 (4 files changed, -8/+17)

  The query `SELECT named_struct(lower("AA"), "12", lower("Bb"), "13") FROM src LIMIT 1` throws an exception: some Hive generic UDFs/UDAFs require their input object inspectors to be `ConstantObjectInspector`, but we don't have those until after expression optimization (constant folding) has run. This PR is a workaround; ideally, the `output` of a LogicalPlan should be identical before and after optimization.

  Author: Cheng Hao <hao.cheng@intel.com>
  Closes #3109 from chenghao-intel/optimized. Squashed commits: 487ff79 rebase to the latest master & update the unit test.
  (cherry picked from commit 84d79ee9ec47465269f7b0a7971176da93c96f3f)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Fix function description mistake
  Jacky Li, 2014-11-20 (1 file changed, -1/+1)

  The sample code in the description of SchemaRDD.where is not correct.

  Author: Jacky Li <jacky.likun@gmail.com>
  Closes #3344 from jackylk/patch-6. Squashed commits: 62cd126 [SQL] fix function description mistake.
  (cherry picked from commit ad5f1f3ca240473261162c06ffc5aa70d15a5991)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-2918][SQL] Support CTAS in the EXPLAIN command
  Cheng Hao, 2014-11-20 (2 files changed, -1/+41)

  Hive supports EXPLAIN on CTAS statements, and so did Spark SQL previously, but the support appears to have been reverted during the HiveQL code refactoring.

  Author: Cheng Hao <hao.cheng@intel.com>
  Closes #3357 from chenghao-intel/explain. Squashed commits: 7aace63 Support the CTAS in EXPLAIN command.
  (cherry picked from commit 6aa0fc9f4d95f09383cbcb5f79166c60697e6683)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4318][SQL] Fix empty sum distinct
  Takuya UESHIN, 2014-11-20 (4 files changed, -52/+195)

  Executing SUM(DISTINCT ...) over an empty table throws `java.lang.UnsupportedOperationException: empty.reduceLeft`.

  Author: Takuya UESHIN <ueshin@happy-camper.st>
  Closes #3184 from ueshin/issues/SPARK-4318. Squashed commits: 8168c42 Merge branch 'master' into issues/SPARK-4318; 66fdb0a Re-refine aggregate functions; 6186eb4 Fix Sum of GeneratedAggregate; d2975f6 Refine Sum and Average of GeneratedAggregate; 1bba675 Refine Sum, SumDistinct and Average functions; 917e533 Use aggregate instead of groupBy(); 1a5f874 Add tests to be executed as non-partial aggregation; a5a57d2 Fix empty Average; 22799dc Fix empty Sum and SumDistinct; 65b7dd2 Fix empty sum distinct.
  (cherry picked from commit 2c2e7a44db2ebe44121226f3eac924a0668b991a)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
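  The failure is plain Scala: `reduceLeft` has no zero element and throws on empty input. A minimal sketch of the failure mode and one safe pattern (not Spark's actual fix; note that SQL's SUM over no rows must yield NULL, not 0):

  ```scala
  val values = Seq.empty[Long]
  // values.reduceLeft(_ + _)  // throws java.lang.UnsupportedOperationException: empty.reduceLeft
  val sum: Option[Long] = values.reduceLeftOption(_ + _)  // None, i.e. SQL NULL for empty input
  ```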
* [SPARK-4513][SQL] Support the relational operator '<=>' in Spark SQL
  ravipesala, 2014-11-20 (3 files changed, -1/+14)

  The relational operator '<=>' does not work in Spark SQL, although the same query works in Spark HiveQL.

  Author: ravipesala <ravindra.pesala@huawei.com>
  Closes #3387 from ravipesala/<=>. Squashed commits: 7198e90 Supporting relational operator '<=>' in Spark SQL.
  (cherry picked from commit 98e9419784a9ad5096cfd563fa9a433786a90bd4)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
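  For context, `<=>` is null-safe equality: it is true when both sides are NULL, where plain `=` evaluates to NULL. A hypothetical session against an assumed, already-registered `people` table:

  ```scala
  import org.apache.spark.SparkContext
  import org.apache.spark.sql.SQLContext

  val sc = new SparkContext("local", "nullSafeEq")  // hypothetical context
  val sqlContext = new SQLContext(sc)
  // Rows where both nicknames are NULL match under <=> but not under =.
  val rows = sqlContext.sql(
    "SELECT p1.name FROM people p1 JOIN people p2 ON p1.nickname <=> p2.nickname")
  rows.collect().foreach(println)
  ```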
* [SPARK-4228][SQL] SchemaRDD to JSON
  Dan McClary, 2014-11-20 (4 files changed, -3/+208)

  Here's a simple fix for SchemaRDD-to-JSON conversion.

  Author: Dan McClary <dan.mcclary@gmail.com>
  Closes #3213 from dwmclary/SPARK-4228. Squashed commits: d714e1d fixed PEP 8 error; cac2879 move pyspark comment and doctest to correct location; f9471d3 added pyspark doc and doctest; 6598cee adding complex type queries; 1a5fd30 removing SPARK-4228 from SQLQuerySuite; 4a651f0 cleaned PEP and Scala style failures, moved tests to JsonSuite; 47ceff6 cleaned up scala style issues; 2ee1e70 moved rowToJSON to JsonRDD; 4387dd5 added UserDefinedType, cleaned up case formatting; 8f7bfb6 Map type added to SchemaRDD.toJSON; 1b11980 Map and UserDefinedTypes partially done; 11d2016 formatting and unicode deserialization default fixed; 6af72d1 deleted extraneous comment; 4d11c0c JsonFactory rewrite of toJSON for SchemaRDD; 149dafd wrapped scala toJSON in sql.py; 5e5eb1b switched to Jackson for JSON processing; 6c94a54, aaeba58 added toJSON to pyspark SchemaRDD; 1d171aa updated missing brace on if statement; 319e3ba updated to upstream master with merged SPARK-4228; 424f130 tests pass, ready for pull and PR; 626a5b1, f7d166a added toJSON to SchemaRDD; 5d34e37 merge resolved; d6d19e9 pr example.
  (cherry picked from commit b8e6886fb8ff8f667fb7e600cd727d8649cad1d1)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
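  Assumed usage of the new method (input path and data are hypothetical; per the PR, `toJSON` yields one JSON document per row):

  ```scala
  import org.apache.spark.sql.SQLContext

  val sqlContext = new SQLContext(sc)             // sc: an existing SparkContext
  val people = sqlContext.jsonFile("people.json") // hypothetical input file
  val json = people.toJSON                        // RDD[String], one JSON document per row
  json.take(2).foreach(println)
  ```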
* [SPARK-3938][SQL] Names in-memory columnar RDD with corresponding table name
  Cheng Lian, 2014-11-20 (6 files changed, -16/+23)

  This PR enables the Web UI storage tab to show the in-memory table name instead of the mysterious query plan string as the name of the in-memory columnar RDD.

  Note that after #2501, a single columnar RDD can be shared by multiple in-memory tables, as long as their query results are the same. In this case, only the first cached table name is shown. For example:

  ```sql
  CACHE TABLE first AS SELECT * FROM src;
  CACHE TABLE second AS SELECT * FROM src;
  ```

  The Web UI only shows "In-memory table first".

  Author: Cheng Lian <lian@databricks.com>
  Closes #3383 from liancheng/columnar-rdd-name. Squashed commits: 071907f Fixes tests; 12ddfa6 Names in-memory columnar RDD with corresponding table name.
  (cherry picked from commit abf29187f0342b607fcefe269391d4db58d2a957)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4468][SQL] Fixes Parquet filter creation for inequality predicates with literals on the left hand side
  Cheng Lian, 2014-11-18 (2 files changed, -4/+16)

  For expressions like `10 < someVar`, we should create an `Operators.Gt` filter, but right now an `Operators.Lt` is created. This issue affects all inequality predicates with literals on the left hand side.

  (This bug existed before #3317 and affects branch-1.1. #3338 was opened to backport this to branch-1.1.)

  Author: Cheng Lian <lian@databricks.com>
  Closes #3334 from liancheng/fix-parquet-comp-filter. Squashed commits: 0130897 Fixes Parquet comparison filter generation.
  (cherry picked from commit 423baea953996a66dde671ff6db2fb1f32fbe8cb)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
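  The heart of the fix, as a self-contained sketch with illustrative stand-in types (not Spark's or Parquet's actual classes): when the literal sits on the left, the comparison has to be mirrored before picking the Parquet operator.

  ```scala
  sealed trait Expr
  case class Column(name: String) extends Expr
  case class Literal(value: Int) extends Expr
  case class LessThan(left: Expr, right: Expr)

  sealed trait ParquetFilter
  case class Lt(column: String, value: Int) extends ParquetFilter
  case class Gt(column: String, value: Int) extends ParquetFilter

  def makeFilter(p: LessThan): Option[ParquetFilter] = p match {
    case LessThan(Column(c), Literal(v)) => Some(Lt(c, v)) // someVar < 10  =>  Lt
    case LessThan(Literal(v), Column(c)) => Some(Gt(c, v)) // 10 < someVar  =>  Gt (the bug produced Lt here)
    case _                               => None
  }
  ```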
* [SPARK-3721][PySpark] Broadcast objects larger than 2G
  Davies Liu, 2014-11-18 (2 files changed, -2/+2)

  This patch brings support for broadcasting objects larger than 2G. Since pickle, zlib, FrameSerializer, and Array[Byte] all cannot handle objects larger than 2G, it introduces LargeObjectSerializer for broadcast objects: the object is serialized and compressed into small chunks, and the type Broadcast[Array[Byte]] changes to Broadcast[Array[Array[Byte]]].

  Testing broadcasts of objects larger than 2G is slow and memory hungry, so this was tested manually; it could be added to SparkPerf.

  Author: Davies Liu <davies@databricks.com>
  Author: Davies Liu <davies.liu@gmail.com>
  Closes #2659 from davies/huge. Squashed commits: 7b57a14 add more tests for broadcast; a2f6a02 bug fix; 5875c73, 10a349b, 2514848 address comments; 1c2d928 fix scala style; 091b107 broadcast objects larger than 2G; 28acff9, 4820613, 0c33016, 6182c8f, d94b68f, fda395b merges with master.
  (cherry picked from commit 4a377aff2d36b64a65b54192a987aba44b8f78e0)
  Signed-off-by: Josh Rosen <joshrosen@databricks.com>
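  The chunking idea is language-agnostic; here is a minimal Scala sketch (names and sizes illustrative, not the PR's code) of cutting a payload into bounded chunks, since a single JVM byte array cannot exceed 2G:

  ```scala
  // Read an arbitrarily large stream into chunks of at most chunkSize bytes.
  def chunk(payload: java.io.InputStream, chunkSize: Int): Array[Array[Byte]] = {
    val chunks = scala.collection.mutable.ArrayBuffer[Array[Byte]]()
    val buf = new Array[Byte](chunkSize)
    var n = payload.read(buf)
    while (n != -1) {
      chunks += java.util.Arrays.copyOf(buf, n) // keep only the bytes actually read
      n = payload.read(buf)
    }
    chunks.toArray
  }
  ```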
* [SQL] Support partitioned parquet tables that have the key in both the directory and the file
  Michael Armbrust, 2014-11-18 (2 files changed, -68/+108)

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3272 from marmbrus/keyInPartitionedTable. Squashed commits: 447f08c Support partitioned parquet tables that have the key in both the directory and the file.
  (cherry picked from commit 90d72ec8502f7ec11d2fe42f08c884ad2159266f)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4453][SPARK-4213][SQL] Simplifies Parquet filter generation code
  Cheng Lian, 2014-11-17 (5 files changed, -693/+161)

  While reviewing PRs #3083 and #3161, I noticed that the Parquet record filter generation code can be simplified significantly, following the clue stated in SPARK-4453 (https://issues.apache.org/jira/browse/SPARK-4213). This PR addresses both SPARK-4453 and SPARK-4213 with that simplification.

  While generating the `ParquetTableScan` operator, we need to remove all Catalyst predicates that have already been pushed down to Parquet. Originally, we first generated the record filter and then called `findExpression` to traverse the generated filter and find all pushed-down predicates (see https://github.com/apache/spark/blob/64c6b9bad559c21f25cd9fbe37c8813cdab939f2/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala#L213-L228). That required introducing the `CatalystFilter` class hierarchy to bind each Catalyst predicate to its generated Parquet filter, complicating the code base a lot.

  The basic idea of this PR is that we don't need `findExpression` after filter generation, because we already know a predicate can be pushed down if we can successfully generate its corresponding Parquet filter. SPARK-4213 is fixed by returning `None` for any unsupported predicate type.

  Author: Cheng Lian <lian@databricks.com>
  Closes #3317 from liancheng/simplify-parquet-filters. Squashed commits: d6a9499 Fixes import styling issue; 43760e8 Simplifies Parquet filter generation logic.
  (cherry picked from commit 36b0956a3eadc7343ed0d25c79a6ce0496eaaccd)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
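  The resulting structure, as a sketch with stand-in types (not Spark's actual classes): a predicate counts as pushed down exactly when filter generation succeeds, so no post-hoc traversal is needed.

  ```scala
  trait Predicate
  trait ParquetFilter

  def createFilter(p: Predicate): Option[ParquetFilter] =
    None // stub: return Some(...) only for supported predicate shapes

  def splitPredicates(ps: Seq[Predicate]): (Seq[Predicate], Seq[Predicate]) =
    ps.partition(p => createFilter(p).isDefined) // (pushed to Parquet, evaluated by Spark)
  ```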
* [SPARK-4448][SQL] Unwrap for the ConstantObjectInspector
  Cheng Hao, 2014-11-17 (1 file changed, -4/+32)

  Author: Cheng Hao <hao.cheng@intel.com>
  Closes #3308 from chenghao-intel/unwrap_constant_oi. Squashed commits: 156b500 rebase the master; c5b20ab unwrap for the ConstantObjectInspector.
  (cherry picked from commit ef7c464effa1510b24bd8e665e4df6c4839b0c87)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4443][SQL] Fix statistics for external table in Spark SQL Hive
  w00228970, 2014-11-17 (3 files changed, -3/+12)

  The `totalSize` of an external table is always zero, which skews the join strategy: a broadcast join is always chosen for external tables.

  Author: w00228970 <wangfei1@huawei.com>
  Closes #3304 from scwf/statistics. Squashed commits: 568f321 fix statistics for external table.
  (cherry picked from commit 42389b1780311d90499b4ce2315ceabf5b6ab384)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types
  Cheng Lian, 2014-11-17 (4 files changed, -114/+141)

  This PR is exactly the same as #3178 except it reverts the `FileStatus.isDir` to `FileStatus.isDirectory` change, since that doesn't compile with Hadoop 1.

  Author: Cheng Lian <lian@databricks.com>
  Closes #3298 from liancheng/date-for-thriftserver. Squashed commits: 866037e Reverts isDirectory to isDir (it breaks the Hadoop 1 profile); 6f71d0b Makes toHiveString static; 26fa955 Fixes complex type support in Hive 0.13.1 shim; a92882a Updates HiveShim for 0.13.1; 73f442b Adds Date support for HiveThriftServer2 (Hive 0.12.0).
  (cherry picked from commit 6b7f2f753d16ff038881772f1958e3f4fd5597a7)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Construct the MutableRow from an Array
  Cheng Hao, 2014-11-17 (1 file changed, -2/+4)

  Author: Cheng Hao <hao.cheng@intel.com>
  Closes #3217 from chenghao-intel/mutablerow. Squashed commits: e8a10bd revert the change of Row object; 4681aea Add toMutableRow method in object Row; a751838 Construct the MutableRow from an existing row.
  (cherry picked from commit 69e858cc7748b6babadd0cbe20e65f3982161cbf)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4425][SQL] Handle NaN or Infinity cast to Timestamp correctly
  Takuya UESHIN, 2014-11-17 (2 files changed, -2/+17)

  `Cast` from `NaN` or `Infinity` of `Double` or `Float` to `TimestampType` throws `NumberFormatException`.

  Author: Takuya UESHIN <ueshin@happy-camper.st>
  Closes #3283 from ueshin/issues/SPARK-4425. Squashed commits: 14def0c Fix Cast to be able to handle NaN or Infinity to TimestampType.
  (cherry picked from commit 566c791931645bfaaaf57ee5a15b9ffad534f81e)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
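  A self-contained guard mirroring the intent of the fix (not Spark's actual `Cast` code): NaN and infinite values have no timestamp equivalent, so the cast should yield null rather than throw.

  ```scala
  def doubleToTimestamp(d: Double): java.sql.Timestamp =
    if (d.isNaN || d.isInfinity) null              // no sensible timestamp: SQL NULL
    else new java.sql.Timestamp((d * 1000).toLong) // seconds to milliseconds
  ```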
* [SPARK-4420][SQL] Change nullability of Cast from DoubleType/FloatType to DecimalType
  Takuya UESHIN, 2014-11-17 (2 files changed, -2/+14)

  This is a follow-up of SPARK-4390 (https://issues.apache.org/jira/browse/SPARK-4390, #3256).

  Author: Takuya UESHIN <ueshin@happy-camper.st>
  Closes #3278 from ueshin/issues/SPARK-4420. Squashed commits: 7fea558 Add some tests; cb2301a Fix tests; 133bad5 Change nullability of Cast from DoubleType/FloatType to DecimalType.
  (cherry picked from commit 3a81a1c9e0963173534d96850f3c0b7a16350838)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Makes conjunction pushdown more aggressive for in-memory table
  Cheng Lian, 2014-11-17 (2 files changed, -5/+11)

  This is inspired by the Parquet record filter generation code (https://github.com/apache/spark/blob/64c6b9bad559c21f25cd9fbe37c8813cdab939f2/sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetFilters.scala#L387-L400).

  Author: Cheng Lian <lian@databricks.com>
  Closes #3318 from liancheng/aggresive-conj-pushdown. Squashed commits: 78b69d2 Makes conjunction pushdown more aggressive.
  (cherry picked from commit 5ce7dae859dc273b0fc532c9456b5960b1eca399)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
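  The idea in miniature, with illustrative stand-in types: even when only one side of a conjunction can be translated, that side can still be pushed down, because `a AND b` implies `a`.

  ```scala
  trait Expr
  trait Filter
  case class AndFilter(left: Filter, right: Filter) extends Filter

  def translate(e: Expr): Option[Filter] =
    None // stub: succeeds only for supported predicate shapes

  def translateAnd(l: Expr, r: Expr): Option[Filter] =
    (translate(l), translate(r)) match {
      case (Some(lf), Some(rf)) => Some(AndFilter(lf, rf))
      case (Some(lf), None)     => Some(lf) // push the translatable half alone
      case (None, Some(rf))     => Some(rf)
      case (None, None)         => None
    }
  ```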
* Preparing development version 1.2.1-SNAPSHOT
  Ubuntu, 2014-11-17 (4 files changed, -4/+4)
* Preparing Spark release v1.2.0-snapshot1
  Ubuntu, 2014-11-17 (4 files changed, -4/+4)
* Revert "Preparing Spark release v1.2.0-snapshot0"
  Patrick Wendell, 2014-11-16 (4 files changed, -4/+4)

  This reverts commit bc09875799aa373f4320d38b02618173ffa4c96f.
* Revert "Preparing development version 1.2.1-SNAPSHOT"
  Patrick Wendell, 2014-11-16 (4 files changed, -8/+8)

  This reverts commit 6c6fd218c83a049c874b8a0ea737333c1899c94a.
* [SPARK-4410][SQL] Add support for external sort
  Michael Armbrust, 2014-11-16 (4 files changed, -6/+59)

  Adds a new operator that uses Spark's `ExternalSort` class. It is off by default for now, but we might consider making it the default if benchmarks show that it does not regress performance.

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3268 from marmbrus/externalSort. Squashed commits: 48b9726 comments; b98799d Add test; afd7562 Add support for external sort.
  (cherry picked from commit 64c6b9bad559c21f25cd9fbe37c8813cdab939f2)
  Signed-off-by: Reynold Xin <rxin@databricks.com>
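  Assuming the switch lives in SQLConf, enabling it might look like the following; the key name is my reading of this branch and may differ:

  ```scala
  // sqlContext: an existing org.apache.spark.sql.SQLContext
  sqlContext.setConf("spark.sql.planner.externalSort", "true") // assumed key; off by default
  ```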
* Preparing development version 1.2.1-SNAPSHOT
  Ubuntu, 2014-11-17 (4 files changed, -8/+8)
* Preparing Spark release v1.2.0-snapshot0
  Ubuntu, 2014-11-17 (4 files changed, -4/+4)
* Revert "[SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types"
  Michael Armbrust, 2014-11-16 (4 files changed, -142/+115)

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3292 from marmbrus/revert4309. Squashed commits: 808e96e Revert "[SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types".
  (cherry picked from commit 45ce3273cb618d14ec4d20c4c95699634b951086)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types
  Cheng Lian, 2014-11-16 (4 files changed, -115/+142)

  SPARK-4407 was detected while working on SPARK-4309. These two are merged into a single PR since the 1.2.0 RC is approaching.

  Author: Cheng Lian <lian@databricks.com>
  Closes #3178 from liancheng/date-for-thriftserver. Squashed commits: 6f71d0b Makes toHiveString static; 26fa955 Fixes complex type support in Hive 0.13.1 shim; a92882a Updates HiveShim for 0.13.1; 73f442b Adds Date support for HiveThriftServer2 (Hive 0.12.0).
  (cherry picked from commit cb6bd83a91d9b4a227dc6467255231869c1820e2)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4426][SQL][Minor] The symbol of BitwiseOr is wrong, should not be '&'
  Kousuke Saruta, 2014-11-15 (1 file changed, -1/+1)

  The symbol of BitwiseOr is defined as '&', which is wrong; it should be '|'.

  Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
  Closes #3284 from sarutak/bitwise-or-symbol-fix. Squashed commits: aff4be5 Fixed symbol of BitwiseOr.
  (cherry picked from commit 84468b2e2031d646dcf035cb18947170ba326ccd)
  Signed-off-by: Reynold Xin <rxin@databricks.com>
* Added contains(key) to Metadata
  kai, 2014-11-14 (2 files changed, -0/+16)

  Add contains(key) to org.apache.spark.sql.catalyst.util.Metadata to test for the existence of a key. Without it, Metadata's get methods throw NoSuchElementException if the key does not exist. Test cases are added to MetadataSuite as well.

  Author: kai <kaizeng@eecs.berkeley.edu>
  Closes #3273 from kai-zeng/metadata-fix. Squashed commits: 74b3d03 Added contains(key) to Metadata.
  (cherry picked from commit cbddac23696d89b672dce380cc7360a873e27b3b)
  Signed-off-by: Reynold Xin <rxin@databricks.com>
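  Assumed usage of the new check (the builder API shown is my understanding of this branch):

  ```scala
  import org.apache.spark.sql.catalyst.util.{Metadata, MetadataBuilder}

  val meta: Metadata = new MetadataBuilder().putLong("maxLength", 32L).build()
  if (meta.contains("maxLength")) {     // the new existence check added by this commit
    val len = meta.getLong("maxLength") // safe: getLong throws NoSuchElementException if absent
    println(len)
  }
  ```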
* [SPARK-4412][SQL] Fix Spark's control of Parquet logging
  Jim Carroll, 2014-11-14 (1 file changed, -0/+15)

  The Spark ParquetRelation.scala code assumes that the parquet.Log class has already been loaded. If ParquetRelation.enableLogForwarding executes before parquet.Log is loaded, it has no effect: parquet.Log has a static initializer that needs to run before enableLogForwarding, otherwise whatever enableLogForwarding does gets undone by that initializer. At least currently, if an application simply reads a Parquet file before doing anything else with Parquet, the class hasn't been loaded yet. The fix is to force the static initializer to run as part of enableLogForwarding.

  Author: Jim Carroll <jim@dontcallme.com>
  Closes #3271 from jimfcarroll/parquet-logging. Squashed commits: 37bdff7 Fix Spark's control of Parquet logging.
  (cherry picked from commit 37482ce5a7b875f17d32a5e8c561cc8e9772c9b3)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
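  The forcing pattern in miniature: `Class.forName` runs a class's static initializer, so invoking it before reconfiguring the logger guarantees the initializer cannot later undo the changes. A sketch of the pattern, not the PR's exact code:

  ```scala
  // Loading the class triggers its static initializer exactly once.
  Class.forName("parquet.Log")
  // ... only now detach or replace the handlers that parquet.Log installed.
  ```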
* [SPARK-4365][SQL] Remove unnecessary filter call on records returned from parquet library
  Yash Datta, 2014-11-14 (1 file changed, -1/+1)

  Since the parquet library has been updated, we no longer need to filter the records it returns for nulls; the library now skips those itself. From parquet-hadoop/src/main/java/parquet/hadoop/InternalParquetRecordReader.java:

  ```java
  public boolean nextKeyValue() throws IOException, InterruptedException {
    boolean recordFound = false;
    while (!recordFound) {
      // no more records left
      if (current >= total) { return false; }
      try {
        checkRead();
        currentValue = recordReader.read();
        current++;
        if (recordReader.shouldSkipCurrentRecord()) {
          // this record is being filtered via the filter2 package
          if (DEBUG) LOG.debug("skipping record");
          continue;
        }
        if (currentValue == null) {
          // only happens with FilteredRecordReader at end of block
          current = totalCountLoadedSoFar;
          if (DEBUG) LOG.debug("filtered record reader reached end of block");
          continue;
        }
        recordFound = true;
        if (DEBUG) LOG.debug("read value: " + currentValue);
      } catch (RuntimeException e) {
        throw new ParquetDecodingException(
          format("Can not read value at %d in block %d in file %s", current, currentBlock, file), e);
      }
    }
    return true;
  }
  ```

  Author: Yash Datta <Yash.Datta@guavus.com>
  Closes #3229 from saucam/remove_filter. Squashed commits: 8909ae9 SPARK-4365: Remove unnecessary filter call on records returned from parquet library.
  (cherry picked from commit 63ca3af66f9680fd12adee82fb4d342caae5cea4)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4386] Improve performance when writing Parquet files
  Jim Carroll, 2014-11-14 (1 file changed, -6/+8)

  If you profile the writing of a Parquet file, the single worst time-consuming call inside org.apache.spark.sql.parquet.MutableRowWriteSupport.write is scala.collection.AbstractSequence.size. This is because the size call ends up COUNTING the elements in scala.collection.LinearSeqOptimized.length ("optimized?"). This doesn't need to happen: "size" was being called repeatedly wherever needed rather than called once at the top of the method and stored in a 'val'.

  Author: Jim Carroll <jim@dontcallme.com>
  Closes #3254 from jimfcarroll/parquet-perf. Squashed commits: 30cc0b5 Improve performance when writing Parquet files.
  (cherry picked from commit f76b9683706232c3d4e8e6e61627b8188dcb79dc)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
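  The shape of the fix, as an illustrative sketch (not the actual write-support code): on a linear sequence `size` is O(n), so using it in a loop condition makes the loop quadratic; count once and store the result in a val.

  ```scala
  def writeAll(values: Seq[Int], write: Int => Unit): Unit = {
    val size = values.size // computed a single time, not once per iteration
    var i = 0
    while (i < size) {
      write(values(i))
      i += 1
    }
  }
  ```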
* [SPARK-4322][SQL] Enables struct fields as sub expressions of grouping fields
  Cheng Lian, 2014-11-14 (3 files changed, -20/+34)

  While resolving struct fields, the resulting `GetField` expression is wrapped with an `Alias` to make it a named expression. Assume `a` is a struct instance with a field `b`; then `"a.b"` will be resolved as `Alias(GetField(a, "b"), "b")`. Thus, for the following SQL query:

  ```sql
  SELECT a.b + 1 FROM t GROUP BY a.b + 1
  ```

  the grouping expression is

  ```scala
  Add(GetField(a, "b"), Literal(1, IntegerType))
  ```

  while the aggregation expression is

  ```scala
  Add(Alias(GetField(a, "b"), "b"), Literal(1, IntegerType))
  ```

  This mismatch makes the above SQL query fail during both the analysis and execution phases. This PR fixes the issue by removing the alias when substituting aggregation expressions.

  Author: Cheng Lian <lian@databricks.com>
  Closes #3248 from liancheng/spark-4322. Squashed commits: 23a46ea Code simplification; dd20a79 Should only trim aliases around `GetField`s; 7f46532 Enables struct fields as sub expressions of grouping fields.
  (cherry picked from commit 0c7b66bd449093bb5d2dafaf91d54e63e601e320)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Don't shuffle code generated rows
  Michael Armbrust, 2014-11-14 (2 files changed, -2/+9)

  When sort-based shuffle and code generation are both on, we were trying to ship the code-generated rows during a shuffle. This doesn't work because the generated classes don't exist on the other side. Instead, we now copy into a generic row before shipping.

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3263 from marmbrus/aggCodeGen. Squashed commits: f6ba8cf fix and test.
  (cherry picked from commit 4b4b50c9e596673c1534df97effad50d107a8007)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Minor cleanup of comments, errors and override
  Michael Armbrust, 2014-11-14 (3 files changed, -10/+12)

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3257 from marmbrus/minorCleanup. Squashed commits: d8b5abc Use interpolation; 2fdf903 Better error message when coalesce can't be resolved; f9fa6cf Methods in a final class do not also need to be final, use override; 199fd98 Fix typo.
  (cherry picked from commit f805025e8efe9cd522e8875141ec27df8d16bbe0)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4391][SQL] Configure parquet filters using SQLConf
  Michael Armbrust, 2014-11-14 (5 files changed, -11/+21)

  This is more uniform with the rest of SQL configuration and allows filter pushdown to be turned on and off without restarting the SparkContext. In this PR I also turn off filter pushdown by default, due to a number of outstanding issues (in particular SPARK-4258). When those are fixed we should turn it back on by default.

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3258 from marmbrus/parquetFilters. Squashed commits: 5655bfe Remove extra line; 15e9a98 Enable filters for tests; 75afd39 Fix comments; 78fa02d off by default; e7f9e16 First draft of correctly configuring parquet filter pushdown.
  (cherry picked from commit e47c38763914aaf89a7a851c5f41b7549a75615b)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
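  With the conf key this PR appears to introduce, re-enabling pushdown for a session would look roughly like:

  ```scala
  // sqlContext: an existing org.apache.spark.sql.SQLContext
  sqlContext.setConf("spark.sql.parquet.filterPushdown", "true") // assumed key; off by default
  ```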
* [SPARK-4390][SQL] Handle NaN cast to decimal correctly
  Michael Armbrust, 2014-11-14 (3 files changed, -1/+9)

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3256 from marmbrus/NanDecimal. Squashed commits: 4c3ba46 fix style; d360f83 Handle NaN cast to decimal.
  (cherry picked from commit a0300ea32a9d92bd51c72930bc3979087b0082b2)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4333][SQL] Correctly log number of iterations in RuleExecutor
  DoingDone9, 2014-11-14 (1 file changed, -1/+2)

  When the RuleExecutor loop breaks, the number of iterations logged should be (iteration - 1), not iteration. The log reads "Fixed point reached for batch ${batch.name} after 3 iterations." when it really performed only 2.

  Author: DoingDone9 <799203320@qq.com>
  Closes #3180 from DoingDone9/issue_01. Squashed commits: 571e2ed Update RuleExecutor.scala; 46514b6 When the RuleExecutor loop breaks, the iteration count should be iteration - 1, not iteration.
  (cherry picked from commit 0cbdb01e1c817e71c4f80de05c4e5bb11510b368)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
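  A minimal sketch of the corrected accounting (not `RuleExecutor` itself): the counter is bumped before the convergence check, so when the plan stops changing only `iteration - 1` effective passes have run.

  ```scala
  def runToFixedPoint[A](start: A)(applyRules: A => A): A = {
    var current = start
    var iteration = 1
    var converged = false
    while (!converged) {
      val next = applyRules(current)
      if (next == current) {
        // The pass that detected convergence changed nothing, hence the -1.
        println(s"Fixed point reached after ${iteration - 1} iterations.")
        converged = true
      } else {
        current = next
        iteration += 1
      }
    }
    current
  }
  ```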
* SPARK-4375. No longer require -Pscala-2.10
  Sandy Ryza, 2014-11-14 (2 files changed, -12/+2)

  It seems like the winds might have moved away from this approach, but I wanted to post the PR anyway because I got it working and to show what it would look like.

  Author: Sandy Ryza <sandy@cloudera.com>
  Closes #3239 from sryza/sandy-spark-4375. Squashed commits: 0ffbe95 Enable -Dscala-2.11 in sbt; cd42d94 Update doc; f6644c3 SPARK-4375 take 2.
  (cherry picked from commit f5f757e4ed80759dc5668c63d5663651689f8da8)
  Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* [SPARK-4245][SQL] Fix containsNull of the result ArrayType of CreateArray expression
  Takuya UESHIN, 2014-11-14 (5 files changed, -2/+106)

  The `containsNull` of the result `ArrayType` of `CreateArray` should be `true` only if the children list is empty or at least one child is nullable.

  Author: Takuya UESHIN <ueshin@happy-camper.st>
  Closes #3110 from ueshin/issues/SPARK-4245. Squashed commits: 6f64746 Move equalsIgnoreNullability method into DataType; 5a90e02 Refine InsertIntoHiveType and add some comments; cbecba8 Fix a test title; 884ec37 Merge branch 'master' into issues/SPARK-4245; 3c5274b Add tests to insert data of types ArrayType / MapType / StructType with nullability false into a Hive table; 41a94a9 Replace InsertIntoTable with InsertIntoHiveTable if data types ignoring nullability are the same; 43e6ef5 Fix containsNull for empty array; 778e997 Fix containsNull of the result ArrayType of CreateArray expression.
  (cherry picked from commit bbd8f5bee81d5788c356977c173dd1edc42c77a3)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
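  The corrected rule fits in one line; `Expression` below is a stand-in trait, not Catalyst's:

  ```scala
  trait Expression { def nullable: Boolean }

  // Empty arrays default to containsNull = true; otherwise it is true
  // exactly when some element expression can produce null.
  def containsNull(children: Seq[Expression]): Boolean =
    children.isEmpty || children.exists(_.nullable)
  ```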
* [SPARK-4239][SQL] Support view in HiveQl
  Daoyuan Wang, 2014-11-14 (42 files changed, -17/+5098)

  Views like the following are still not supported:

  ```sql
  CREATE VIEW view3(valoo)
  TBLPROPERTIES ("fear" = "factor")
  AS SELECT upper(value) FROM src WHERE key=86;
  ```

  because the text stored in the metastore for this view looks like

  ```sql
  select `_c0` as `valoo`
  from (select upper(`src`.`value`) from `default`.`src` where ...) `view3`
  ```

  and catalyst cannot resolve `_c0` for this query. For views without a column-name definition in parentheses, it works fine.

  Author: Daoyuan Wang <daoyuan.wang@intel.com>
  Closes #3131 from adrian-wang/view. Squashed commits: 8a56fd6 michael's comments; e46c056 add some golden file; 079290a remove useless import; 88afcad support view in HiveQl.
  (cherry picked from commit ade72c436276237f305d6a6aa4b594d43bcc4743)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4394][SQL] Data Sources API Improvements
  Michael Armbrust, 2014-11-14 (9 files changed, -15/+32)

  This PR adds two features to the data sources API:
  - Support for pushing down `IN` filters
  - The ability for relations to optionally provide information about their `sizeInBytes`

  Author: Michael Armbrust <michael@databricks.com>
  Closes #3260 from marmbrus/sourcesImprovements. Squashed commits: 9a5e171 Use method instead of configuration directly; 99c0e6b Add support for sizeInBytes; 416f167 Support for IN in data sources API; 2a04ab3 Simplify implementation of InSet.
  (cherry picked from commit 77e845ca7726ffee2d6f8e33ea56ec005dde3874)
  Signed-off-by: Reynold Xin <rxin@databricks.com>
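  An illustrative stand-in for the second feature (not Spark's actual trait): a relation that reports an estimated size lets the planner decide whether it is small enough to broadcast in a join.

  ```scala
  trait SizedRelation {
    def sizeInBytes: Long
  }

  final class CsvRelation(rowCount: Long, avgRowBytes: Long) extends SizedRelation {
    // Compared by the planner against its auto-broadcast threshold.
    override def sizeInBytes: Long = rowCount * avgRowBytes
  }
  ```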
* Support cross building for Scala 2.11
  Prashant Sharma, 2014-11-11 (2 files changed, -6/+24)

  Let's give this another go, using a version of Hive that shades its JLine dependency.

  Author: Prashant Sharma <prashant.s@imaginea.com>
  Author: Patrick Wendell <pwendell@gmail.com>
  Closes #3159 from pwendell/scala-2.11-prashant. Squashed commits: e93aa3e Restoring -Phive-thriftserver profile and cleaning up build script; f65d17d Fixing build issue due to merge conflict; a8c41eb Reverting dev/run-tests back to master state; 7a6eb18 Merge remote-tracking branch 'apache/master' into scala-2.11-prashant; 583aa07 REVERT ME: removed hive thriftserver; 3680e58 Revert "REVERT ME: Temporarily removing some Cli tests."; 935fb47 Revert "Fixed by disabling a few tests temporarily."; 925e90f Fixed by disabling a few tests temporarily; 2fffed3 Exclude groovy from sbt build, and also provide a way for such instances in future; 8bd4e40 Switched to gmaven-plus, which fixes the random failures observed with its predecessor gmaven; 5272ce5 SPARK_SCALA_VERSION related bugs; 2121071 Migrating version detection to PySpark; b1ed44d REVERT ME: Temporarily removing some Cli tests; 1743a73 Removing decimal test that doesn't work with Scala 2.11; f5cad4e Add Scala 2.11 docs; 210d7e1 Revert "Testing new Hive version with shaded jline"; 48518ce Remove association of Hive and Thriftserver profiles; e9d0a06 Revert "Enable thriftserver for Scala 2.10 only"; 67ec364 Guard building of thriftserver around Scala 2.10 check; 8502c23 Enable thriftserver for Scala 2.10 only; e22b104 Small fix in pom file; ec402ab Various fixes; 0be5a9d Testing new Hive version with shaded jline; 4eaec65 Changed scripts to ignore target; 5167bea small correction; a4fcac6 Run against scala 2.11 on jenkins; 80285f4 Maven equivalent of setting spark.executor.extraClassPath during tests; 034b369 Setting test jars on executor classpath during tests from sbt; d4874cb Fixed Python Runner suite, null check should be first case in scala 2.11; 6f50f13 Fixed build after rebasing with master, we should use ${scala.binary.version} instead of just 2.10; e56ca9d Print an error if a build for both 2.10 and 2.11 is spotted; 937c0b8 SCALA_VERSION -> SPARK_SCALA_VERSION; cb059b0 Code review; 0476e5e Scala 2.11 support with repl and all build changes.
  (cherry picked from commit daaca14c16dc2c1abc98f15ab8c6f7c14761b627)
  Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* [SPARK-4274][SQL] Fix NPE in printing the details of the query plan
  Cheng Hao, 2014-11-10 (1 file changed, -1/+1)

  Author: Cheng Hao <hao.cheng@intel.com>
  Closes #3139 from chenghao-intel/comparison_test. Squashed commits: f5d7146 avoid exception when printing whether codegen is enabled.
  (cherry picked from commit c764d0ac1c6410ca2dd2558cb6bcbe8ad5f02481)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4149][SQL] ISO 8601 support for JSON date time strings
  Daoyuan Wang, 2014-11-10 (3 files changed, -2/+40)

  This implements the feature davies mentioned in https://github.com/apache/spark/pull/2901#discussion-diff-19313312.

  Author: Daoyuan Wang <daoyuan.wang@intel.com>
  Closes #3012 from adrian-wang/iso8601. Squashed commits: 50df6e7 json data timestamp ISO 8601 support.
  (cherry picked from commit a1fc059b69c9ed150bf8a284404cc149ddaa27d6)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
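  One standard-library way to accept ISO 8601 strings on the JVM, as an illustration rather than necessarily the parser this PR uses:

  ```scala
  import javax.xml.bind.DatatypeConverter

  val cal = DatatypeConverter.parseDateTime("2014-11-10T12:34:56.000Z") // java.util.Calendar
  val ts  = new java.sql.Timestamp(cal.getTimeInMillis)
  ```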
* [SPARK-4250][SQL] Fix bug of constant null value mapping to ConstantObjectInspector
  Cheng Hao, 2014-11-10 (11 files changed, -86/+199)

  Author: Cheng Hao <hao.cheng@intel.com>
  Closes #3114 from chenghao-intel/constant_null_oi. Squashed commits: e603bda fix the bug of null value for primitive types; 50a13ba fix the timezone issue; f54f369 fix bug of constant null value for ObjectInspector.
  (cherry picked from commit fa777833b52b6f339cdc335e8e3935cfe9a2a7eb)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Remove a decimal case branch that has no effect at runtime
  Xiangrui Meng, 2014-11-10 (1 file changed, -1/+0)

  It generates warnings at compile time. /cc marmbrus

  Author: Xiangrui Meng <meng@databricks.com>
  Closes #3192 from mengxr/dtc-decimal. Squashed commits: 955e9fb remove a decimal case branch that has no effect.
  (cherry picked from commit d793d80c8084923ea04dcf7d268eec8ede490127)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SPARK-4319][SQL] Enable an ignored test "null count"
  Takuya UESHIN, 2014-11-10 (2 files changed, -9/+9)

  Author: Takuya UESHIN <ueshin@happy-camper.st>
  Closes #3185 from ueshin/issues/SPARK-4319. Squashed commits: a44a38e Enable an ignored test "null count".
  (cherry picked from commit dbf10588de03e8ea993fff687a78727eff55db1f)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* [SQL] Support UDT to Hive types conversion (Hive-to-UDT is not supported)
  Xiangrui Meng, 2014-11-10 (2 files changed, -1/+9)

  /cc marmbrus

  Author: Xiangrui Meng <meng@databricks.com>
  Closes #3164 from mengxr/hive-udt. Squashed commits: 57c7519 support udt->hive types (hive->udt is not supported).
  (cherry picked from commit 894a7245c379b2e823ae7d81cc9228e60ba47c78)
  Signed-off-by: Michael Armbrust <michael@databricks.com>
* SPARK-1209 [CORE] (Take 2) SparkHadoop{MapRed,MapReduce}Util should not use package org.apache.hadoop
  Sean Owen, 2014-11-09 (2 files changed, -0/+2)

  andrewor14 Another try at SPARK-1209, to address https://github.com/apache/spark/pull/2814#issuecomment-61197619

  I successfully tested with `mvn -Dhadoop.version=1.0.4 -DskipTests clean package; mvn -Dhadoop.version=1.0.4 test`; I assume that is what failed Jenkins last time. I also tried `-Dhadoop.version=1.2.1` and `-Phadoop-2.4 -Pyarn -Phive` for more coverage.

  So this is why the class was put in `org.apache.hadoop` to begin with, I assume. One option is to leave this as-is for now and move it only when Hadoop 1.0.x support goes away. This is the other option, which adds a call to force the constructor to be public at run time. It's probably less surprising than putting Spark code in `org.apache.hadoop`, but it does involve reflection. A `SecurityManager` might forbid this, but it would forbid a lot of stuff Spark does. This also seems to affect only Hadoop 1.0.x.

  Author: Sean Owen <sowen@cloudera.com>
  Closes #3048 from srowen/SPARK-1209. Squashed commits: 0d48f4b For Hadoop 1.0.x, make certain constructors public, which were public in later versions; 466e179 Disable MIMA warnings resulting from moving the class (this was also part of the PairRDDFunctions type hierarchy, though?); eb61820 Move SparkHadoopMapRedUtil / SparkHadoopMapReduceUtil from org.apache.hadoop to org.apache.spark.
  (cherry picked from commit f8e5732307dcb1482d9bcf1162a1090ef9a7b913)
  Signed-off-by: Patrick Wendell <pwendell@gmail.com>