Commit log

PreparedStatement.executeUpdate for DDLs
New changes with JDBCRDD
Author: somideshmukh <somilde@us.ibm.com>
Closes #9733 from somideshmukh/SomilBranch-1.1.

UnsafeHashedRelations
https://issues.apache.org/jira/browse/SPARK-11792
Right now, SizeEstimator will "think" a small UnsafeHashedRelation is several GBs.
Author: Yin Huai <yhuai@databricks.com>
Closes #9788 from yhuai/SPARK-11792.

I also found a bug with self-joins returning incorrect results in the Dataset API. Two test cases are attached, and I filed SPARK-11803.
Author: Reynold Xin <rxin@databricks.com>
Closes #9789 from rxin/SPARK-11802.

Support years in the range 0 <= year < 1000
Author: Davies Liu <davies@databricks.com>
Closes #9701 from davies/leading_zero.

serialization
They were previously serialized using Spark's default serializer.
Author: Reynold Xin <rxin@databricks.com>
Closes #9787 from rxin/SPARK-11797.

I also wrote a test case, but unfortunately it does not work due to SPARK-11795.
Author: Reynold Xin <rxin@databricks.com>
Closes #9784 from rxin/SPARK-11503.

Currently the size of a cached batch is only controlled by `batchSize` (default value 10000), which does not work well with the size of the serialized columns (for example, complex types). The memory used to build the batch is not accounted for, so it is easy to OOM (especially after unified memory management).
This PR introduces a hard limit of 4M for the total size of the columns (up to 50 columns of uncompressed primitive columns).
It also changes the way the buffer grows: double it each time, then trim it once finished.
cc liancheng
Author: Davies Liu <davies@databricks.com>
Closes #9760 from davies/cache_limit.
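
A minimal sketch of the grow-by-doubling strategy described above, with illustrative names rather than the actual column builder code:
```scala
import java.nio.ByteBuffer

// Double the buffer whenever it runs out of room...
def ensureCapacity(buf: ByteBuffer, extra: Int): ByteBuffer = {
  if (buf.remaining >= extra) {
    buf
  } else {
    var newSize = buf.capacity * 2
    while (newSize - buf.position < extra) newSize *= 2
    val grown = ByteBuffer.allocate(newSize)
    buf.flip()
    grown.put(buf) // copy the bytes written so far
  }
}

// ...then trim the final buffer to the bytes actually used.
def trim(buf: ByteBuffer): ByteBuffer = {
  buf.flip()
  ByteBuffer.allocate(buf.limit).put(buf)
}
```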

Add ARRAY support to `PostgresDialect`.
Nested ARRAY is not allowed for now because it's hard to get the array dimension info. See http://stackoverflow.com/questions/16619113/how-to-get-array-base-type-in-postgres-via-jdbc
Thanks to mariusvniekerk for the initial work!
Close https://github.com/apache/spark/pull/9137
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9662 from cloud-fan/postgre.
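
A hedged usage sketch: reading a Postgres table with a one-dimensional array column through the JDBC data source. The URL and table name are hypothetical; any table with a column such as `tags text[]` would exercise the new mapping.
```scala
// Hypothetical connection details.
val df = sqlContext.read.format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/testdb")
  .option("dbtable", "articles")
  .load()
df.printSchema() // tags should now map to ArrayType(StringType)
```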

the members
Based on the comment of cloud-fan in https://github.com/apache/spark/pull/9216, update AttributeReference's hashCode function to also include the hash codes of the other members: name, nullable, and qualifiers.
Here, I am not 100% sure if we should include name in the hashCode calculation, since the original hashCode calculation does not include it.
marmbrus cloud-fan Please review if the changes are good.
Author: gatorsmile <gatorsmile@gmail.com>
Closes #9761 from gatorsmile/hashCodeNamedExpression.
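
A minimal sketch of the kind of change described, combining the hash codes of all the listed members; the field set comes from the description above, not from the actual AttributeReference source:
```scala
// Illustrative stand-in for AttributeReference.
case class AttrRef(name: String, nullable: Boolean, qualifiers: Seq[String], exprId: Long) {
  override def hashCode(): Int = {
    var h = 17
    h = h * 37 + name.hashCode
    h = h * 37 + nullable.hashCode
    h = h * 37 + qualifiers.hashCode
    h = h * 37 + exprId.hashCode
    h
  }
}
```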

This PR adds a new option `spark.sql.hive.thriftServer.singleSession` for disabling multi-session support in the Thrift server.
Note that this option is added as a Spark configuration (retrieved from `SparkConf`) rather than a Spark SQL configuration (retrieved from `SQLConf`). This is because all SQL configurations are per-session; since multi-session support is on by default, no single JDBC connection could modify a global configuration like the newly added one.
Author: Cheng Lian <lian@databricks.com>
Closes #9740 from liancheng/spark-11089.single-session-option.
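
A hedged sketch of enabling the option when building the SparkConf; the key comes from the description above, while the way you launch the Thrift server may differ:
```scala
import org.apache.spark.SparkConf

// Make all JDBC connections share a single session.
val conf = new SparkConf()
  .set("spark.sql.hive.thriftServer.singleSession", "true")
```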

java.util.List[StructField])" in "StructType" gets ClassCastException
In the previous implementation, fields.toArray casts the java.util.List[StructField] into an Array[Object], which cannot be cast into Array[StructField]; invoking this method therefore throws "java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Lorg.apache.spark.sql.types.StructField;"
I directly cast java.util.List[StructField] into Array[StructField] in this patch.
Author: mayuanwen <mayuanwen@qiyi.com>
Closes #9649 from jackieMaKing/Spark-11679.
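
A minimal sketch of the root cause and the typed-array remedy; illustrative, not the patch itself:
```scala
import java.util.{ArrayList => JArrayList}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val fields = new JArrayList[StructField]()
fields.add(StructField("name", StringType))

// The no-arg toArray returns Array[Object]; casting that to
// Array[StructField] fails at runtime with ClassCastException.
// The typed overload allocates an array of the right element type:
val typed: Array[StructField] = fields.toArray(new Array[StructField](0))
val schema = StructType(typed)
```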

HiveFunctionRegistry
According to discussion in PR #9664, the anonymous `HiveFunctionRegistry` in `HiveContext` can be removed now.
Author: Cheng Lian <lian@databricks.com>
Closes #9737 from liancheng/spark-11191.follow-up.

The randomly generated ArrayData used for the UDT `ExamplePoint` in `RowEncoderSuite` sometimes doesn't have enough elements, in which case the test fails. This patch fixes it.
Author: Liang-Chi Hsieh <viirya@appier.com>
Closes #9757 from viirya/fix-randomgenerated-udt.

between NullType and StringType
While executing the PromoteStrings rule, if one side of a binary comparison is StringType and the other side is not, the current code promotes (casts) the StringType side to DoubleType; if the string doesn't contain a number, the cast yields null. So when doing <=> (null-safe equal) against null, it doesn't filter anything, causing the problem reported by this JIRA.
I propose the changes through this PR; can you review my code changes?
This problem only happens for <=>; other operators work fine:
scala> val filteredDF = df.filter(df("column") > (new Column(Literal(null))))
filteredDF: org.apache.spark.sql.DataFrame = [column: string]
scala> filteredDF.show
+------+
|column|
+------+
+------+
scala> val filteredDF = df.filter(df("column") === (new Column(Literal(null))))
filteredDF: org.apache.spark.sql.DataFrame = [column: string]
scala> filteredDF.show
+------+
|column|
+------+
+------+
scala> df.registerTempTable("DF")
scala> sqlContext.sql("select * from DF where 'column' = NULL")
res27: org.apache.spark.sql.DataFrame = [column: string]
scala> res27.show
+------+
|column|
+------+
+------+
Author: Kevin Yu <qyu@us.ibm.com>
Closes #9720 from kevinyu98/working_on_spark-11447.
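
For contrast with the working operators shown above, a hedged sketch of the <=> case the fix targets; the pre-fix behavior is taken from the report, not re-run:
```scala
// Before this fix, the string column was cast to DoubleType, every
// non-numeric value became null, and null <=> null evaluated to true,
// so this filter kept all rows instead of none.
val filteredDF = df.filter(df("column") <=> (new Column(Literal(null))))
filteredDF.show
```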

metadata and add a test for FIXED_LEN_BYTE_ARRAY
As discussed in https://github.com/apache/spark/pull/9660 and https://github.com/apache/spark/pull/9060, I cleaned up unused imports, added a test for fixed-length byte arrays, and used a common function for writing Parquet metadata.
For the fixed-length byte array test, I have checked the encoding types with [parquet-tools](https://github.com/Parquet/parquet-mr/tree/master/parquet-tools).
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #9754 from HyukjinKwon/SPARK-11694-followup.

current_timestamp).
This patch adds an alias for current_timestamp (the now function).
It also fixes SPARK-9196 to re-enable the test case for current_timestamp.
Author: Reynold Xin <rxin@databricks.com>
Closes #9753 from rxin/SPARK-11768.
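
A hedged usage sketch, assuming the alias is exposed through SQL as the description implies:
```scala
// now() should behave exactly like current_timestamp().
sqlContext.sql("SELECT now(), current_timestamp()").show()
```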

Author: Wenchen Fan <wenchen@databricks.com>
Closes #9591 from cloud-fan/agg-test.

This fix changes the equals method of AttributeReference to check all of the specified fields for equality.
Author: gatorsmile <gatorsmile@gmail.com>
Closes #9216 from gatorsmile/namedExpressEqual.

default value
Invoking getters for a type extending AnyVal returns the default value (if the field value is null) instead of throwing an NPE. Please check the comments on the SPARK-11553 issue for more details.
Author: Bartlomiej Alberski <bartlomiej.alberski@allegrogroup.com>
Closes #9642 from alberskib/bugfix/SPARK-11553.
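
A minimal sketch of the underlying behavior; illustrative only, the issue comments have the full details:
```scala
// In Scala, casting null to a primitive silently yields the type's
// default value instead of failing:
val i: Int = null.asInstanceOf[Int]         // 0
val b: Boolean = null.asInstanceOf[Boolean] // false
// A getter backed by such a cast returns the default for a null field,
// which is the surprising behavior this fix addresses.
```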

…ishable
Propagate pushed filters to PhysicalRDD in DataSourceStrategy.apply
Author: Zee Chen <zeechen@us.ibm.com>
Closes #9679 from zeocio/spark-11390.

These two are very similar; we can consolidate them into one.
Also adds tests for it and fixes a bug.
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9729 from cloud-fan/tuple.

JIRA: https://issues.apache.org/jira/browse/SPARK-11743
RowEncoder doesn't support UserDefinedType now. We should add support for it.
Author: Liang-Chi Hsieh <viirya@appier.com>
Closes #9712 from viirya/rowencoder-udt.

code snippet to reproduce it:
```
TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
val t = Timestamp.valueOf("1900-06-11 12:14:50.789")
val us = fromJavaTimestamp(t)
assert(getSeconds(us) === t.getSeconds)
```
It would be good to add a regression test for it, but the reproducing code needs to change the default timezone, and even if we change it back, the `lazy val defaultTimeZone` in `DateTimeUtils` is already fixed.
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9728 from cloud-fan/seconds.

When computing partitions for a non-Parquet relation, `HadoopRDD.compute` is used, but it does not set the thread-local variable `inputFileName` in `NewSqlHadoopRDD` the way `NewSqlHadoopRDD.compute` does. Yet when getting `inputFileName`, `NewSqlHadoopRDD.inputFileName` is expected, which is empty now.
Setting inputFileName in HadoopRDD.compute resolves this issue.
Author: xin Wu <xinwu@us.ibm.com>
Closes #9542 from xwu0226/SPARK-11522.

(embedded types)
Parquet supports some JSON and BSON datatypes. They are represented internally as binary for BSON and string (UTF-8) for JSON.
I searched a bit and found that Apache Drill also supports both in this way, [link](https://drill.apache.org/docs/parquet-format/).
Author: hyukjinkwon <gurwls223@gmail.com>
Author: Hyukjin Kwon <gurwls223@gmail.com>
Closes #9658 from HyukjinKwon/SPARK-11692.

https://issues.apache.org/jira/browse/SPARK-11044
Spark writes Parquet files only with writer version 1, ignoring the writer version given by the user.
So, in this PR, it keeps the writer version if given, or sets version 1 as the default.
Author: hyukjinkwon <gurwls223@gmail.com>
Author: HyukjinKwon <gurwls223@gmail.com>
Closes #9060 from HyukjinKwon/SPARK-11044.
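
A hedged sketch of supplying a writer version through the Hadoop configuration. `parquet.writer.version` is Parquet's standard property for this; whether the patch reads exactly this key is an assumption:
```scala
// Ask Parquet for the version-2 writer; with this fix the setting
// should no longer be silently overridden with version 1.
sc.hadoopConfiguration.set("parquet.writer.version", "PARQUET_2_0")
sqlContext.range(10).write.parquet("/tmp/parquet-v2")
```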

This patch adds the following options to the JSON data source, for dealing with non-standard JSON files:
* `allowComments` (default `false`): ignores Java/C++ style comments in JSON records
* `allowUnquotedFieldNames` (default `false`): allows unquoted JSON field names
* `allowSingleQuotes` (default `true`): allows single quotes in addition to double quotes
* `allowNumericLeadingZeros` (default `false`): allows leading zeros in numbers (e.g. 00012)
To avoid passing a lot of options throughout the json package, I introduced a new JSONOptions case class to define all JSON config options.
Also updated the documentation to explain these options.
Scala
![screen shot 2015-11-15 at 6 12 12 pm](https://cloud.githubusercontent.com/assets/323388/11172965/e3ace6ec-8bc4-11e5-805e-2d78f80d0ed6.png)
Python
![screen shot 2015-11-15 at 6 11 28 pm](https://cloud.githubusercontent.com/assets/323388/11172964/e23ed6ee-8bc4-11e5-8216-312f5983acd5.png)
Author: Reynold Xin <rxin@databricks.com>
Closes #9724 from rxin/SPARK-11745.
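
A hedged usage sketch of the new options on the DataFrame reader; the option names come from the list above, and the file path is hypothetical:
```scala
// Parse non-standard JSON that contains comments and unquoted field names.
val df = sqlContext.read
  .option("allowComments", "true")
  .option("allowUnquotedFieldNames", "true")
  .json("/path/to/nonstandard.json")
```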

LogicalLocalTable in ExistingRDD.scala is replaced by localRelation in LocalRelation.scala?
Do you know any reason why we still keep this class?
Author: gatorsmile <gatorsmile@gmail.com>
Closes #9717 from gatorsmile/LogicalLocalTable.

initialization
On driver process start-up, UserGroupInformation.loginUserFromKeytab is called with the principal and keytab passed in, so the static var UserGroupInformation.loginUser is set to that principal with Kerberos credentials saved in its private credential set, and all threads within the driver process are supposed to see and use these login credentials to authenticate with Hive and Hadoop. However, because of IsolatedClientLoader, the UserGroupInformation class is not shared for Hive metastore clients; it is loaded separately and, of course, not able to see the prepared Kerberos login credentials in the main thread.
The first proposed fix would cause other classloader conflict errors, and is not an appropriate solution. This new change does the Kerberos login during Hive client initialization, which makes the credentials ready for the particular Hive client instance.
yhuai Please take a look and let me know. If you are not the right person to talk to, could you point me to someone responsible for this?
Author: Yu Gao <ygao@us.ibm.com>
Author: gaoyu <gaoyu@gaoyu-macbookpro.roam.corp.google.com>
Author: Yu Gao <crystalgaoyu@gmail.com>
Closes #9272 from yolandagao/master.

https://issues.apache.org/jira/browse/SPARK-11738
Author: Yin Huai <yhuai@databricks.com>
Closes #9718 from yhuai/makingArrayOrderable.

I didn't remove the old Sort operator, since we still use it in randomized tests. I moved it into the test module and renamed it ReferenceSort.
Author: Reynold Xin <rxin@databricks.com>
Closes #9700 from rxin/SPARK-11734.

https://issues.apache.org/jira/browse/SPARK-11736
Author: Yin Huai <yhuai@databricks.com>
Closes #9703 from yhuai/MonotonicallyIncreasingID.

All the physical types are properly tested in `ParquetIOSuite`, but the logical type mapping is not being tested.
Author: hyukjinkwon <gurwls223@gmail.com>
Author: Hyukjin Kwon <gurwls223@gmail.com>
Closes #9660 from HyukjinKwon/SPARK-11694.

Also introduces a new Spark-private API in RDD.scala named `mapPartitionsInternal`, which does not run the closure cleaner on the supplied function.
Author: nitin goyal <nitin.goyal@guavus.com>
Author: nitin.goyal <nitin.goyal@guavus.com>
Closes #9253 from nitin2goyal/master.

Also adds more tests for encoders, and fixes bugs that I found:
* when converting an array to a Catalyst array, we can only skip element conversion for native types (e.g. int, long, boolean), not for every `AtomicType` (String is an AtomicType, but we still need to convert it)
* we should also handle Scala `BigDecimal` when converting from Catalyst `Decimal`
* complex map types should be supported
Other issues still under investigation:
* encoding a Java `BigDecimal` and decoding it back seems to lose precision info
* when encoding a case class defined inside an object, a `ClassNotFound` exception is thrown
I'll remove unused code in a follow-up PR.
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9693 from cloud-fan/split.

* rename `AppendColumn` to `AppendColumns` to be consistent with the physical plan name
* clean up stale comments
* always pass a resolved encoder to `TypedColumn.withInputType` (test added)
* enable a mistakenly disabled Java test
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9688 from cloud-fan/follow.

table.
https://issues.apache.org/jira/browse/SPARK-11678
This PR passes the table's root paths to the partition discovery logic, so partition discovery stops at those root paths instead of going all the way up to the root path of the file system.
Author: Yin Huai <yhuai@databricks.com>
Closes #9651 from yhuai/SPARK-11678.

This PR adds a new method, `reduce`, to `GroupedDataset`, which allows operations similar to `reduceByKey` on a traditional `PairRDD`.
```scala
val ds = Seq("abc", "xyz", "hello").toDS()
ds.groupBy(_.length).reduce(_ + _).collect() // not actually commutative :P
res0: Array(3 -> "abcxyz", 5 -> "hello")
```
While implementing this method and its test cases, several more deficiencies were found in our encoder handling. Specifically, in order to support positional resolution, named resolution, and tuple composition, it is important to keep the unresolved encoder around and to use it when constructing new `Datasets` with the same object type but different output attributes. We now divide the encoder lifecycle into three phases (mirroring the lifecycle of standard expressions) and have checks at various boundaries:
- Unresolved Encoders: all user-facing encoders (those constructed by implicits, static methods, or tuple composition) are unresolved, meaning they have only `UnresolvedAttributes` for named fields and `BoundReferences` for fields accessed by ordinal.
- Resolved Encoders: internal to a `[Grouped]Dataset`, the encoder is resolved, meaning all input has been resolved to a specific `AttributeReference`. Any encoders that are placed into a logical plan for use in object construction should be resolved.
- Bound Encoders: constructed by physical plans, right before the actual row -> object conversion is performed.
It is left to future work to add explicit checks for resolution and provide good error messages when it fails. We might also consider enforcing the above constraints in the type system (i.e. `fromRow` only exists on a `ResolvedEncoder`), but we should probably wait before spending too much time on this.
Author: Michael Armbrust <michael@databricks.com>
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9673 from marmbrus/pr/9628.

switched stddev support from DeclarativeAggregate to ImperativeAggregate.
Author: JihongMa <linlin200605@gmail.com>
Closes #9380 from JihongMA/SPARK-11420.

Parquet supports some unsigned datatypes. However, since Spark does not support unsigned datatypes, it needs to emit an exception with a clear message rather than one saying illegal datatype.
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #9646 from HyukjinKwon/SPARK-10113.

When looking up Hive temporary functions, we should always use the `SessionState` within the execution Hive client, since temporary functions are registered there.
Author: Cheng Lian <lian@databricks.com>
Closes #9664 from liancheng/spark-11191.fix-temp-function.

TungstenProject)
Also makes full outer join able to produce UnsafeRows.
Author: Reynold Xin <rxin@databricks.com>
Closes #9643 from rxin/SPARK-11673.

https://issues.apache.org/jira/browse/SPARK-11661
Author: Yin Huai <yhuai@databricks.com>
Closes #9634 from yhuai/unhandledFilters.

to_unix_timestamp
`to_unix_timestamp` is the deterministic version of `unix_timestamp`, as it accepts at least one parameter.
Since the behavior here is quite similar to `unix_timestamp`, I think a DataFrame API is not necessary here.
Author: Daoyuan Wang <daoyuan.wang@intel.com>
Closes #9347 from adrian-wang/to_unix_timestamp.
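
A hedged SQL usage sketch, assuming Hive-compatible semantics where the explicit time argument is what makes the function deterministic:
```scala
// unix_timestamp() with no arguments returns the current time and is
// therefore non-deterministic; to_unix_timestamp always takes the time
// to convert as an argument.
sqlContext.sql(
  "SELECT to_unix_timestamp('2015-11-16 10:00:00', 'yyyy-MM-dd HH:mm:ss')"
).show()
```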

Author: Reynold Xin <rxin@databricks.com>
Closes #9645 from rxin/SPARK-11675.

This adds a pivot method to the DataFrame API.
Following the lead of cube and rollup, this adds a Pivot operator that is translated into an Aggregate by the analyzer.
Currently the syntax is like:
~~courseSales.pivot(Seq($"year"), $"course", Seq("dotNET", "Java"), sum($"earnings"))~~
~~Would we be interested in the following syntax also/alternatively? and~~
courseSales.groupBy($"year").pivot($"course", "dotNET", "Java").agg(sum($"earnings"))
//or
courseSales.groupBy($"year").pivot($"course").agg(sum($"earnings"))
Later we can add it to `SQLParser`, but as Hive doesn't support it we can't add it there, right?
~~Also what would be the suggested Java friendly method signature for this?~~
Author: Andrew Ray <ray.andrew@gmail.com>
Closes #7841 from aray/sql-pivot.

SparkSubmit tests
This patch aims to reduce the test time and flakiness of HiveSparkSubmitSuite, SparkSubmitSuite, and CliSuite.
Key changes:
- Disable IO synchronization calls for Derby writes, since durability doesn't matter for tests. This was done for HiveCompatibilitySuite in #6651 and resulted in huge test speedups.
- Add a few missing `--conf`s to disable various Spark UIs. The CliSuite, in particular, never disabled these UIs, leaving it prone to port-contention-related flakiness.
- Fix two instances where tests defined `beforeAll()` methods which were never called because the appropriate traits were not mixed in. I updated these test suites to extend `BeforeAndAfterEach` so that they play nicely with our `ResetSystemProperties` trait.
Author: Josh Rosen <joshrosen@databricks.com>
Closes #9623 from JoshRosen/SPARK-11647.
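
A hedged sketch of the kind of settings described. `derby.system.durability=test` is Derby's standard switch for skipping disk syncs, and `spark.ui.enabled` controls the web UI; treating these exact keys as the ones the patch uses is an assumption:
```scala
import org.apache.spark.SparkConf

// Skip Derby's disk syncs (durability doesn't matter in tests) and
// disable the web UI to avoid port contention between suites.
System.setProperty("derby.system.durability", "test")
val conf = new SparkConf().set("spark.ui.enabled", "false")
```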

Author: Reynold Xin <rxin@databricks.com>
Closes #9621 from rxin/SPARK-11645.

Author: Reynold Xin <rxin@databricks.com>
Closes #9618 from rxin/SPARK-11644.

classes
This patch modifies Spark's closure cleaner (and a few other places) to use ASM 5, which is necessary in order to support cleaning of closures that were compiled by Java 8.
In order to avoid ASM dependency conflicts, Spark excludes ASM from all of its dependencies and uses a shaded version of ASM 4 that comes from `reflectasm` (see [SPARK-782](https://issues.apache.org/jira/browse/SPARK-782) and #232). This patch updates Spark to use a shaded version of ASM 5.0.4 that was published by the Apache XBean project; the POM used to create the shaded artifact can be found at https://github.com/apache/geronimo-xbean/blob/xbean-4.4/xbean-asm5-shaded/pom.xml.
http://movingfulcrum.tumblr.com/post/80826553604/asm-framework-50-the-missing-migration-guide was a useful resource while upgrading the code to use the new ASM 5 opcodes.
I also added new regression tests in the `java8-tests` subproject; the existing tests were insufficient to catch this bug, which only affected Scala 2.11 user code that was compiled targeting Java 8.
Author: Josh Rosen <joshrosen@databricks.com>
Closes #9512 from JoshRosen/SPARK-6152.