path: root/sql
Commit message    Author    Age    Files    Lines
* [SPARK-14362][SPARK-14406][SQL][FOLLOW-UP] DDL Native Support: Drop View and ↵gatorsmile2016-04-1011-23/+23
| | | | | | | | | | | | | | Drop Table #### What changes were proposed in this pull request? This PR is to address the comment: https://github.com/apache/spark/pull/12146#discussion-diff-59092238. It removes the function `isViewSupported` from `SessionCatalog`. After the removal, we can still capture user errors if users try to drop a table using `DROP VIEW`. #### How was this patch tested? Modified the existing test cases. Author: gatorsmile <gatorsmile@gmail.com> Closes #12284 from gatorsmile/followupDropTable.
* [SPARK-14419] [MINOR] coding style cleanupDavies Liu2016-04-102-24/+13
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? Making them more consistent. ## How was this patch tested? Existing tests. Author: Davies Liu <davies@databricks.com> Closes #12289 from davies/cleanup_style.
* [SPARK-14415][SQL] All functions should show usages by command `DESC FUNCTION`Dongjoon Hyun2016-04-1028-25/+489
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? Currently, many functions do not show usages, as in the following. ``` scala> sql("desc function extended `sin`").collect().foreach(println) [Function: sin] [Class: org.apache.spark.sql.catalyst.expressions.Sin] [Usage: To be added.] [Extended Usage: To be added.] ``` This PR adds descriptions for functions and adds a test case to prevent adding a function without a usage. ``` scala> sql("desc function extended `sin`").collect().foreach(println); [Function: sin] [Class: org.apache.spark.sql.catalyst.expressions.Sin] [Usage: sin(x) - Returns the sine of x.] [Extended Usage: > SELECT sin(0); 0.0] ``` The only exceptions are `cube`, `grouping`, `grouping_id`, `rollup`, `window`. ## How was this patch tested? Pass the Jenkins tests (including the new test cases). Author: Dongjoon Hyun <dongjoon@apache.org> Closes #12185 from dongjoon-hyun/SPARK-14415.
* [SPARK-14506][SQL] HiveClientImpl's toHiveTable misses a table property for ↵Yin Huai2016-04-092-2/+20
| | | | | | | | | | | | | | | | external tables ## What changes were proposed in this pull request? For an external table's metadata (in Hive's representation), its table type needs to be EXTERNAL_TABLE. Also, a table property called EXTERNAL needs to be set with a value of TRUE (for a MANAGED_TABLE it will be FALSE), based on https://github.com/apache/hive/blob/release-1.2.1/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L1095-L1105. HiveClientImpl's toHiveTable fails to set this table property. ## How was this patch tested? Added a new test. Author: Yin Huai <yhuai@databricks.com> Closes #12275 from yhuai/SPARK-14506.
* [SPARK-14465][BUILD] Checkstyle should check all Java filesDongjoon Hyun2016-04-092-7/+6
| | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? Currently, `checkstyle` is configured to check the files under `src/main/java`. However, Spark has Java files in `src/main/scala`, too. This PR fixes the following configuration in `pom.xml` and the unchecked-so-far violations on those files. ```xml -<sourceDirectory>${basedir}/src/main/java</sourceDirectory> +<sourceDirectories>${basedir}/src/main/java,${basedir}/src/main/scala</sourceDirectories> ``` ## How was this patch tested? After passing the Jenkins build and manually `dev/lint-java`. (Note that Jenkins does not run `lint-java`) Author: Dongjoon Hyun <dongjoon@apache.org> Closes #12242 from dongjoon-hyun/SPARK-14465.
* [SPARK-14217] [SQL] Fix bug if parquet data has columns that use dictionary ↵Nong Li2016-04-092-54/+78
| | | | | | | | | | | | | | | | | | | | | | | | | encoding for some of the data ## What changes were proposed in this pull request? This PR is based on #12017. Currently, this causes batches where some values are dictionary encoded and some are not. The non-dictionary encoded values cause us to remove the dictionary from the batch, causing the first values to return garbage. This patch fixes the issue by first decoding the dictionary for the values that are already dictionary encoded before switching. A similar thing is done for the reverse case where the initial values are not dictionary encoded. ## How was this patch tested? This is difficult to test, but it was replicated on a test cluster using a large tpcds data set. Author: Nong Li <nong@databricks.com> Author: Davies Liu <davies@databricks.com> Closes #12279 from davies/fix_dict.
* [SPARK-14419] [SQL] Improve HashedRelation for key fit within LongDavies Liu2016-04-098-352/+597
| | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? Currently, we use a java HashMap for HashedRelation if the key could fit within a Long. The java HashMap and CompactBuffer are not memory efficient, and the memory used by them is also not accounted accurately. This PR introduces a LongToUnsafeRowMap (similar to BytesToBytesMap) for better memory efficiency and performance. This PR reopens #12190 to fix bugs. ## How was this patch tested? Existing tests. Author: Davies Liu <davies@databricks.com> Closes #12278 from davies/long_map3.
* [SPARK-14362][SPARK-14406][SQL] DDL Native Support: Drop View and Drop Tablegatorsmile2016-04-0916-63/+376
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | #### What changes were proposed in this pull request? This PR is to provide native support for the DDL commands `DROP VIEW` and `DROP TABLE`. The PR includes native parsing and native analysis. Based on the HIVE DDL document for [DROP_VIEW_WEB_LINK](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-DropView), `DROP VIEW` is defined as: **Syntax:** ```SQL DROP VIEW [IF EXISTS] [db_name.]view_name; ``` - to remove metadata for the specified view. - illegal to use DROP TABLE on a view. - illegal to use DROP VIEW on a table. - this command only works in `HiveContext`. In `SQLContext`, we will get an exception. This PR also handles `DROP TABLE`. **Syntax:** ```SQL DROP TABLE [IF EXISTS] table_name [PURGE]; ``` - Previously, the `DROP TABLE` command could only drop Hive tables in `HiveContext`. Now, after this PR, this command can also drop temporary tables, external tables, and external data source tables in `SQLContext`. - In `HiveContext`, we will not issue an exception if the to-be-dropped table does not exist and users did not specify `IF EXISTS`. Instead, we just log an error message. If `IF EXISTS` is specified, we will not issue any error message/exception. - In `SQLContext`, we will issue an exception if the to-be-dropped table does not exist, unless `IF EXISTS` is specified. - Data will not be deleted if the tables are `external`, unless table type is `managed_table`. #### How was this patch tested? For verifying command parsing, added test cases in `spark/sql/hive/HiveDDLCommandSuite.scala`. For verifying command analysis, added test cases in `spark/sql/hive/execution/HiveDDLSuite.scala`. Author: gatorsmile <gatorsmile@gmail.com> Author: xiaoli <lixiao1983@gmail.com> Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local> Closes #12146 from gatorsmile/dropView.
* [SPARK-14481][SQL] Issue Exceptions for All Unsupported Options during Parsinggatorsmile2016-04-096-15/+81
| | | | | | | | | | | | | | | #### What changes were proposed in this pull request? "Not good to slightly ignore all the un-supported options/clauses. We should either support it or throw an exception." A comment from yhuai in another PR https://github.com/apache/spark/pull/12146 - Can `Explain` be an exception? The `Formatted` clause is used in `HiveCompatibilitySuite`. - Two unsupported clauses in `Drop Table` are handled in a separate PR: https://github.com/apache/spark/pull/12146 #### How was this patch tested? Test cases are added to verify all the cases. Author: gatorsmile <gatorsmile@gmail.com> Closes #12255 from gatorsmile/warningToException.
* [SPARK-14335][SQL] Describe function command returns wrong outputYong Tang2016-04-093-21/+86
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? …because some of built-in functions are not in function registry. This fix tries to fix issues in `describe function` command where some of the outputs still shows Hive's function because some built-in functions are not in FunctionRegistry. The following built-in functions have been added to FunctionRegistry: ``` - ! * / & % ^ + < <= <=> = == > >= | ~ and in like not or rlike when ``` The following listed functions are not added, but hard coded in `commands.scala` (hvanhovell): ``` != <> between case ``` Below are the existing result of the above functions that have not been added: ``` spark-sql> describe function `!=`; Function: <> Class: org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNotEqual Usage: a <> b - Returns TRUE if a is not equal to b ``` ``` spark-sql> describe function `<>`; Function: <> Class: org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNotEqual Usage: a <> b - Returns TRUE if a is not equal to b ``` ``` spark-sql> describe function `between`; Function: between Class: org.apache.hadoop.hive.ql.udf.generic.GenericUDFBetween Usage: between a [NOT] BETWEEN b AND c - evaluate if a is [not] in between b and c ``` ``` spark-sql> describe function `case`; Function: case Class: org.apache.hadoop.hive.ql.udf.generic.GenericUDFCase Usage: CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END - When a = b, returns c; when a = d, return e; else return f ``` ## How was this patch tested? Existing tests passed. Additional test cases added. Author: Yong Tang <yong.tang.github@outlook.com> Closes #12128 from yongtang/SPARK-14335.
* Revert "[SPARK-14419] [SQL] Improve HashedRelation for key fit within Long"Davies Liu2016-04-098-633/+346
| | | | This reverts commit 90c0a04506a4972b7a2ac2b7dda0c5f8509a6e2f.
* [SPARK-14496][SQL] fix some javadoc typosbomeng2016-04-092-2/+2
| | | | | | | | | | | | | ## What changes were proposed in this pull request? Minor issues. Found 2 typos while browsing the code. ## How was this patch tested? None. Author: bomeng <bmeng@us.ibm.com> Closes #12264 from bomeng/SPARK-14496.
* [SPARK-14419] [SQL] Improve HashedRelation for key fit within LongDavies Liu2016-04-098-346/+633
| | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? Currently, we use a java HashMap for HashedRelation if the key could fit within a Long. The java HashMap and CompactBuffer are not memory efficient, and the memory used by them is also not accounted accurately. This PR introduces a LongToUnsafeRowMap (similar to BytesToBytesMap) for better memory efficiency and performance. ## How was this patch tested? Updated existing tests. Author: Davies Liu <davies@databricks.com> Closes #12190 from davies/long_map2.
* [SPARK-14451][SQL] Move encoder definition into Aggregator interfaceReynold Xin2016-04-094-76/+102
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? When we first introduced Aggregators, we required the user of Aggregators to (implicitly) specify the encoders. It would actually make more sense to have the encoders be specified by the implementation of Aggregators, since each implementation should have the most state about how to encode its own data type. Note that this simplifies the Java API because Java users no longer need to explicitly specify encoders for aggregators. ## How was this patch tested? Updated unit tests. Author: Reynold Xin <rxin@databricks.com> Closes #12231 from rxin/SPARK-14451.
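To make the new contract concrete, here is a minimal sketch of an Aggregator that supplies its own encoders; the `bufferEncoder`/`outputEncoder` method names follow the final Spark 2.0 API and the example itself is hypothetical:

```scala
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

// Sketch: the Aggregator itself declares how its buffer and output are
// encoded, so callers no longer pass encoders explicitly.
object LongSum extends Aggregator[Long, Long, Long] {
  def zero: Long = 0L
  def reduce(buffer: Long, value: Long): Long = buffer + value
  def merge(b1: Long, b2: Long): Long = b1 + b2
  def finish(buffer: Long): Long = buffer
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}
```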
* [SPARK-14482][SQL] Change default Parquet codec from gzip to snappyReynold Xin2016-04-084-33/+65
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? Based on our tests, gzip decompression is very slow (< 100MB/s), making queries decompression bound. Snappy can decompress at ~ 500MB/s on a single core. This patch changes the default compression codec for Parquet output from gzip to snappy, and also introduces a ParquetOptions class to be more consistent with other data sources (e.g. CSV, JSON). ## How was this patch tested? Should be covered by existing unit tests. Author: Reynold Xin <rxin@databricks.com> Closes #12256 from rxin/SPARK-14482.
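For users who prefer the previous behavior, here is a hedged sketch of switching the codec back, assuming the `spark.sql.parquet.compression.codec` setting and the per-write `compression` option behave as described above (`df` is a hypothetical DataFrame):

```scala
// Restore gzip globally (sketch; the default after this patch is snappy).
sqlContext.setConf("spark.sql.parquet.compression.codec", "gzip")

// Or choose a codec for a single write via the data source option.
df.write.option("compression", "gzip").parquet("/tmp/events_gzip")
```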
* [SPARK-14498][ML][PYTHON][SQL] Many cleanups to ML and ML-related docsJoseph K. Bradley2016-04-081-0/+4
| | | | | | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? Cleanups to documentation. No changes to code. * GBT docs: Move Scala doc for private object GradientBoostedTrees to public docs for GBTClassifier,Regressor * GLM regParam: needs doc saying it is for L2 only * TrainValidationSplitModel: add .. versionadded:: 2.0.0 * Rename "_transformer_params_from_java" to "_transfer_params_from_java" * LogReg Summary classes: "probability" col should not say "calibrated" * LR summaries: coefficientStandardErrors -> document that intercept stderr comes last. Same for t,p-values * approxCountDistinct: Document meaning of "rsd" argument. * LDA: note which params are for online LDA only ## How was this patch tested? Doc build Author: Joseph K. Bradley <joseph@databricks.com> Closes #12266 from jkbradley/ml-doc-cleanups.
* [SPARK-14454] Better exception handling while marking tasks as failedSameer Agarwal2016-04-081-37/+34
| | | | | | | | | | | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? This patch adds support for better handling of exceptions inside catch blocks if the code within the block throws an exception. For instance, here is the code in a catch block before this change in `WriterContainer.scala`: ```scala logError("Aborting task.", cause) // call failure callbacks first, so we could have a chance to cleanup the writer. TaskContext.get().asInstanceOf[TaskContextImpl].markTaskFailed(cause) if (currentWriter != null) { currentWriter.close() } abortTask() throw new SparkException("Task failed while writing rows.", cause) ``` If `markTaskFailed` or `currentWriter.close` throws an exception, we currently lose the original cause. This PR fixes this problem by implementing a utility function `Utils.tryWithSafeCatch` that suppresses (via `Throwable.addSuppressed`) the exceptions thrown within the catch block and rethrows the original exception. ## How was this patch tested? No new functionality added. Author: Sameer Agarwal <sameer@databricks.com> Closes #12234 from sameeragarwal/fix-exception.
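The suppression pattern itself is small; below is a self-contained sketch of the idea (not Spark's actual `Utils` helper): if cleanup code run while handling a failure throws, attach that secondary failure with `addSuppressed` instead of losing the original cause.

```scala
// Minimal sketch of the "safe catch" idea described above.
def runWithSafeCleanup[T](body: => T)(cleanup: Throwable => Unit): T = {
  try {
    body
  } catch {
    case original: Throwable =>
      try {
        cleanup(original) // e.g. mark the task failed / close the writer
      } catch {
        case fromCleanup: Throwable =>
          // Keep the original cause; record the cleanup failure alongside it.
          original.addSuppressed(fromCleanup)
      }
      throw original
  }
}
```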
* [SPARK-14435][BUILD] Shade Kryo in our custom Hive 1.2.1 forkJosh Rosen2016-04-082-34/+11
| | | | | | | | | | | | | | | | | | This patch updates our custom Hive 1.2.1 fork in order to shade Kryo in Hive. This is a blocker for upgrading Spark to use Kryo 3 (see #12076). The source for this new fork of Hive can be found at https://github.com/JoshRosen/hive/tree/release-1.2.1-spark2 Here's the complete diff from the official Hive 1.2.1 release: https://github.com/apache/hive/compare/release-1.2.1...JoshRosen:release-1.2.1-spark2 Here's the diff from the sources that pwendell used to publish the current `1.2.1.spark` release of Hive: https://github.com/pwendell/hive/compare/release-1.2.1-spark...JoshRosen:release-1.2.1-spark2. This diff looks large because his branch used a shell script to rewrite the groupId, whereas I had to commit the groupId changes in order to prevent the find-and-replace from affecting the package names in our relocated Kryo classes: https://github.com/pwendell/hive/compare/release-1.2.1-spark...JoshRosen:release-1.2.1-spark2#diff-6ada9aaec70e069df8f2c34c5519dd1e Using these changes, I was able to publish a local version of Hive and verify that this change fixes the test failures which are blocking #12076. Note that this PR will not compile until we complete the review of the Hive POM changes and stage and publish a release. /cc vanzin, steveloughran, and pwendell for review. Author: Josh Rosen <joshrosen@databricks.com> Closes #12215 from JoshRosen/shade-kryo-in-hive.
* [SPARK-14394][SQL] Generate AggregateHashMap class for LongTypes during ↵Sameer Agarwal2016-04-082-3/+210
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | TungstenAggregate codegen ## What changes were proposed in this pull request? This PR adds support for generating the `AggregateHashMap` class in `TungstenAggregate` if the aggregate group by keys/value are of `LongType`. Note that currently this generate aggregate is not actually used. NB: This currently only supports `LongType` keys/values (please see `isAggregateHashMapSupported` in `TungstenAggregate`) and will be generalized to other data types in a subsequent PR. ## How was this patch tested? Manually inspected the generated code. This is what the generated map looks like for 2 keys: ```java /* 068 */ public class agg_GeneratedAggregateHashMap { /* 069 */ private org.apache.spark.sql.execution.vectorized.ColumnarBatch batch; /* 070 */ private int[] buckets; /* 071 */ private int numBuckets; /* 072 */ private int maxSteps; /* 073 */ private int numRows = 0; /* 074 */ private org.apache.spark.sql.types.StructType schema = /* 075 */ new org.apache.spark.sql.types.StructType() /* 076 */ .add("k1", org.apache.spark.sql.types.DataTypes.LongType) /* 077 */ .add("k2", org.apache.spark.sql.types.DataTypes.LongType) /* 078 */ .add("sum", org.apache.spark.sql.types.DataTypes.LongType); /* 079 */ /* 080 */ public agg_GeneratedAggregateHashMap(int capacity, double loadFactor, int maxSteps) { /* 081 */ assert (capacity > 0 && ((capacity & (capacity - 1)) == 0)); /* 082 */ this.maxSteps = maxSteps; /* 083 */ numBuckets = (int) (capacity / loadFactor); /* 084 */ batch = org.apache.spark.sql.execution.vectorized.ColumnarBatch.allocate(schema, /* 085 */ org.apache.spark.memory.MemoryMode.ON_HEAP, capacity); /* 086 */ buckets = new int[numBuckets]; /* 087 */ java.util.Arrays.fill(buckets, -1); /* 088 */ } /* 089 */ /* 090 */ public agg_GeneratedAggregateHashMap() { /* 091 */ new agg_GeneratedAggregateHashMap(1 << 16, 0.25, 5); /* 092 */ } /* 093 */ /* 094 */ public org.apache.spark.sql.execution.vectorized.ColumnarBatch.Row findOrInsert(long agg_key, long agg_key1) { /* 095 */ long h = hash(agg_key, agg_key1); /* 096 */ int step = 0; /* 097 */ int idx = (int) h & (numBuckets - 1); /* 098 */ while (step < maxSteps) { /* 099 */ // Return bucket index if it's either an empty slot or already contains the key /* 100 */ if (buckets[idx] == -1) { /* 101 */ batch.column(0).putLong(numRows, agg_key); /* 102 */ batch.column(1).putLong(numRows, agg_key1); /* 103 */ batch.column(2).putLong(numRows, 0); /* 104 */ buckets[idx] = numRows++; /* 105 */ return batch.getRow(buckets[idx]); /* 106 */ } else if (equals(idx, agg_key, agg_key1)) { /* 107 */ return batch.getRow(buckets[idx]); /* 108 */ } /* 109 */ idx = (idx + 1) & (numBuckets - 1); /* 110 */ step++; /* 111 */ } /* 112 */ // Didn't find it /* 113 */ return null; /* 114 */ } /* 115 */ /* 116 */ private boolean equals(int idx, long agg_key, long agg_key1) { /* 117 */ return batch.column(0).getLong(buckets[idx]) == agg_key && batch.column(1).getLong(buckets[idx]) == agg_key1; /* 118 */ } /* 119 */ /* 120 */ // TODO: Improve this Hash Function /* 121 */ private long hash(long agg_key, long agg_key1) { /* 122 */ return agg_key ^ agg_key1; /* 123 */ } /* 124 */ /* 125 */ } ``` Author: Sameer Agarwal <sameer@databricks.com> Closes #12161 from sameeragarwal/tungsten-aggregate.
* [SPARK-14448] Improvements to ColumnVectortedyu2016-04-082-22/+36
| | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? In this PR, two changes are proposed for ColumnVector: 1. ColumnVector should be declared as implementing AutoCloseable - it already has a close() method. 2. In OnHeapColumnVector#reserveInternal(), we only need to allocate a new array when the existing array is null or its length is shorter than newCapacity. ## How was this patch tested? Existing unit tests. Author: tedyu <yuzhihong@gmail.com> Closes #12225 from tedyu/master.
* [SPARK-14402][HOTFIX] Fix ExpressionDescription annotationJacek Laskowski2016-04-081-3/+3
| | | | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? Fix for the error introduced in https://github.com/apache/spark/commit/c59abad052b7beec4ef550049413e95578e545be: ``` /Users/jacek/dev/oss/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala:626: error: annotation argument needs to be a constant; found: "_FUNC_(str) - ".+("Returns str, with the first letter of each word in uppercase, all other letters in ").+("lowercase. Words are delimited by white space.") "Returns str, with the first letter of each word in uppercase, all other letters in " + ^ ``` ## How was this patch tested? Local build Author: Jacek Laskowski <jacek@japila.pl> Closes #12192 from jaceklaskowski/SPARK-14402-HOTFIX.
* [SPARK-14189][SQL] JSON data sources find compatible types even if inferred ↵hyukjinkwon2016-04-082-0/+30
| | | | | | | | | | | | | | | | | | | | | | | | | decimal type is not capable of the others ## What changes were proposed in this pull request? https://issues.apache.org/jira/browse/SPARK-14189 When the types inferred for the same field while finding a compatible `DataType` are `IntegralType` and `DecimalType`, but the `DecimalType` cannot hold the given `IntegralType`, the JSON data source simply fails to find a compatible type, resulting in `StringType`. This can be observed when `prefersDecimal` is enabled. ```scala def mixedIntegerAndDoubleRecords: RDD[String] = sqlContext.sparkContext.parallelize( """{"a": 3, "b": 1.1}""" :: """{"a": 3.1, "b": 1}""" :: Nil) val jsonDF = sqlContext.read .option("prefersDecimal", "true") .json(mixedIntegerAndDoubleRecords) .printSchema() ``` - **Before** ``` root |-- a: string (nullable = true) |-- b: string (nullable = true) ``` - **After** ``` root |-- a: decimal(21, 1) (nullable = true) |-- b: decimal(21, 1) (nullable = true) ``` (Note that integer is inferred as `LongType` which becomes `DecimalType(20, 0)`.) ## How was this patch tested? Unit tests were used, plus style tests via `dev/run_tests`. Author: hyukjinkwon <gurwls223@gmail.com> Closes #11993 from HyukjinKwon/SPARK-14189.
* [SPARK-14103][SQL] Parse unescaped quotes in CSV data source.hyukjinkwon2016-04-084-1/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? This PR resolves the problem during parsing unescaped quotes in input data. For example, currently the data below: ``` "a"b,ccc,ddd e,f,g ``` produces a data below: - **Before** ```bash ["a"b,ccc,ddd[\n]e,f,g] <- as a value. ``` - **After** ```bash ["a"b], [ccc], [ddd] [e], [f], [g] ``` This PR bumps up the Univocity parser's version. This was fixed in `2.0.2`, https://github.com/uniVocity/univocity-parsers/issues/60. ## How was this patch tested? Unit tests in `CSVSuite` and `sbt/sbt scalastyle`. Author: hyukjinkwon <gurwls223@gmail.com> Closes #12226 from HyukjinKwon/SPARK-14103-quote.
* Replace getLocalizedMessage with just normal toString in exception handling ↵Reynold Xin2016-04-071-1/+1
| | | | in WriterContainer.
* [SPARK-14270][SQL] whole stage codegen support for typed filterWenchen Fan2016-04-0711-15/+342
| | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? We implement typed filter by `MapPartitions`, which doesn't work well with whole stage codegen. This PR uses `Filter` to implement typed filter, and we get whole stage codegen support for free. This PR also introduces `DeserializeToObject` and `SerializeFromObject`, to separate serialization logic from the object operators, so that it's easier to write optimization rules for adjacent object operators. ## How was this patch tested? Existing tests. Author: Wenchen Fan <wenchen@databricks.com> Closes #12061 from cloud-fan/whole-stage-codegen.
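As an illustration (hypothetical snippet, assuming the usual shell implicits), a typed filter like the one below is now planned as a `Filter` and can participate in whole stage codegen instead of going through `MapPartitions`:

```scala
import sqlContext.implicits._

val ds = Seq(1, 2, 3, 4, 5, 6).toDS()
val evens = ds.filter(_ % 2 == 0)  // typed filter on the Dataset
evens.explain()                    // sketch: the predicate should appear under a WholeStageCodegen stage
```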
* [SPARK-14410][SQL] Push functions existence check into catalogAndrew Or2016-04-0713-114/+126
| | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? This is a followup to #12117 and addresses some of the TODOs introduced there. In particular, the resolution of database is now pushed into session catalog, which knows about the current database. Further, the logic for checking whether a function exists is pushed into the external catalog. No change in functionality is expected. ## How was this patch tested? `SessionCatalogSuite`, `DDLSuite` Author: Andrew Or <andrew@databricks.com> Closes #12198 from andrewor14/function-exists.
* [SPARK-12740] [SPARK-13932] support grouping()/grouping_id() in having/order ↵Davies Liu2016-04-073-56/+211
| | | | | | | | | | | | | | | | | | | | | | clause ## What changes were proposed in this pull request? This PR brings support for using grouping()/grouping_id() in the HAVING/ORDER BY clause. The resolved grouping()/grouping_id() will be replaced by an unresolved "spark_grouping_id" virtual attribute, then resolved by ResolveMissingAttribute. This PR also fixes the case where the HAVING clause accesses a grouping column that is not present in the SELECT clause, for example: ```sql select count(1) from (select 1 as a) t group by a having a > 0 ``` ## How was this patch tested? Added new tests. Author: Davies Liu <davies@databricks.com> Closes #12235 from davies/grouping_having.
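A hypothetical query exercising the new support (table and column names are made up):

```scala
// Sketch: grouping()/grouping_id() can now appear in HAVING and ORDER BY.
sqlContext.sql(
  """SELECT city, car_model, sum(quantity) AS total
    |FROM dealer
    |GROUP BY city, car_model WITH ROLLUP
    |HAVING grouping(car_model) = 0
    |ORDER BY grouping_id(city, car_model), city""".stripMargin)
```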
* [SPARK-14456][SQL][MINOR] Remove unused variables and logics in DataSourceKousuke Saruta2016-04-071-10/+0
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? In DataSource#write method, the variables `dataSchema` and `equality`, and related logics are no longer used. Let's remove them. ## How was this patch tested? Existing tests. Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp> Closes #12237 from sarutak/SPARK-14456.
* [SQL][TESTS] Fix for flaky test in ContinuousQueryManagerSuiteTathagata Das2016-04-071-2/+2
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? The timeouts were lower than the other timeouts in the test. Other tests were stable over the last month. ## How was this patch tested? Jenkins tests. Author: Tathagata Das <tathagata.das1565@gmail.com> Closes #12219 from tdas/flaky-test-fix.
* [SPARK-10063][SQL] Remove DirectParquetOutputCommitterReynold Xin2016-04-075-191/+5
| | | | | | | | | | | | ## What changes were proposed in this pull request? This patch removes DirectParquetOutputCommitter. This was initially created by Databricks as a faster way to write Parquet data to S3. However, given how the underlying S3 Hadoop implementation works, this committer only works when there are no failures. If there are multiple attempts of the same task (e.g. speculation or task failures or node failures), the output data can be corrupted. I don't think this performance optimization outweighs the correctness issue. ## How was this patch tested? Removed the related tests also. Author: Reynold Xin <rxin@databricks.com> Closes #12229 from rxin/SPARK-10063.
* [SPARK-14452][SQL] Explicit APIs in Scala for specifying encodersReynold Xin2016-04-073-236/+327
| | | | | | | | | | | | ## What changes were proposed in this pull request? The Scala Dataset public API currently only allows users to specify encoders through SQLContext.implicits. This is OK but sometimes people want to explicitly get encoders without a SQLContext (e.g. Aggregator implementations). This patch adds public APIs to Encoders class for getting Scala encoders. ## How was this patch tested? None - I will update test cases once https://github.com/apache/spark/pull/12231 is merged. Author: Reynold Xin <rxin@databricks.com> Closes #12232 from rxin/SPARK-14452.
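A brief sketch of what the explicit API enables; the factory method names follow the final Spark 2.0 `Encoders` API, so treat the exact names as an assumption here:

```scala
import org.apache.spark.sql.Encoders

// Obtain encoders without importing sqlContext.implicits._ (e.g. inside an Aggregator).
val longEncoder = Encoders.scalaLong                                  // Encoder[Long]
val pairEncoder = Encoders.tuple(Encoders.scalaInt, Encoders.STRING)  // Encoder[(Int, String)]
```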
* [SPARK-14134][CORE] Change the package name used for shading classes.Marcelo Vanzin2016-04-061-2/+1
| | | | | | | | | | | | | | | The current package name uses a dash, which is a little weird but seemed to work. That is, until a new test tried to mock a class that references one of those shaded types, and then things started failing. Most changes are just noise to fix the logging configs. For reference, SPARK-8815 also raised this issue, although at the time it did not cause any issues in Spark, so it was not addressed. Author: Marcelo Vanzin <vanzin@cloudera.com> Closes #11941 from vanzin/SPARK-14134.
* [SPARK-12610][SQL] Left Anti JoinHerman van Hovell2016-04-0616-108/+231
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ### What changes were proposed in this pull request? This PR adds support for `LEFT ANTI JOIN` to Spark SQL. A `LEFT ANTI JOIN` is the exact opposite of a `LEFT SEMI JOIN` and can be used to identify rows in one dataset that are not in another dataset. Note that `nulls` on the left side of the join cannot match a row on the right hand side of the join; the result is that left anti join will always select a row with a `null` in one or more of its keys. We currently add support for the following SQL join syntax: SELECT * FROM tbl1 A LEFT ANTI JOIN tbl2 B ON A.Id = B.Id Or using a dataframe: tbl1.as("a").join(tbl2.as("b"), $"a.id" === $"b.id", "left_anti") This PR serves as the basis for implementing `NOT EXISTS` and `NOT IN (...)` correlated sub-queries. It would also serve as a good basis for implementing a more efficient `EXCEPT` operator. The PR has been (loosely) based on PRs by both davies (https://github.com/apache/spark/pull/10706) and chenghao-intel (https://github.com/apache/spark/pull/10563); credit should be given where credit is due. This PR adds support for `LEFT ANTI JOIN` to `BroadcastHashJoin` (including code generation), `ShuffledHashJoin` and `BroadcastNestedLoopJoin`. ### How was this patch tested? Added tests to `JoinSuite` and ported `ExistenceJoinSuite` from https://github.com/apache/spark/pull/10563. cc davies chenghao-intel rxin Author: Herman van Hovell <hvanhovell@questtec.nl> Closes #12214 from hvanhovell/SPARK-12610.
* [SPARK-12555][SQL] Result should not be corrupted after input columns are ↵Luciano Resende2016-04-071-0/+19
| | | | | | | | | | reordered This PR adds the test case described in SPARK-12555 to validate that correct data is returned when input data is reordered, and to avoid future regressions. Author: Luciano Resende <lresende@apache.org> Closes #11623 from lresende/SPARK-12555.
* [SPARK-14436][SQL] Make JavaDatasetAggregatorSuiteBase public.Marcelo Vanzin2016-04-062-53/+83
| | | | | | | | | | Without this, unit tests that extend that class fail for me locally on maven, because JUnit tries to run methods in that class and gets an IllegalAccessError. Author: Marcelo Vanzin <vanzin@cloudera.com> Closes #12212 from vanzin/SPARK-14436.
* [SPARK-14444][BUILD] Add a new scalastyle `NoScalaDoc` to prevent ↵Dongjoon Hyun2016-04-067-26/+28
| | | | | | | | | | | | | | | | | | | | | ScalaDoc-style multiline comments ## What changes were proposed in this pull request? According to the [Spark Code Style Guide](https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide#SparkCodeStyleGuide-Indentation), this PR adds a new scalastyle rule to prevent the followings. ``` /** In Spark, we don't use the ScalaDoc style so this * is not correct. */ ``` ## How was this patch tested? Pass the Jenkins tests (including `lint-scala`). Author: Dongjoon Hyun <dongjoon@apache.org> Closes #12221 from dongjoon-hyun/SPARK-14444.
* [SPARK-14224] [SPARK-14223] [SPARK-14310] [SQL] fix RowEncoder and parquet ↵Davies Liu2016-04-0613-234/+267
| | | | | | | | | | | | | | | | | | | | | reader for wide table ## What changes were proposed in this pull request? 1) Fix the RowEncoder for wide tables (many columns) by splitting the generated code into multiple functions. 2) Separate DataSourceScan into RowDataSourceScan and BatchedDataSourceScan. 3) Disable returning a columnar batch in the parquet reader if there are many columns. 4) Added an internal config for the maximum number of fields (nested columns) supported by whole stage codegen. Closes #12098 ## How was this patch tested? Added a test for a table with 1000 columns. Author: Davies Liu <davies@databricks.com> Closes #12047 from davies/many_columns.
* [SPARK-14382][SQL] QueryProgress should be post after committedOffsets is ↵Shixiong Zhu2016-04-062-12/+6
| | | | | | | | | | | | | | | | | updated ## What changes were proposed in this pull request? Make sure QueryProgress is posted after committedOffsets is updated. If QueryProgress is posted before committedOffsets is updated, the listener may see a wrong sinkStatus (created from committedOffsets). See https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-hadoop-2.2/644/testReport/junit/org.apache.spark.sql.util/ContinuousQueryListenerSuite/single_listener/ for an example of the failure. ## How was this patch tested? Existing unit tests. Author: Shixiong Zhu <shixiong@databricks.com> Closes #12155 from zsxwing/SPARK-14382.
* [SPARK-14320][SQL] Make ColumnarBatch.Row mutableSameer Agarwal2016-04-065-8/+135
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? In order to leverage a data structure like `AggregateHashMap` (https://github.com/apache/spark/pull/12055) to speed up aggregates with keys, we need to make `ColumnarBatch.Row` mutable. ## How was this patch tested? Unit test in `ColumnarBatchSuite`. Also, tested via `BenchmarkWholeStageCodegen`. Author: Sameer Agarwal <sameer@databricks.com> Closes #12103 from sameeragarwal/mutable-row.
* [SPARK-14383][SQL] missing "|" in the g4 filebomeng2016-04-062-1/+8
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? A very trivial one. It missed "|" between DISTRIBUTE and UNSET. ## How was this patch tested? I do not think it is really needed. Author: bomeng <bmeng@us.ibm.com> Closes #12156 from bomeng/SPARK-14383.
* [SPARK-14429][SQL] Improve LIKE pattern in "SHOW TABLES / FUNCTIONS LIKE ↵bomeng2016-04-065-24/+46
| | | | | | | | | | | | | | | | | | | | | | <pattern>" DDL LIKE <pattern> is commonly used in SHOW TABLES / FUNCTIONS and other DDL. In the pattern, users can use `|` or `*` as wildcards. 1. Currently, we used `replaceAll()` to replace `*` with `.*`, but the replacement was scattered in several places; I have created a utility method and used it in all those places. 2. Consistency with Hive: the pattern is case insensitive in Hive and white spaces will be trimmed, but the current pattern matching does not do that. For example, suppose we have tables (t1, t2, t3), `SHOW TABLES LIKE ' T* ' ` will list all the t-tables. Please use Hive to verify it. 3. Combined with `|`, the result will be sorted. For a pattern like `' B*|a* '`, it will list the results in a-b order. I've made some changes to the utility method to make sure we will get the same result as Hive does. A new method was created in StringUtil and test cases were added. andrewor14 Author: bomeng <bmeng@us.ibm.com> Closes #12206 from bomeng/SPARK-14429.
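A hypothetical illustration of the pattern rules described above (table names are made up):

```scala
// Sketch: after this change the pattern is trimmed and matched case-insensitively, as in Hive.
sqlContext.sql("SHOW TABLES LIKE ' T* '").show()   // matches t1, t2, t3
sqlContext.sql("SHOW TABLES LIKE 'b*|a*'").show()  // '|' alternation; results come back sorted
```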
* [SPARK-14426][SQL] Merge PerserUtils and ParseUtilsKousuke Saruta2016-04-064-137/+143
| | | | | | | | | | | | | | | | | | ## What changes were proposed in this pull request? We have ParserUtils and ParseUtils, which are both utility collections for use during the parsing process. Their names and what they are used for are very similar, so I think we can merge them. Also, the original unescapeSQLString method may have a fault: when "\u0061" style character literals are passed to the method, they are not unescaped successfully. This patch fixes the bug. ## How was this patch tested? Added a new test case. Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp> Closes #12199 from sarutak/merge-ParseUtils-and-ParserUtils.
* [SPARK-14288][SQL] Memory Sink for streamingMichael Armbrust2016-04-065-18/+159
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This PR exposes the internal testing `MemorySink` through the data source API. This will allow users to easily test streaming applications in the Spark shell or other local tests. Usage: ```scala inputStream.write .format("memory") .queryName("memStream") .startStream() // Now you can query the result of the stream here. sqlContext.table("memStream") ``` The most complicated part of the logic is choosing the checkpoint directory. There are a few requirements we are attempting to satisfy here: - when working in the shell locally, it should just work with no extra configuration. - when working on a cluster you should be able to make it easily create the checkpoint on a distributed file system so you can test aggregation (state checkpoints are also stored in this directory and must be accessible from workers). - it should be clear that you can't resume since the data is just in memory. The chosen algorithm proceeds as follows: - if the user gives a checkpoint directory, use it - if the conf has a checkpoint location, use `$location/$queryName` - if neither, create a local directory - always check to make sure there are no offsets written to the directory Author: Michael Armbrust <michael@databricks.com> Closes #12119 from marmbrus/memorySink.
* [SPARK-14396][BUILD][HOT] Fix compilation against Scala 2.10gatorsmile2016-04-061-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | #### What changes were proposed in this pull request? This PR is to fix the compilation errors in Scala 2.10 build, as shown in the link: https://amplab.cs.berkeley.edu/jenkins/job/spark-master-compile-maven-scala-2.10/735/console ``` [error] /home/jenkins/workspace/spark-master-compile-maven-scala-2.10/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveDDLCommandSuite.scala:266: value contains is not a member of Option[String] [error] assert(desc.viewText.contains("SELECT * FROM tab1")) [error] ^ [error] /home/jenkins/workspace/spark-master-compile-maven-scala-2.10/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveDDLCommandSuite.scala:267: value contains is not a member of Option[String] [error] assert(desc.viewOriginalText.contains("SELECT * FROM tab1")) [error] ^ [error] /home/jenkins/workspace/spark-master-compile-maven-scala-2.10/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveDDLCommandSuite.scala:293: value contains is not a member of Option[String] [error] assert(desc.viewText.contains("SELECT * FROM tab1")) [error] ^ [error] /home/jenkins/workspace/spark-master-compile-maven-scala-2.10/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveDDLCommandSuite.scala:294: value contains is not a member of Option[String] [error] assert(desc.viewOriginalText.contains("SELECT * FROM tab1")) [error] ^ [error] four errors found [error] Compile failed at Apr 5, 2016 10:59:09 PM [10.502s] ``` #### How was this patch tested? Not sure how to trigger Scala 2.10 compilation in the test environment. Author: gatorsmile <gatorsmile@gmail.com> Closes #12201 from gatorsmile/buildBreak2.10.
* [SPARK-14396][SQL] Throw Exceptions for DDLs of Partitioned Viewsgatorsmile2016-04-055-44/+94
| | | | | | | | | | | | | | | | | | | | | | | | | | | #### What changes were proposed in this pull request? Because the concept of partitioning is associated with physical tables, we disable all the supports of partitioned views, which are defined in the following three commands in [Hive DDL Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/Drop/AlterView): ``` ALTER VIEW view DROP [IF EXISTS] PARTITION spec1[, PARTITION spec2, ...]; ALTER VIEW view ADD [IF NOT EXISTS] PARTITION spec; CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ] [COMMENT view_comment] [TBLPROPERTIES (property_name = property_value, ...)] AS SELECT ...; ``` An exception is thrown when users issue any of these three DDL commands. #### How was this patch tested? Added test cases for parsing create view and changed the existing test cases to verify if the exceptions are thrown. Author: gatorsmile <gatorsmile@gmail.com> Author: xiaoli <lixiao1983@gmail.com> Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local> Closes #12169 from gatorsmile/viewPartition.
* [SPARK-14128][SQL] Alter table DDL followupAndrew Or2016-04-052-5/+21
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? This is just a followup to #12121, which implemented the alter table DDLs using the `SessionCatalog`. Specifically, this corrects the behavior of setting the location of a datasource table. For datasource tables, we need to set the `locationUri` in addition to the `path` entry in the serde properties. Additionally, changing the location of a datasource table partition is not allowed. ## How was this patch tested? `DDLSuite` Author: Andrew Or <andrew@databricks.com> Closes #12186 from andrewor14/alter-table-ddl-followup.
* [SPARK-14296][SQL] whole stage codegen support for Dataset.mapWenchen Fan2016-04-0611-41/+247
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? This PR adds a new operator `MapElements` for `Dataset.map`, it's a 1-1 mapping and is easier to adapt to whole stage codegen framework. ## How was this patch tested? new test in `WholeStageCodegenSuite` Author: Wenchen Fan <wenchen@databricks.com> Closes #12087 from cloud-fan/map.
* [SPARK-14359] Unit tests for java 8 lambda syntax with typed aggregatesEric Liang2016-04-051-41/+45
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? Adds unit tests for java 8 lambda syntax with typed aggregates as a follow-up to #12168 ## How was this patch tested? Unit tests. Author: Eric Liang <ekl@databricks.com> Closes #12181 from ericl/sc-2794-2.
* [SPARK-529][SQL] Modify SQLConf to use new config API from core.Marcelo Vanzin2016-04-055-495/+420
| | | | | | | | | | | | Because SQL keeps track of all known configs, some customization was needed in SQLConf to allow that, since the core API does not have that feature. Tested via existing (and slightly updated) unit tests. Author: Marcelo Vanzin <vanzin@cloudera.com> Closes #11570 from vanzin/SPARK-529-sql.
* [SPARK-14411][SQL] Add a note to warn that onQueryProgress is asynchronousShixiong Zhu2016-04-051-2/+10
| | | | | | | | | | | | | | ## What changes were proposed in this pull request? onQueryProgress is asynchronous, so the user may see some future status of `ContinuousQuery`. This PR just updates the comments to warn about it. ## How was this patch tested? Only updated comments. Author: Shixiong Zhu <shixiong@databricks.com> Closes #12180 from zsxwing/ContinuousQueryListener-doc.