author    wangzhenhua <wangzhenhua@huawei.com>    2017-04-14 19:16:47 +0800
committer Wenchen Fan <wenchen@databricks.com>    2017-04-14 19:16:47 +0800
commit    fb036c4413c2cd4d90880d080f418ec468d6c0fc (patch)
tree      8d155d76971538e5ffd6c1f5262653ae813646ce /sql/core/src/main/scala/org/apache
parent    7536e2849df6d63587fbf16b4ecb5db06fed7125 (diff)
[SPARK-20318][SQL] Use Catalyst type for min/max in ColumnStat for ease of estimation
## What changes were proposed in this pull request?

Currently, when estimating predicates like `col > literal` or `col = literal`, we update min or max in the column stats based on the literal value. However, the literal value is of Catalyst type (internal type), while min/max are of external type, so for the next predicate we again need a type conversion to compare and update the column stats. This is awkward and causes many unnecessary conversions during estimation.

To solve this, we use the Catalyst type for min/max in `ColumnStat`. Note that the persistent format in the metastore is still the external type, so there is no inconsistency for statistics stored in the metastore.

This PR also fixes a bug for the boolean type in `IN` conditions.

## How was this patch tested?

The changes to `ColumnStat` are covered by existing tests. For the bug fix, a new test for the boolean type in `IN` conditions is added.

Author: wangzhenhua <wangzhenhua@huawei.com>

Closes #17630 from wzhfy/refactorColumnStat.
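To illustrate the motivation, here is a minimal sketch of what filter estimation over Catalyst-typed min/max looks like. `SimpleColumnStat` and `estimateGreaterThan` are illustrative names, not Spark's actual API; the point is that a `Literal`'s internal value can be compared to the stored min/max directly, with no external-type conversion between successive predicates.

```scala
import org.apache.spark.sql.catalyst.expressions.Literal

// Illustrative stand-in for ColumnStat: min/max held as Catalyst internal values.
case class SimpleColumnStat(min: Option[Any], max: Option[Any])

// For a predicate `col > lit` on an integer column, the surviving range starts
// at the literal, so the estimated min is raised to the literal's value.
def estimateGreaterThan(stat: SimpleColumnStat, lit: Literal): SimpleColumnStat =
  (stat.min, lit.value) match {
    case (Some(m: Int), l: Int) => stat.copy(min = Some(math.max(m, l)))
    case _ => stat
  }
```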
Diffstat (limited to 'sql/core/src/main/scala/org/apache')
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
index b89014ed8e..0d8db2ff5d 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
@@ -73,10 +73,10 @@ case class AnalyzeColumnCommand(
val relation = sparkSession.table(tableIdent).logicalPlan
// Resolve the column names and dedup using AttributeSet
val resolver = sparkSession.sessionState.conf.resolver
- val attributesToAnalyze = AttributeSet(columnNames.map { col =>
+ val attributesToAnalyze = columnNames.map { col =>
val exprOption = relation.output.find(attr => resolver(attr.name, col))
exprOption.getOrElse(throw new AnalysisException(s"Column $col does not exist."))
- }).toSeq
+ }
// Make sure the column types are supported for stats gathering.
attributesToAnalyze.foreach { attr =>
@@ -99,8 +99,8 @@ case class AnalyzeColumnCommand(
val statsRow = Dataset.ofRows(sparkSession, Aggregate(Nil, namedExpressions, relation)).head()
val rowCount = statsRow.getLong(0)
- val columnStats = attributesToAnalyze.zipWithIndex.map { case (expr, i) =>
- (expr.name, ColumnStat.rowToColumnStat(statsRow.getStruct(i + 1)))
+ val columnStats = attributesToAnalyze.zipWithIndex.map { case (attr, i) =>
+ (attr.name, ColumnStat.rowToColumnStat(statsRow.getStruct(i + 1), attr))
}.toMap
(rowCount, columnStats)
}
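For context, a usage sketch of the command that drives `AnalyzeColumnCommand` and hence the code path changed above. The table and column names are illustrative, and the `SparkSession` setup is assumed; the diff itself only changes how the aggregated stats row is turned into per-column statistics (`rowToColumnStat` now also receives the attribute).

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("analyze-columns").getOrCreate()

// Create and populate a small table (names are illustrative).
spark.sql("CREATE TABLE t (id INT, flag BOOLEAN) USING parquet")
spark.sql("INSERT INTO t VALUES (1, true), (2, false)")

// Triggers AnalyzeColumnCommand, which computes rowCount and per-column stats.
spark.sql("ANALYZE TABLE t COMPUTE STATISTICS FOR COLUMNS id, flag")
```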