author     Yin Huai <yhuai@databricks.com>            2015-11-10 11:06:29 -0800
committer  Michael Armbrust <michael@databricks.com>  2015-11-10 11:06:29 -0800
commit     e0701c75601c43f69ed27fc7c252321703db51f2 (patch)
tree       52d85dfefce3da304fef585c895667f305cd8238 /python/pyspark/sql/functions.py
parent     6e5fc37883ed81c3ee2338145a48de3036d19399 (diff)
[SPARK-9830][SQL] Remove AggregateExpression1 and Aggregate Operator used to evaluate AggregateExpression1s
https://issues.apache.org/jira/browse/SPARK-9830

This PR contains the following main changes:

* Removing `AggregateExpression1`.
* Removing the `Aggregate` operator, which was used to evaluate `AggregateExpression1`s.
* Removing the planner rule used to plan `Aggregate`.
* Linking `MultipleDistinctRewriter` to the analyzer (a usage sketch follows this message).
* Renaming `AggregateExpression2` to `AggregateExpression` and `AggregateFunction2` to `AggregateFunction`.
* Updating the places where we create aggregate expressions; they are now created as `AggregateExpression(aggregateFunction, mode, isDistinct)`.
* Changing `val`s in `DeclarativeAggregate`s that touch the children of the function to `lazy val`s (when an aggregate expression is created through the DataFrame API, the children of the aggregate function can still be unresolved).

Author: Yin Huai <yhuai@databricks.com>

Closes #9556 from yhuai/removeAgg1.
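For context, the `MultipleDistinctRewriter` change concerns queries that compute several DISTINCT aggregates in one aggregation. A minimal PySpark sketch of such a query follows; the session, DataFrame, and column names are illustrative assumptions (using the modern SparkSession entry point, which postdates this commit), not part of this change:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import countDistinct

    # Hypothetical session and data, used only to show the query shape.
    spark = SparkSession.builder.appName("distinct-agg-sketch").getOrCreate()
    df = spark.createDataFrame(
        [("Alice", 2), ("Alice", 3), ("Bob", 5)], ["name", "age"])

    # Two DISTINCT aggregates over different columns in a single aggregation:
    # the kind of plan the rewriter is responsible for handling.
    df.agg(countDistinct("name"), countDistinct("age")).show()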
Diffstat (limited to 'python/pyspark/sql/functions.py')
-rw-r--r--  python/pyspark/sql/functions.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 962f676d40..6e1cbde423 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -382,7 +382,7 @@ def expr(str):
     """Parses the expression string into the column that it represents

     >>> df.select(expr("length(name)")).collect()
-    [Row('length(name)=5), Row('length(name)=3)]
+    [Row(length(name)=5), Row(length(name)=3)]
     """
     sc = SparkContext._active_spark_context
     return Column(sc._jvm.functions.expr(str))
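The corrected doctest above relies on an implicit `df` fixture. A self-contained sketch that reproduces the fixed output, assuming the standard doctest data ('Alice' and 'Bob', whose name lengths are 5 and 3) and the modern SparkSession entry point:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import expr

    spark = SparkSession.builder.appName("expr-doctest").getOrCreate()

    # Assumed fixture: the doctest's implicit df with names 'Alice' and 'Bob'.
    df = spark.createDataFrame([("Alice", 2), ("Bob", 5)], ["name", "age"])

    # expr() parses a SQL expression string into the Column it represents.
    df.select(expr("length(name)")).collect()
    # [Row(length(name)=5), Row(length(name)=3)]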