| author | Qifan Pu <qifan.pu@gmail.com> | 2016-07-24 21:53:21 -0700 |
|---|---|---|
| committer | Reynold Xin <rxin@databricks.com> | 2016-07-24 21:54:42 -0700 |
| commit | 468a3c3ac5d039f21613f9237c7bdef9b92f5fea (patch) | |
| tree | 6e7631f491d696ad33326218fc6ac4626a008542 /sql/catalyst/src/main | |
| parent | daace6014216b996bcc8937f1fdcea732b6910ca (diff) | |
[SPARK-16699][SQL] Fix performance bug in hash aggregate on long string keys
In the following code in `VectorizedHashMapGenerator.scala`:
```scala
def hashBytes(b: String): String = {
val hash = ctx.freshName("hash")
s"""
|int $result = 0;
|for (int i = 0; i < $b.length; i++) {
| ${genComputeHash(ctx, s"$b[i]", ByteType, hash)}
| $result = ($result ^ (0x9e3779b9)) + $hash + ($result << 6) + ($result >>> 2);
|}
""".stripMargin
}
```
When `b = input.getBytes()`, the current 2.0 code results in `getBytes()` being called n times, where n is the length of the input: the call is substituted verbatim into both the loop condition (`$b.length`) and the loop body (`$b[i]`), so it is re-evaluated on every iteration. Since `getBytes()` involves a memory copy, it is expensive and causes a performance degradation.
The fix is to evaluate `getBytes()` once, before the for loop.
This is a performance bug, so no additional test is added.
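The effect of the fix can be sketched in plain Java (this is an illustration, not Spark's actual generated code; the `getBytes` counter wrapper and class name are hypothetical, standing in for `UTF8String.getBytes()`, which copies the underlying bytes on each call):

```java
// Sketch of SPARK-16699: re-evaluating getBytes() inside the loop vs. hoisting it.
public class GetBytesHoisting {
    static int calls = 0;

    // Hypothetical stand-in for UTF8String.getBytes(); each call copies the bytes.
    static byte[] getBytes(String s) {
        calls++;
        return s.getBytes();
    }

    // Buggy shape: getBytes() is re-invoked in the loop condition and body,
    // so a string of length n triggers roughly 2n+1 calls (and copies).
    static int hashSlow(String s) {
        int result = 0;
        for (int i = 0; i < getBytes(s).length; i++) {
            int hash = getBytes(s)[i];
            result = (result ^ 0x9e3779b9) + hash + (result << 6) + (result >>> 2);
        }
        return result;
    }

    // Fixed shape: evaluate getBytes() once, before the for loop.
    static int hashFast(String s) {
        int result = 0;
        byte[] bytes = getBytes(s);
        for (int i = 0; i < bytes.length; i++) {
            int hash = bytes[i];
            result = (result ^ 0x9e3779b9) + hash + (result << 6) + (result >>> 2);
        }
        return result;
    }

    public static void main(String[] args) {
        String key = "a-long-string-key";
        calls = 0;
        int slow = hashSlow(key);
        int slowCalls = calls;    // grows linearly with key length
        calls = 0;
        int fast = hashFast(key);
        int fastCalls = calls;    // exactly 1 call
        System.out.println(slow == fast);
        System.out.println(slowCalls + " vs " + fastCalls);
    }
}
```

Both versions compute the same hash; only the number of `getBytes()` invocations (and hence byte-array copies) differs, which is why no correctness test accompanies the fix.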
Author: Qifan Pu <qifan.pu@gmail.com>
Closes #14337 from ooq/SPARK-16699.
(cherry picked from commit d226dce12babcd9f30db033417b2b9ce79f44312)
Signed-off-by: Reynold Xin <rxin@databricks.com>
Diffstat (limited to 'sql/catalyst/src/main')
0 files changed, 0 insertions, 0 deletions