path: root/python/pyspark
author: Cheng Lian <lian@databricks.com> 2015-07-10 18:15:36 -0700
committer: Cheng Lian <lian@databricks.com> 2015-07-10 18:15:36 -0700
commit: 33630883685eafcc3ee4521ea8363be342f6e6b4 (patch)
tree: 0e0ca8d10b5e5027e5917e1b5f61ac6f20cd653f /python/pyspark
parent: b6fc0adf6874fc26ab27cdaa8ebb28474c0681f0 (diff)
[SPARK-8961] [SQL] Makes BaseWriterContainer.outputWriterForRow accept InternalRow instead of Row
This is a follow-up to [SPARK-8888] [1], which also aims to optimize writing dynamic partitions. Three more changes are made here:

1. Using `InternalRow` instead of `Row` in `BaseWriterContainer.outputWriterForRow`.
2. Using `Cast` expressions to convert partition columns to strings, so that we can leverage code generation.
3. Replacing the FP-style `zip` and `map` calls with a faster imperative `while` loop.

[1]: https://issues.apache.org/jira/browse/SPARK-8888

Author: Cheng Lian <lian@databricks.com>

Closes #7331 from liancheng/spark-8961 and squashes the following commits:

b5ab9ae [Cheng Lian] Casts Java iterator to Scala iterator explicitly
719e63b [Cheng Lian] Makes BaseWriterContainer.outputWriterForRow accepts InternalRow instead of Row
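The third change above, swapping `zip`/`map` for a `while` loop, can be sketched as follows. This is a minimal illustration of the technique, not the actual Spark code: the helper name `partitionPath` and its `Array[String]` inputs are hypothetical stand-ins for the partition-column names and their string-cast values handled inside `BaseWriterContainer`.

```scala
object PartitionPathSketch {
  // Hypothetical helper: build a Hive-style partition path such as
  // "year=2015/month=07" from column names and string values.
  def partitionPath(columns: Array[String], values: Array[String]): String = {
    // FP style (allocates intermediate tuples and collections):
    //   columns.zip(values).map { case (c, v) => s"$c=$v" }.mkString("/")
    // Imperative style (single pass, no intermediate allocations):
    val sb = new StringBuilder
    var i = 0
    while (i < columns.length) {
      if (i > 0) sb.append('/')
      sb.append(columns(i)).append('=').append(values(i))
      i += 1
    }
    sb.toString
  }
}
```

The imperative loop avoids building the intermediate tuple array and mapped collection that `zip`/`map` would allocate per row, which matters because this path runs once for every row written to a dynamically partitioned table.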
Diffstat (limited to 'python/pyspark')
0 files changed, 0 insertions, 0 deletions