author    gatorsmile <gatorsmile@gmail.com>    2015-11-24 15:54:10 -0800
committer Reynold Xin <rxin@databricks.com>    2015-11-24 15:54:10 -0800
commit    238ae51b66ac12d15fba6aff061804004c5ca6cb (patch)
tree      db647d8e9a9527b02e125673eff0848f86001c8e /project
parent    c7f95df5c6d8eb2e6f11cf58b704fea34326a5f2 (diff)
download  spark-238ae51b66ac12d15fba6aff061804004c5ca6cb.tar.gz
          spark-238ae51b66ac12d15fba6aff061804004c5ca6cb.tar.bz2
          spark-238ae51b66ac12d15fba6aff061804004c5ca6cb.zip
[SPARK-11914][SQL] Support coalesce and repartition in Dataset APIs
This PR adds the two common methods `coalesce` and `repartition` to the Dataset APIs.

After reading the comments on SPARK-9999, I am unclear about the plan for supporting repartitioning in the Dataset APIs. Currently, both the RDD and DataFrame APIs give users the flexibility to control the number of partitions. Most traditional RDBMSs expose the number of partitions, the partitioning columns, and the table partitioning methods to DBAs for performance tuning and storage planning, and these parameters can largely affect query performance. Since the actual performance depends on the workload type, it is almost impossible to automate the discovery of the best partitioning strategy for every scenario. I am wondering whether the Dataset APIs are intended to hide these controls from users? Feel free to reject my PR if it does not match the plan. Thank you for your answers. marmbrus rxin cloud-fan

Author: gatorsmile <gatorsmile@gmail.com>

Closes #9899 from gatorsmile/coalesce.
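For context, a minimal sketch of how the added partitioning controls could be exercised from user code, assuming a Spark 1.6-style local SparkContext/SQLContext; the object name and the example data are illustrative only:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object CoalesceRepartitionSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("coalesce-repartition-sketch").setMaster("local[4]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Build a small Dataset[Long]; the partition count starts at the default.
        val ds = sqlContext.range(0, 100).as[Long]

        // repartition(n) shuffles the data into exactly n partitions.
        val wide = ds.repartition(10)

        // coalesce(n) reduces the number of partitions without a full shuffle.
        val narrow = wide.coalesce(2)

        println(s"after repartition(10): ${wide.rdd.partitions.length} partitions")
        println(s"after coalesce(2):     ${narrow.rdd.partitions.length} partitions")

        sc.stop()
      }
    }

As with the DataFrame equivalents, `coalesce` is the cheaper option when only shrinking the partition count, while `repartition` incurs a shuffle but can either grow or shrink it.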
Diffstat (limited to 'project')
0 files changed, 0 insertions, 0 deletions