author     Cheng Lian <lian@databricks.com>  2015-04-01 21:34:45 +0800
committer  Cheng Lian <lian@databricks.com>  2015-04-01 21:34:45 +0800
commit     d36c5fca7b9227c4c6e1b0c1455269b5fd8d4852 (patch)
tree       b9c314ba0fcad297ebe1b3ff0ccaefd6222ecbb8 /sql
parent     0358b08db85b3ee4ae70834626e7a42311bcc635 (diff)
[SPARK-6608] [SQL] Makes DataFrame.rdd a lazy val
Before 1.3.0, `SchemaRDD.id` worked as a unique identifier of each `SchemaRDD`. In 1.3.0, unlike `SchemaRDD`, `DataFrame` is no longer an RDD, and `DataFrame.rdd` is actually a method which always returns a new RDD instance. Making `DataFrame.rdd` a lazy val brings the unique identifier back.

Author: Cheng Lian <lian@databricks.com>

Closes #5265 from liancheng/spark-6608 and squashes the following commits:

7500968 [Cheng Lian] Updates javadoc
7f37d21 [Cheng Lian] Makes DataFrame.rdd a lazy val
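For readers skimming the change, a minimal Scala sketch of what the new behavior means for callers. The setup below is hypothetical (app name, local master, and the sample JSON path are illustrative, not part of this patch):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical setup for a Spark 1.3-era application.
val sc = new SparkContext(new SparkConf().setAppName("rdd-id-demo").setMaster("local"))
val sqlContext = new SQLContext(sc)
val df = sqlContext.jsonFile("examples/src/main/resources/people.json")

// Pre-patch: `rdd` was a def, so every access built a fresh RDD with a
// new id (df.rdd.id differed across calls).
// Post-patch: `rdd` is a lazy val, so the first access materializes the
// RDD once and later accesses return the same memoized instance.
assert(df.rdd eq df.rdd)        // same instance
assert(df.rdd.id == df.rdd.id)  // stable id, usable as an identifier again
```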
Diffstat (limited to 'sql')
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
index 5cd0a18ff6..19cfa15f27 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
@@ -952,10 +952,12 @@ class DataFrame private[sql](
/////////////////////////////////////////////////////////////////////////////
/**
- * Returns the content of the [[DataFrame]] as an [[RDD]] of [[Row]]s.
+ * Represents the content of the [[DataFrame]] as an [[RDD]] of [[Row]]s. Note that the RDD is
+ * memoized. Once called, it won't change even if you change any query planning related Spark SQL
+ * configurations (e.g. `spark.sql.shuffle.partitions`).
* @group rdd
*/
- def rdd: RDD[Row] = {
+ lazy val rdd: RDD[Row] = {
// use a local variable to make sure the map closure doesn't capture the whole DataFrame
val schema = this.schema
queryExecution.executedPlan.execute().map(ScalaReflection.convertRowToScala(_, schema))
}
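The caveat added to the javadoc has a practical consequence worth spelling out. A short sketch, assuming the same `df` and `sqlContext` as in the example above:

```scala
// `grouped` involves a shuffle, so its physical plan depends on
// spark.sql.shuffle.partitions at the time `rdd` is first accessed.
val grouped = df.groupBy("age").count()
val rdd1 = grouped.rdd                                   // plan fixed with the current settings
sqlContext.setConf("spark.sql.shuffle.partitions", "4")  // has no effect on `grouped` anymore
val rdd2 = grouped.rdd                                   // returns the memoized instance
assert(rdd1 eq rdd2)
```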