path: root/examples
author      Reynold Xin <rxin@databricks.com>    2015-01-28 19:10:32 -0800
committer   Reynold Xin <rxin@databricks.com>    2015-01-28 19:10:32 -0800
commit      5b9760de8dd2dab7cf9a4f5c65869e4ed296a938 (patch)
tree        9e6b1f6c6077f3379276de229ccad2e63ff2e2be /examples
parent      4ee79c71afc5175ba42b5e3d4088fe23db3e45d1 (diff)
[SPARK-5445][SQL] Made DataFrame dsl usable in Java
Also removed the literal implicit transformation since it is pretty scary for API design. Instead, created a new lit method for creating literals. This doesn't break anything from a compatibility perspective because Literal was added two days ago.

Author: Reynold Xin <rxin@databricks.com>

Closes #4241 from rxin/df-docupdate and squashes the following commits:

c0f4810 [Reynold Xin] Fix Python merge conflict.
094c7d7 [Reynold Xin] Minor style fix. Reset Python tests.
3c89f4a [Reynold Xin] Package.
dfe6962 [Reynold Xin] Updated Python aggregate.
5dd4265 [Reynold Xin] Made dsl Java callable.
14b3c27 [Reynold Xin] Fix literal expression for symbols.
68b31cb [Reynold Xin] Literal.
4cfeb78 [Reynold Xin] [SPARK-5097][SQL] Address DataFrame code review feedback.
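For illustration only (not part of this patch): a minimal sketch of the explicit lit(...) style the commit message describes, written against the modern org.apache.spark.sql.functions API rather than the dsl package being moved here; the object and column names are made up.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit}

object LitSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("LitSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("key", "value")

    // An explicit lit(...) call builds the literal column, instead of relying on an
    // implicit Int-to-Literal conversion.
    df.select(col("key") + lit(10), col("value")).show()

    spark.stop()
  }
}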
Diffstat (limited to 'examples')
-rw-r--r--  examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala  3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala b/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
index a5d7f26258..e9f47889f3 100644
--- a/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
@@ -19,8 +19,7 @@ package org.apache.spark.examples.sql
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
-import org.apache.spark.sql.dsl._
-import org.apache.spark.sql.dsl.literals._
+import org.apache.spark.sql.api.scala.dsl._
// One method for defining the schema of an RDD is to make a case class with the desired column
// names and types.
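A minimal sketch (assumed names, and the modern SparkSession API rather than the SQLContext/dsl imports above) of the pattern this comment describes: the case class fields supply the column names and types when the RDD is converted to a DataFrame.

import org.apache.spark.sql.SparkSession

// Top-level case class so Spark can derive an encoder for it.
case class Record(key: Int, value: String)

object SchemaFromCaseClass {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SchemaFromCaseClass").master("local[*]").getOrCreate()
    import spark.implicits._

    // The Record fields define the schema: key: Int, value: String.
    val df = spark.sparkContext
      .parallelize((1 to 10).map(i => Record(i, s"val_$i")))
      .toDF()

    df.printSchema()
    df.show()

    spark.stop()
  }
}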