author    Reynold Xin <rxin@databricks.com>  2015-01-14 00:38:55 -0800
committer Reynold Xin <rxin@databricks.com>  2015-01-14 00:38:55 -0800
commit    d5eeb35167e1ab72fab7778757163ff0aacaef2c (patch)
tree      c71d81126126e8810519a384c214601e3edf7ca7 /sql/core
parent    f9969098c8cb15e36c718b80c6cf5b534a6cf7c3 (diff)
[SPARK-5167][SQL] Move Row into sql package and make it usable for Java.
Mostly just moving stuff around. This should still be source compatible since we
type aliased Row previously in org.apache.spark.sql.Row.

Added the following APIs to Row:
```scala
def getMap[K, V](i: Int): scala.collection.Map[K, V]
def getJavaMap[K, V](i: Int): java.util.Map[K, V]
def getSeq[T](i: Int): Seq[T]
def getList[T](i: Int): java.util.List[T]
def getStruct(i: Int): StructType
```

Author: Reynold Xin <rxin@databricks.com>

Closes #4030 from rxin/sql-row and squashes the following commits:

6c85c29 [Reynold Xin] Fixed style violation by adding a new line to Row.scala.
82b064a [Reynold Xin] [SPARK-5167][SQL] Move Row into sql package and make it usable for Java.
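As a quick illustration (not part of the commit), here is a minimal sketch of how the new accessors might be used. The row contents and field indices are made up for the example; only the accessor signatures come from the commit message above:

```scala
import org.apache.spark.sql.Row

// Hypothetical row whose first two fields hold a Seq and a Map.
val row = Row(Seq(1, 2, 3), Map("a" -> 1, "b" -> 2))

// Scala-friendly accessors return Scala collections.
val xs: Seq[Int] = row.getSeq[Int](0)
val m: scala.collection.Map[String, Int] = row.getMap[String, Int](1)

// Java-friendly accessors return java.util collections for the same fields,
// which is what makes Row usable from Java code.
val list: java.util.List[Int] = row.getList[Int](0)
val jmap: java.util.Map[String, Int] = row.getJavaMap[String, Int](1)
```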
Diffstat (limited to 'sql/core')
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/api/java/Row.scala |  2
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/package.scala      | 83
2 files changed, 1 insertion(+), 84 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/api/java/Row.scala b/sql/core/src/main/scala/org/apache/spark/sql/api/java/Row.scala
index 207e2805ff..4faa79af25 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/api/java/Row.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/api/java/Row.scala
@@ -23,7 +23,7 @@ import scala.collection.JavaConversions
import scala.math.BigDecimal
import org.apache.spark.api.java.JavaUtils.mapAsSerializableJavaMap
-import org.apache.spark.sql.catalyst.expressions.{Row => ScalaRow}
+import org.apache.spark.sql.{Row => ScalaRow}
/**
* A result row from a Spark SQL query.
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/package.scala b/sql/core/src/main/scala/org/apache/spark/sql/package.scala
index b75266d5aa..6dd39be807 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/package.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/package.scala
@@ -34,89 +34,6 @@ import org.apache.spark.sql.execution.SparkPlan
package object sql {
/**
- * :: DeveloperApi ::
- *
- * Represents one row of output from a relational operator.
- * @group row
- */
- @DeveloperApi
- type Row = catalyst.expressions.Row
-
- /**
- * :: DeveloperApi ::
- *
- * A [[Row]] object can be constructed by providing field values. Example:
- * {{{
- * import org.apache.spark.sql._
- *
- * // Create a Row from values.
- * Row(value1, value2, value3, ...)
- * // Create a Row from a Seq of values.
- * Row.fromSeq(Seq(value1, value2, ...))
- * }}}
- *
- * A value of a row can be accessed through both generic access by ordinal,
- * which will incur boxing overhead for primitives, as well as native primitive access.
- * An example of generic access by ordinal:
- * {{{
- * import org.apache.spark.sql._
- *
- * val row = Row(1, true, "a string", null)
- * // row: Row = [1,true,a string,null]
- * val firstValue = row(0)
- * // firstValue: Any = 1
- * val fourthValue = row(3)
- * // fourthValue: Any = null
- * }}}
- *
- * For native primitive access, it is invalid to use the native primitive interface to retrieve
- * a value that is null, instead a user must check `isNullAt` before attempting to retrieve a
- * value that might be null.
- * An example of native primitive access:
- * {{{
- * // using the row from the previous example.
- * val firstValue = row.getInt(0)
- * // firstValue: Int = 1
- * val isNull = row.isNullAt(3)
- * // isNull: Boolean = true
- * }}}
- *
- * Interfaces related to native primitive access are:
- *
- * `isNullAt(i: Int): Boolean`
- *
- * `getInt(i: Int): Int`
- *
- * `getLong(i: Int): Long`
- *
- * `getDouble(i: Int): Double`
- *
- * `getFloat(i: Int): Float`
- *
- * `getBoolean(i: Int): Boolean`
- *
- * `getShort(i: Int): Short`
- *
- * `getByte(i: Int): Byte`
- *
- * `getString(i: Int): String`
- *
- * Fields in a [[Row]] object can be extracted in a pattern match. Example:
- * {{{
- * import org.apache.spark.sql._
- *
- * val pairs = sql("SELECT key, value FROM src").rdd.map {
- * case Row(key: Int, value: String) =>
- * key -> value
- * }
- * }}}
- *
- * @group row
- */
- @DeveloperApi
- val Row = catalyst.expressions.Row
-
- /**
* Converts a logical plan into zero or more SparkPlans.
*/
@DeveloperApi
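For context, a minimal sketch (assuming a Spark build containing this change) of why the move stays source compatible: the import path is unchanged even though Row is no longer the catalyst alias, so the access patterns documented in the removed scaladoc above still compile as-is:

```scala
import org.apache.spark.sql.Row  // same import as before the move

val row = Row(1, true, "a string", null)
val first = row.getInt(0)     // native primitive access: 1
val isNull = row.isNullAt(3)  // true; check before any primitive getter

// Pattern matching on fields also continues to work.
val pair = row match {
  case Row(key: Int, flag: Boolean, s: String, _) => key -> s
}
```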