path: root/sql/catalyst
author    Reynold Xin <rxin@databricks.com>  2016-03-28 16:26:32 -0700
committer Reynold Xin <rxin@databricks.com>  2016-03-28 16:26:32 -0700
commit b7836492bb0b5b430539d2bfa20bcc32e3fe3504 (patch)
tree   b0daae81a0bc6edae09d2e97128d06907272ef1f /sql/catalyst
parent eebc8c1c95fb7752d09a5846b7cac65f7702c8f2 (diff)
[SPARK-14155][SQL] Hide UserDefinedType interface in Spark 2.0
## What changes were proposed in this pull request?

UserDefinedType is a developer API in Spark 1.x. With very high probability we will create a new API for user-defined types that works well with both column batches and encoders (Datasets). In Spark 2.0, let's make `UserDefinedType` `private[spark]` first.

## How was this patch tested?

Existing unit tests.

Author: Reynold Xin <rxin@databricks.com>

Closes #11955 from rxin/SPARK-14155.
Diffstat (limited to 'sql/catalyst')
-rw-r--r--  sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala | 6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala
index dabf9a2fc0..fb7251d71b 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala
@@ -23,7 +23,6 @@ import org.json4s.JsonDSL._
import org.apache.spark.annotation.DeveloperApi
/**
- * ::DeveloperApi::
* The data type for User Defined Types (UDTs).
*
* This interface allows a user to make their own classes more interoperable with SparkSQL;
@@ -35,8 +34,11 @@ import org.apache.spark.annotation.DeveloperApi
*
* The conversion via `serialize` occurs when instantiating a `DataFrame` from another RDD.
* The conversion via `deserialize` occurs when reading from a `DataFrame`.
+ *
+ * Note: This was previously a developer API in Spark 1.x. We are making this private in Spark 2.0
+ * because we will very likely create a new version of this that works better with Datasets.
*/
-@DeveloperApi
+private[spark]
abstract class UserDefinedType[UserType >: Null] extends DataType with Serializable {
/** Underlying storage type for this UDT */
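For context, the docstring above describes the `serialize`/`deserialize` contract that UDT implementors relied on in Spark 1.x, before this commit hid the interface. The following is a minimal sketch of that 1.x-era pattern; `Point` and `PointUDT` are hypothetical names invented for illustration, and the exact method signatures varied across Spark versions (in 1.x, `serialize` took `Any`). After this change, code outside the `org.apache.spark` package can no longer extend `UserDefinedType` at all.

```scala
import org.apache.spark.sql.types._

// Hypothetical user class to expose to Spark SQL.
case class Point(x: Double, y: Double)

// Sketch of the Spark 1.x UDT pattern this commit makes private[spark].
// Points are stored in Catalyst as a fixed-length array of doubles.
class PointUDT extends UserDefinedType[Point] {

  // Underlying Catalyst storage type for this UDT.
  override def sqlType: DataType = ArrayType(DoubleType, containsNull = false)

  // Convert a Point into its Catalyst representation
  // (invoked when building a DataFrame from an RDD of Points).
  override def serialize(obj: Any): Any = obj match {
    case Point(x, y) => Array(x, y)
  }

  // Convert a Catalyst value back into a Point
  // (invoked when reading values out of a DataFrame).
  override def deserialize(datum: Any): Point = datum match {
    case values: Array[Double] => Point(values(0), values(1))
  }

  override def userClass: Class[Point] = classOf[Point]
}
```

This sketch requires the Spark SQL library on the classpath and would compile only against a 1.x release, where `UserDefinedType` was still a public `@DeveloperApi`.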