author | Sean Owen <sowen@cloudera.com> | 2015-11-05 09:08:53 +0000
committer | Sean Owen <sowen@cloudera.com> | 2015-11-05 09:08:53 +0000
commit | 6f81eae24f83df51a99d4bb2629dd7daadc01519
tree | 79b7d20c8381b97afb48cfd92ce940297a7f6ea5 /sql/hive
parent | 81498dd5c86ca51d2fb351c8ef52cbb28e6844f4
[SPARK-11440][CORE][STREAMING][BUILD] Declare rest of @Experimental items non-experimental if they've existed since 1.2.0
Remove `Experimental` annotations in core and streaming for items that existed in 1.2.0 or before. The changes are listed below (a brief usage sketch of two of these APIs follows the list):
* SparkContext
  * binary{Files,Records} : 1.2.0
  * submitJob : 1.0.0
* JavaSparkContext
  * binary{Files,Records} : 1.2.0
* DoubleRDDFunctions, JavaDoubleRDD
  * {mean,sum}Approx : 1.0.0
* PairRDDFunctions, JavaPairRDD
  * sampleByKeyExact : 1.2.0
  * countByKeyApprox : 1.0.0
* PairRDDFunctions
  * countApproxDistinctByKey : 1.1.0
* RDD
  * countApprox, countByValueApprox, countApproxDistinct : 1.0.0
* JavaRDDLike
  * countApprox : 1.0.0
* PythonHadoopUtil.Converter : 1.1.0
* PortableDataStream : 1.2.0 (related to binaryFiles)
* BoundedDouble : 1.0.0
* PartialResult : 1.0.0
* StreamingContext, JavaStreamingContext
  * binaryRecordsStream : 1.2.0
* HiveContext
  * analyze : 1.2.0
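A minimal Scala sketch exercising two of the now-stable APIs, `binaryFiles` and `countApprox`; the input path, timeout, and app name are hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("stable-api-sketch"))

// binaryFiles (stable since 1.2.0): pairs each file path with a
// PortableDataStream over that file's contents. The HDFS path is hypothetical.
val blobs = sc.binaryFiles("hdfs:///data/blobs")
println(blobs.count())

// countApprox (stable since 1.0.0): returns a PartialResult whose final
// value is a BoundedDouble, i.e. an estimate with a confidence interval.
val approx = sc.parallelize(1 to 1000000).countApprox(timeout = 2000L, confidence = 0.95)
println(approx.getFinalValue())
```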
Author: Sean Owen <sowen@cloudera.com>
Closes #9396 from srowen/SPARK-11440.
Diffstat (limited to 'sql/hive')
-rw-r--r-- | sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala | 2
1 file changed, 0 insertions, 2 deletions
diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala
index 1f51353203..670d6a78e3 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala
@@ -36,7 +36,6 @@ import org.apache.hadoop.hive.ql.metadata.Table
 import org.apache.hadoop.hive.ql.parse.VariableSubstitution
 import org.apache.hadoop.hive.serde2.io.{DateWritable, TimestampWritable}

-import org.apache.spark.annotation.Experimental
 import org.apache.spark.api.java.JavaSparkContext
 import org.apache.spark.sql.SQLConf.SQLConfEntry
 import org.apache.spark.sql.SQLConf.SQLConfEntry._
@@ -356,7 +355,6 @@ class HiveContext private[hive](
    *
    * @since 1.2.0
    */
-  @Experimental
   def analyze(tableName: String) {
     val tableIdent = SqlParser.parseTableIdentifier(tableName)
     val relation = EliminateSubQueries(catalog.lookupRelation(tableIdent))
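For context, a minimal sketch of calling the now-stable `analyze` API from a Spark 1.x application with Hive support; the app name and table name "src" are hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("analyze-sketch"))
val hiveContext = new HiveContext(sc)

// analyze() collects size statistics for the named Hive table; the planner
// can use them, e.g. when deciding whether a table is small enough to
// broadcast in a join. As of this change it is no longer @Experimental.
hiveContext.analyze("src")
```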