| author | Andrew Or <andrew@databricks.com> | 2016-04-25 13:23:05 -0700 |
|---|---|---|
| committer | Andrew Or <andrew@databricks.com> | 2016-04-25 13:23:05 -0700 |
| commit | 3c5e65c339a9b4d5e01375d7f073e444898d34c8 (patch) | |
| tree | 039f7e382124f03495e9b22cdc00df7791affeb7 /examples | |
| parent | 6bfe42a3be4fbf8bc6f93a4709038fda8ad0610b (diff) | |
[SPARK-14721][SQL] Remove HiveContext (part 2)
## What changes were proposed in this pull request?
This removes the class `HiveContext` itself along with all code usages associated with it. The bulk of the work was already done in #12485. This is mainly just code cleanup and actually removing the class.
Note: A couple of things will break after this patch. These will be fixed separately.
- the Python `HiveContext`
- all the documentation / comments referencing `HiveContext`
- there will be no more `HiveContext` in the REPL (fixed by #12589)
## How was this patch tested?
No change in functionality.
Author: Andrew Or <andrew@databricks.com>
Closes #12585 from andrewor14/delete-hive-context.
Diffstat (limited to 'examples')
| -rw-r--r-- | examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala | 7 |
|---|---|---|

1 file changed, 3 insertions, 4 deletions
diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala b/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala
index b654a2c8d4..ff33091621 100644
--- a/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala
@@ -24,7 +24,6 @@ import com.google.common.io.{ByteStreams, Files}
 
 import org.apache.spark.{SparkConf, SparkContext}
 import org.apache.spark.sql._
-import org.apache.spark.sql.hive.HiveContext
 
 object HiveFromSpark {
   case class Record(key: Int, value: String)
@@ -43,9 +42,9 @@ object HiveFromSpark {
     // using HiveQL. Users who do not have an existing Hive deployment can still create a
     // HiveContext. When not configured by the hive-site.xml, the context automatically
     // creates metastore_db and warehouse in the current directory.
-    val hiveContext = new HiveContext(sc)
-    import hiveContext.implicits._
-    import hiveContext.sql
+    val sparkSession = SparkSession.withHiveSupport(sc)
+    import sparkSession.implicits._
+    import sparkSession.sql
 
     sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
     sql(s"LOAD DATA LOCAL INPATH '${kv1File.getAbsolutePath}' INTO TABLE src")
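A minimal sketch of the migration this patch performs, assuming the Spark 2.0-era API at the time of this commit (`SparkSession.withHiveSupport` was a transitional factory; later Spark releases replaced it with `SparkSession.builder().enableHiveSupport().getOrCreate()`). `HiveMigrationSketch` is a hypothetical name, not part of the patch:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

object HiveMigrationSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveMigrationSketch"))

    // Before (removed by this patch):
    //   val hiveContext = new HiveContext(sc)
    //   import hiveContext.sql

    // After: a Hive-enabled SparkSession replaces HiveContext.
    // Without a hive-site.xml, it creates metastore_db and a warehouse
    // directory in the current working directory, as the example comments note.
    val sparkSession = SparkSession.withHiveSupport(sc)
    import sparkSession.sql

    sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
    sc.stop()
  }
}
```

Note that `import sparkSession.sql` brings the session's `sql` method into scope, so existing call sites written against `hiveContext.sql` compile unchanged after only the two lines above are swapped.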