author     Cheng Lian <lian@databricks.com>   2016-07-23 11:41:24 -0700
committer  Reynold Xin <rxin@databricks.com>  2016-07-23 11:41:24 -0700
commit     53b2456d1de38b9d4f18509e7b36eb3fbe09e050 (patch)
tree       7a783f09648b4c86ec04b9fd26e9ef6871f2d352 /examples/src/main/scala
parent     86c275206605c44e1ebca2f166d62868e44bf029 (diff)
[SPARK-16380][EXAMPLES] Update SQL examples and programming guide for Python language binding
This PR is based on PR #14098 authored by wangmiao1981.

## What changes were proposed in this pull request?

This PR replaces the original Python Spark SQL example file with the following three files:

- `sql/basic.py`: demonstrates basic Spark SQL features.
- `sql/datasource.py`: demonstrates various Spark SQL data sources.
- `sql/hive.py`: demonstrates Spark SQL Hive interaction.

This PR also removes hard-coded Python example snippets in the SQL programming guide by extracting snippets from the above files using the `include_example` Liquid template tag.

## How was this patch tested?

Manually tested.

Author: wm624@hotmail.com <wm624@hotmail.com>
Author: Cheng Lian <lian@databricks.com>

Closes #14317 from liancheng/py-examples-update.
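As a rough illustration of the kind of content described above, here is a minimal sketch of a basic Spark SQL example in Python. It is an assumption of what `sql/basic.py` might look like, not the actual file added by this commit; the app name, JSON path, and query are illustrative.

```python
# A minimal sketch in the spirit of sql/basic.py; names and paths are
# illustrative assumptions, not taken from this commit.
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Python Spark SQL basic example") \
    .getOrCreate()

# Create a DataFrame from a JSON file, register it as a temporary view,
# and run a simple SQL query against it.
df = spark.read.json("examples/src/main/resources/people.json")
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 20").show()

spark.stop()
```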
Diffstat (limited to 'examples/src/main/scala')
-rw-r--r--  examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala b/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala
index e897c2d066..11e84c0e45 100644
--- a/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala
@@ -87,7 +87,7 @@ object SparkHiveExample {
// |Key: 0, Value: val_0|
// ...
- // You can also use DataFrames to create temporary views within a HiveContext.
+ // You can also use DataFrames to create temporary views within a SparkSession.
val recordsDF = spark.createDataFrame((1 to 100).map(i => Record(i, s"val_$i")))
recordsDF.createOrReplaceTempView("records")
@@ -97,8 +97,8 @@ object SparkHiveExample {
// |key| value|key| value|
// +---+------+---+------+
// | 2| val_2| 2| val_2|
- // | 2| val_2| 2| val_2|
// | 4| val_4| 4| val_4|
+ // | 5| val_5| 5| val_5|
// ...
// $example off:spark_hive$
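The expected output corrected in the second hunk comes from joining the `records` temporary view against the Hive `src` table. Below is a rough Python sketch of that flow, in the spirit of the new `sql/hive.py`; the table setup, data path, and query are assumptions inferred from the surrounding Scala example and may differ from the actual file.

```python
# A sketch of the Hive-interaction flow whose expected output the diff
# corrects. enableHiveSupport() requires a Spark build with Hive support;
# the data path and table contents are assumptions.
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder \
    .appName("Python Spark SQL Hive example") \
    .enableHiveSupport() \
    .getOrCreate()

# Populate a Hive table to join against.
spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
spark.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' "
          "INTO TABLE src")

# Register a DataFrame as a temporary view, mirroring the Scala snippet above.
records_df = spark.createDataFrame(
    [Row(key=i, value="val_%d" % i) for i in range(1, 101)])
records_df.createOrReplaceTempView("records")

# Join the temporary view with the Hive table; this is the kind of query
# whose expected output the second hunk of this diff corrects.
spark.sql("SELECT * FROM records r JOIN src s ON r.key = s.key").show()

spark.stop()
```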