path: root/docs/sql-programming-guide.md
author    Grega Kespret <grega.kespret@gmail.com>    2014-09-22 10:13:44 -0700
committer Michael Armbrust <michael@databricks.com>  2014-09-22 10:13:44 -0700
commit    56dae30ca70489a62686cb245728b09b2179bb5a (patch)
tree      9a861be8ad7917519782f223a1ae0840c0eb378f /docs/sql-programming-guide.md
parent    fec921552ffccc36937214406b3e4a050eb0d8e0 (diff)
Update docs to use jsonRDD instead of wrong jsonRdd.
Author: Grega Kespret <grega.kespret@gmail.com>

Closes #2479 from gregakespret/patch-1 and squashes the following commits:

dd6b90a [Grega Kespret] Update docs to use jsonRDD instead of wrong jsonRdd.
Diffstat (limited to 'docs/sql-programming-guide.md')
-rw-r--r--  docs/sql-programming-guide.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 5212e19c41..c1f80544bf 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -605,7 +605,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
This conversion can be done using one of two methods in a SQLContext:
* `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
{% highlight scala %}
// sc is an existing SparkContext.
@@ -643,7 +643,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
This conversion can be done using one of two methods in a JavaSQLContext :
* `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
{% highlight java %}
// sc is an existing JavaSparkContext.
@@ -681,7 +681,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
This conversion can be done using one of two methods in a SQLContext:
* `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
{% highlight python %}
# sc is an existing SparkContext.
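For context, the `jsonRDD` method whose capitalization this patch corrects can be sketched as below. This is a minimal sketch against the Spark 1.x `SQLContext` API described in the patched guide; it assumes an already-running `SparkContext` named `sc`, and the sample JSON strings and table name are illustrative, not taken from the guide.

```scala
import org.apache.spark.sql.SQLContext

// sc is an existing SparkContext (provided by the Spark shell or your app).
val sqlContext = new SQLContext(sc)

// Each element of the RDD is a string containing a single JSON object,
// matching the contract described in the guide.
val jsonStrings = sc.parallelize(Seq(
  """{"name": "Alice", "age": 30}""",
  """{"name": "Bob", "age": 25}"""
))

// jsonRDD (not jsonRdd) infers the schema from the JSON strings and
// returns a SchemaRDD that can be registered and queried with SQL.
val people = sqlContext.jsonRDD(jsonStrings)
people.registerTempTable("people")
val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 30")
```

Note that the method name is case-sensitive at the call site, which is why the doc typo fixed here (`jsonRdd`) would not compile if copied verbatim.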