author: hyukjinkwon <gurwls223@gmail.com> 2017-04-12 09:16:39 +0100
committer: Sean Owen <sowen@cloudera.com> 2017-04-12 09:16:39 +0100
commit: bca4259f12b32eeb156b6755d0ec5e16d8e566b3 (patch)
tree: c7da055477f7498b6efebc60fe7a7f9a0c6ea353 /docs
parent: b9384382484a9f5c6b389742e7fdf63865de81c0 (diff)
[MINOR][DOCS] JSON APIs related documentation fixes
## What changes were proposed in this pull request?

This PR proposes corrections related to JSON APIs as below:

- Rendering links in Python documentation
- Replacing `RDD` with `Dataset` in the programming guide
- Adding the missing description about JSON Lines consistently in `DataFrameReader.json` in the Python API
- De-duplicating a little bit of `DataFrameReader.json` in the Scala/Java API

## How was this patch tested?

Manually built the documentation via `jekyll build`. Corresponding snapshots will be left on the codes.

Note that there are currently Javadoc 8 breaks in several places. These are proposed to be handled in https://github.com/apache/spark/pull/17477, so this PR does not fix those.

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #17602 from HyukjinKwon/minor-json-documentation.
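The "JSON Lines" description this PR adds refers to a format where each line of the file is one complete, self-contained JSON object. A minimal sketch with Python's standard-library `json` module (illustrative record values, not taken from the Spark docs):

```python
import json

# A JSON Lines payload: every line is a separate, self-contained JSON object.
json_lines = '{"name": "Yin", "age": 31}\n{"name": "Michael", "age": 29}'

# Parsing it is simply parsing each line independently.
records = [json.loads(line) for line in json_lines.splitlines()]
print(records)
```

This line-at-a-time property is what lets Spark split and scan such files in parallel without looking at the rest of the file.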
Diffstat (limited to 'docs')
-rw-r--r-- docs/sql-programming-guide.md | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index c425faca4c..28942b68fa 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -883,7 +883,7 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
<div data-lang="scala" markdown="1">
Spark SQL can automatically infer the schema of a JSON dataset and load it as a `Dataset[Row]`.
-This conversion can be done using `SparkSession.read.json()` on either an RDD of String,
+This conversion can be done using `SparkSession.read.json()` on either a `Dataset[String]`,
or a JSON file.
Note that the file that is offered as _a json file_ is not a typical JSON file. Each
@@ -897,7 +897,7 @@ For a regular multi-line JSON file, set the `wholeFile` option to `true`.
<div data-lang="java" markdown="1">
Spark SQL can automatically infer the schema of a JSON dataset and load it as a `Dataset<Row>`.
-This conversion can be done using `SparkSession.read().json()` on either an RDD of String,
+This conversion can be done using `SparkSession.read().json()` on either a `Dataset<String>`,
or a JSON file.
Note that the file that is offered as _a json file_ is not a typical JSON file. Each
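The `wholeFile` option mentioned in the context above exists because a "regular" multi-line JSON document breaks the line-at-a-time assumption. A small sketch of the contrast, using only Python's standard-library `json` module (the sample document is hypothetical):

```python
import json

# A regular multi-line JSON document: one object pretty-printed across
# several lines. This is NOT JSON Lines.
multi_line = '{\n  "name": "Yin",\n  "age": 31\n}'

# Parsing it line by line fails, because no single line is valid JSON...
try:
    per_line = [json.loads(line) for line in multi_line.splitlines()]
except json.JSONDecodeError:
    per_line = None

# ...while parsing the whole document at once succeeds, which is the
# behavior Spark's `wholeFile` option enables for such files.
whole = json.loads(multi_line)
print(per_line, whole)
```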