path: root/docs/sql-programming-guide.md
author    Tommy YU <tummyyu@163.com>    2016-10-18 21:15:32 -0700
committer Reynold Xin <rxin@databricks.com>    2016-10-18 21:15:32 -0700
commit    f39852e59883c214b0d007faffb406570ea3084b (patch)
tree      054003b676967cee274085797f3e470808e5b181 /docs/sql-programming-guide.md
parent    4329c5cea4d235dc582fdb7cbdb822f62e650f5d (diff)
[SPARK-18001][DOCUMENT] Fix broken link to SparkDataFrame
## What changes were proposed in this pull request?

In http://spark.apache.org/docs/latest/sql-programming-guide.html, section "Untyped Dataset Operations (aka DataFrame Operations)", the link to the R DataFrame does not work: it returns "The requested URL /docs/latest/api/R/DataFrame.html was not found on this server." The correct link for Spark 2.0 is SparkDataFrame.html.

## How was this patch tested?

Manually checked.

Author: Tommy YU <tummyyu@163.com>

Closes #15543 from Wenpei/spark-18001.
Diffstat (limited to 'docs/sql-programming-guide.md')
 docs/sql-programming-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 3f1b73a830..d334a86bc7 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -140,7 +140,7 @@ As an example, the following creates a DataFrame based on the content of a JSON
## Untyped Dataset Operations (aka DataFrame Operations)
-DataFrames provide a domain-specific language for structured data manipulation in [Scala](api/scala/index.html#org.apache.spark.sql.Dataset), [Java](api/java/index.html?org/apache/spark/sql/Dataset.html), [Python](api/python/pyspark.sql.html#pyspark.sql.DataFrame) and [R](api/R/DataFrame.html).
+DataFrames provide a domain-specific language for structured data manipulation in [Scala](api/scala/index.html#org.apache.spark.sql.Dataset), [Java](api/java/index.html?org/apache/spark/sql/Dataset.html), [Python](api/python/pyspark.sql.html#pyspark.sql.DataFrame) and [R](api/R/SparkDataFrame.html).
As mentioned above, in Spark 2.0, DataFrames are just Dataset of `Row`s in Scala and Java API. These operations are also referred as "untyped transformations" in contrast to "typed transformations" come with strongly typed Scala/Java Datasets.
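To make the "untyped transformations" mentioned in the changed section concrete, here is a minimal Scala sketch of DataFrame DSL operations. It is illustrative only and not part of this patch; the `people.json` path and the `name`/`age` columns are assumptions modeled on the example data used elsewhere in the programming guide.

```scala
import org.apache.spark.sql.SparkSession

object UntypedOpsExample {
  def main(args: Array[String]): Unit = {
    // Assumes a local Spark 2.0+ environment; the JSON path is illustrative.
    val spark = SparkSession.builder()
      .appName("Untyped DataFrame operations")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // In Spark 2.0, a DataFrame is simply a Dataset[Row].
    val df = spark.read.json("examples/src/main/resources/people.json")

    // Untyped transformations expressed in the DataFrame DSL.
    df.printSchema()
    df.select($"name", $"age" + 1).show()
    df.filter($"age" > 21).show()
    df.groupBy("age").count().show()

    spark.stop()
  }
}
```

These calls operate on untyped `Row` values; the equivalent typed transformations would go through a strongly typed `Dataset[T]` in Scala or Java.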