path: root/docs/programming-guide.md
author    Sandeep Singh <sandeep@techaddict.me>    2016-05-03 12:38:21 +0100
committer Sean Owen <sowen@cloudera.com>           2016-05-03 12:38:21 +0100
commit    dfd9723dd3b3ff5d47a7f04a4330bf33ffe353ac (patch)
tree      d82f54527ca75689e0cc4d7df5917f136b65f121 /docs/programming-guide.md
parent    f10ae4b1e169495af11b8e8123c60dd96174477e (diff)
[MINOR][DOCS] Fix type Information in Quick Start and Programming Guide
Author: Sandeep Singh <sandeep@techaddict.me>

Closes #12841 from techaddict/improve_docs_1.
Diffstat (limited to 'docs/programming-guide.md')
-rw-r--r--  docs/programming-guide.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index cf6f1d8914..d375926a91 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -328,7 +328,7 @@ Text file RDDs can be created using `SparkContext`'s `textFile` method. This met
{% highlight scala %}
scala> val distFile = sc.textFile("data.txt")
-distFile: RDD[String] = MappedRDD@1d4cee08
+distFile: org.apache.spark.rdd.RDD[String] = data.txt MapPartitionsRDD[10] at textFile at <console>:26
{% endhighlight %}
Once created, `distFile` can be acted on by dataset operations. For example, we can add up the sizes of all the lines using the `map` and `reduce` operations as follows: `distFile.map(s => s.length).reduce((a, b) => a + b)`.
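For context, the sketch below shows the same pattern end to end as it might run in `spark-shell`, where `sc` is already bound to a `SparkContext`; the file name `data.txt` is an assumed local file, as in the guide's example.

{% highlight scala %}
// Lazily create an RDD[String] with one element per line of the file.
val distFile = sc.textFile("data.txt")

// Map each line to its length, then sum the lengths with reduce.
// The reduce is an action, so this is where the file is actually read.
val totalLength = distFile.map(s => s.length).reduce((a, b) => a + b)

println(s"Total characters across all lines: $totalLength")
{% endhighlight %}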