path: root/docs/programming-guide.md
author    Dice <poleon.kd@gmail.com>    2015-05-19 18:12:05 +0100
committer Sean Owen <sowen@cloudera.com>    2015-05-19 18:13:09 +0100
commit    32fa611b19c6b95d4563be631c5a8ff0cdf3438f (patch)
tree      5e652f77d72cc907493f947cef2e14fe3f282666 /docs/programming-guide.md
parent    6845cb2ff475fd794b30b01af5ebc80714b880f0 (diff)
[SPARK-7704] Updating Programming Guides per SPARK-4397
The change in SPARK-4397 lets the compiler find the implicit objects in SparkContext automatically. We therefore no longer need to import o.a.s.SparkContext._ explicitly and can remove the statements about these "implicit conversions" from the latest Programming Guides (1.3.0 and higher).

Author: Dice <poleon.kd@gmail.com>

Closes #6234 from daisukebe/patch-1 and squashes the following commits:

b77ecd9 [Dice] fix a typo
45dfcd3 [Dice] rewording per Sean's advice
a094bcf [Dice] Adding a note for users on any previous releases
a29be5f [Dice] Updating Programming Guides per SPARK-4397
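As background, the mechanism this relies on is Scala's implicit scope: implicits declared in a type's companion object are found by the compiler without any import. A minimal sketch of that rule, using a made-up `Box` class rather than Spark's own types:

{% highlight scala %}
// Implicits declared in a companion object are in the implicit scope of that
// type, so no explicit import is needed to use them.
class Box[A](val value: A)

object Box {
  // Found automatically whenever a Box[Int] needs the extra method.
  implicit class RichIntBox(b: Box[Int]) {
    def doubled: Int = b.value * 2
  }
}

object Demo extends App {
  println(new Box(21).doubled) // prints 42, with no import of Box._
}
{% endhighlight %}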
Diffstat (limited to 'docs/programming-guide.md')
-rw-r--r--  docs/programming-guide.md  11
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 0c273769bb..07a4d29fe7 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -41,14 +41,15 @@ In addition, if you wish to access an HDFS cluster, you need to add a dependency
artifactId = hadoop-client
version = <your-hdfs-version>
-Finally, you need to import some Spark classes and implicit conversions into your program. Add the following lines:
+Finally, you need to import some Spark classes into your program. Add the following lines:
{% highlight scala %}
import org.apache.spark.SparkContext
-import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
{% endhighlight %}
+(Before Spark 1.3.0, you need to explicitly `import org.apache.spark.SparkContext._` to enable essential implicit conversions.)
+
</div>
<div data-lang="java" markdown="1">
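To illustrate the updated Scala guidance above, here is a minimal sketch of a self-contained Spark 1.3.0+ application that compiles with only the two imports shown; the app name and the `local[*]` master are illustrative assumptions for a local run:

{% highlight scala %}
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object ImportExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ImportExample").setMaster("local[*]")
    val sc = new SparkContext(conf)
    // reduceByKey lives in PairRDDFunctions; on 1.3.0+ the implicit wrapping
    // of an RDD of tuples happens without importing org.apache.spark.SparkContext._
    val counts = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1))).reduceByKey(_ + _)
    counts.collect().foreach(println)
    sc.stop()
  }
}
{% endhighlight %}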
@@ -821,11 +822,9 @@ by a key.
In Scala, these operations are automatically available on RDDs containing
[Tuple2](http://www.scala-lang.org/api/{{site.SCALA_VERSION}}/index.html#scala.Tuple2) objects
-(the built-in tuples in the language, created by simply writing `(a, b)`), as long as you
-import `org.apache.spark.SparkContext._` in your program to enable Spark's implicit
-conversions. The key-value pair operations are available in the
+(the built-in tuples in the language, created by simply writing `(a, b)`). The key-value pair operations are available in the
[PairRDDFunctions](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions) class,
-which automatically wraps around an RDD of tuples if you import the conversions.
+which automatically wraps around an RDD of tuples.
For example, the following code uses the `reduceByKey` operation on key-value pairs to count how
many times each line of text occurs in a file:
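The counting example referred to there follows this pattern (a sketch: `data.txt` is a placeholder path and `sc` is assumed to be an existing SparkContext):

{% highlight scala %}
// Count how many times each line occurs in a file.
val lines = sc.textFile("data.txt")             // RDD[String]
val pairs = lines.map(line => (line, 1))        // RDD[(String, Int)]
val counts = pairs.reduceByKey((a, b) => a + b) // no SparkContext._ import needed
{% endhighlight %}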