author     sandy <phalodi@gmail.com>    2016-08-16 12:50:55 -0700
committer  Reynold Xin <rxin@databricks.com>    2016-08-16 12:50:55 -0700
commit     e28a8c5899c48ff065e2fd3bb6b10c82b4d39c2c (patch)
tree       75d7b40a2e9fd860792b7a30fa8d92f79fac255e /docs
parent     c34b546d674ce186f13d9999b97977bc281cfedf (diff)
[SPARK-17089][DOCS] Remove api doc link for mapReduceTriplets operator
## What changes were proposed in this pull request?

Remove the API doc link for the mapReduceTriplets operator. The operator has been removed from the latest API, so the link points users at documentation where mapReduceTriplets no longer exists; it is better to remove the link than to confuse the user.

## How was this patch tested?

Ran all the test cases.

![screenshot from 2016-08-16 23-08-25](https://cloud.githubusercontent.com/assets/8075390/17709393/8cfbf75a-6406-11e6-98e6-38f7b319d833.png)

Author: sandy <phalodi@gmail.com>

Closes #14669 from phalodi/SPARK-17089.
Diffstat (limited to 'docs')
-rw-r--r--    docs/graphx-programming-guide.md    5
1 file changed, 2 insertions, 3 deletions
diff --git a/docs/graphx-programming-guide.md b/docs/graphx-programming-guide.md
index 6f738f0599..58671e6f14 100644
--- a/docs/graphx-programming-guide.md
+++ b/docs/graphx-programming-guide.md
@@ -24,7 +24,6 @@ description: GraphX graph processing library guide for Spark SPARK_VERSION_SHORT
[Graph.outerJoinVertices]: api/scala/index.html#org.apache.spark.graphx.Graph@outerJoinVertices[U,VD2](RDD[(VertexId,U)])((VertexId,VD,Option[U])⇒VD2)(ClassTag[U],ClassTag[VD2]):Graph[VD2,ED]
[Graph.aggregateMessages]: api/scala/index.html#org.apache.spark.graphx.Graph@aggregateMessages[A]((EdgeContext[VD,ED,A])⇒Unit,(A,A)⇒A,TripletFields)(ClassTag[A]):VertexRDD[A]
[EdgeContext]: api/scala/index.html#org.apache.spark.graphx.EdgeContext
-[Graph.mapReduceTriplets]: api/scala/index.html#org.apache.spark.graphx.Graph@mapReduceTriplets[A](mapFunc:org.apache.spark.graphx.EdgeTriplet[VD,ED]=&gt;Iterator[(org.apache.spark.graphx.VertexId,A)],reduceFunc:(A,A)=&gt;A,activeSetOpt:Option[(org.apache.spark.graphx.VertexRDD[_],org.apache.spark.graphx.EdgeDirection)])(implicitevidence$10:scala.reflect.ClassTag[A]):org.apache.spark.graphx.VertexRDD[A]
[GraphOps.collectNeighborIds]: api/scala/index.html#org.apache.spark.graphx.GraphOps@collectNeighborIds(EdgeDirection):VertexRDD[Array[VertexId]]
[GraphOps.collectNeighbors]: api/scala/index.html#org.apache.spark.graphx.GraphOps@collectNeighbors(EdgeDirection):VertexRDD[Array[(VertexId,VD)]]
[RDD Persistence]: programming-guide.html#rdd-persistence
@@ -596,7 +595,7 @@ compute the average age of the more senior followers of each user.
### Map Reduce Triplets Transition Guide (Legacy)
In earlier versions of GraphX neighborhood aggregation was accomplished using the
-[`mapReduceTriplets`][Graph.mapReduceTriplets] operator:
+`mapReduceTriplets` operator:
{% highlight scala %}
class Graph[VD, ED] {
@@ -607,7 +606,7 @@ class Graph[VD, ED] {
}
{% endhighlight %}
-The [`mapReduceTriplets`][Graph.mapReduceTriplets] operator takes a user defined map function which
+The `mapReduceTriplets` operator takes a user defined map function which
is applied to each triplet and can yield *messages* which are aggregated using the user defined
`reduce` function.
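As a minimal sketch of that pattern (not taken from the guide's own examples; the helper name `countOlderFollowers` and the `Graph[Double, Int]` attribute types are illustrative), the legacy call can be compared with its `aggregateMessages` replacement like this:
{% highlight scala %}
import org.apache.spark.graphx._

// Count, for each vertex, how many in-neighbors have a larger attribute value.
def countOlderFollowers(graph: Graph[Double, Int]): VertexRDD[Int] = {
  // Legacy style (operator removed from the current API): map each triplet to an
  // iterator of (VertexId, message) pairs, then reduce messages per destination vertex.
  //   graph.mapReduceTriplets[Int](
  //     t => if (t.srcAttr > t.dstAttr) Iterator((t.dstId, 1)) else Iterator.empty,
  //     (a, b) => a + b)

  // Current style: send messages through an EdgeContext, then merge them per vertex.
  graph.aggregateMessages[Int](
    ctx => if (ctx.srcAttr > ctx.dstAttr) ctx.sendToDst(1),
    (a, b) => a + b)
}
{% endhighlight %}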
However, we found the use of the returned iterator to be expensive and it inhibited our ability to