author    hyukjinkwon <gurwls223@gmail.com>  2017-01-17 12:28:15 +0000
committer Sean Owen <sowen@cloudera.com>     2017-01-17 12:28:15 +0000
commit    6c00c069e3c3f5904abd122cea1d56683031cca0
tree      3d7a64d4cbedd007a217d0387919ce1fe72a0d2e /graphx
parent    0019005a2d0f150fd00ad926d054a8beca4bbd68
[SPARK-3249][DOC] Fix links in ScalaDoc that cause warning messages in `sbt/sbt unidoc`
## What changes were proposed in this pull request?
This PR proposes to fix the ambiguous-link warnings by simply turning the links into code blocks, for both javadoc and scaladoc.
```
[warn] .../spark/core/src/main/scala/org/apache/spark/Accumulator.scala:20: The link target "SparkContext#accumulator" is ambiguous. Several members fit the target:
[warn] .../spark/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala:281: The link target "runMiniBatchSGD" is ambiguous. Several members fit the target:
[warn] .../spark/mllib/src/main/scala/org/apache/spark/mllib/fpm/AssociationRules.scala:83: The link target "run" is ambiguous. Several members fit the target:
...
```
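The fix pattern behind these warnings is small: wherever a Scaladoc `[[...]]` link points at an overloaded member, Scaladoc cannot pick a single target, so the reference is rewritten as plain backticked code instead. A minimal sketch of the situation, using a hypothetical `Wrapper` object that is not part of the patch:

```scala
// Hypothetical sketch: `add` is overloaded, so a Scaladoc link written as
// [[Wrapper.add]] would be ambiguous ("Several members fit the target").
// The PR's fix style is to write such references as plain code instead:
object Wrapper {
  /** Increments an Int. See also `Wrapper.add` for the Long overload. */
  def add(x: Int): Int = x + 1

  /** Increments a Long. */
  def add(x: Long): Long = x + 1L
}
```

With backticks, the name is rendered as monospace text and no link resolution is attempted, so the warning disappears without losing the reference in the prose.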
This PR also fixes javadoc8 break as below:
```
[error] .../spark/sql/core/target/java/org/apache/spark/sql/LowPrioritySQLImplicits.java:7: error: reference not found
[error] * newProductEncoder - to disambiguate for {link List}s which are both {link Seq} and {link Product}
[error] ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/LowPrioritySQLImplicits.java:7: error: reference not found
[error] * newProductEncoder - to disambiguate for {link List}s which are both {link Seq} and {link Product}
[error] ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/LowPrioritySQLImplicits.java:7: error: reference not found
[error] * newProductEncoder - to disambiguate for {link List}s which are both {link Seq} and {link Product}
[error] ^
[info] 3 errors
```
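The javadoc8 break arises in the generated Java sources: as the errors above show, a Scaladoc link such as `[[List]]` is carried through to the generated `.java` file as a javadoc link tag, which javadoc 8 rejects when it cannot resolve the reference. Backticked names, by contrast, come through as plain monospace text that javadoc never tries to resolve. A sketch of the safe doc style, with an invented trait name (this is not the actual `LowPrioritySQLImplicits` code):

```scala
// Hypothetical sketch of the javadoc8-safe style: `List`, `Seq`, and
// `Product` are written as code, not as [[...]] links, so the doc text
// survives the Scala-to-Java doc conversion without unresolved references.
trait ImplicitsDocSketch {
  /**
   * newProductEncoder - to disambiguate for `List`s which are both `Seq`
   * and `Product`.
   */
  def newProductEncoder: String
}

object ImplicitsDocSketch extends ImplicitsDocSketch {
  def newProductEncoder: String = "product-encoder"
}
```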
## How was this patch tested?
Manually, via `sbt unidoc > output.txt`, then checked with `cat output.txt | grep ambiguous`
and `sbt unidoc | grep error`.
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #16604 from HyukjinKwon/SPARK-3249.
Diffstat (limited to 'graphx')
 graphx/src/main/scala/org/apache/spark/graphx/Graph.scala    | 2 +-
 graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/graphx/src/main/scala/org/apache/spark/graphx/Graph.scala b/graphx/src/main/scala/org/apache/spark/graphx/Graph.scala
index c55a5885ba..b3a3420b84 100644
--- a/graphx/src/main/scala/org/apache/spark/graphx/Graph.scala
+++ b/graphx/src/main/scala/org/apache/spark/graphx/Graph.scala
@@ -331,7 +331,7 @@ abstract class Graph[VD: ClassTag, ED: ClassTag] protected () extends Serializab
   /**
    * Merges multiple edges between two vertices into a single edge. For correct results, the graph
-   * must have been partitioned using [[partitionBy]].
+   * must have been partitioned using `partitionBy`.
    *
    * @param merge the user-supplied commutative associative function to merge edge attributes
    *              for duplicate edges.
diff --git a/graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala b/graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala
index 90907300be..475bccf9bf 100644
--- a/graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala
+++ b/graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala
@@ -428,7 +428,7 @@ class GraphOps[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]) extends Seriali
    * Compute the connected component membership of each vertex and return a graph with the vertex
    * value containing the lowest vertex id in the connected component containing that vertex.
    *
-   * @see [[org.apache.spark.graphx.lib.ConnectedComponents$#run]]
+   * @see `org.apache.spark.graphx.lib.ConnectedComponents.run`
    */
   def connectedComponents(): Graph[VertexId, ED] = {
     ConnectedComponents.run(graph)
@@ -438,7 +438,7 @@ class GraphOps[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]) extends Seriali
    * Compute the connected component membership of each vertex and return a graph with the vertex
    * value containing the lowest vertex id in the connected component containing that vertex.
    *
-   * @see [[org.apache.spark.graphx.lib.ConnectedComponents$#run]]
+   * @see `org.apache.spark.graphx.lib.ConnectedComponents.run`
    */
   def connectedComponents(maxIterations: Int): Graph[VertexId, ED] = {
     ConnectedComponents.run(graph, maxIterations)
```