author    Anatoli Fomenko <fa@apache.org>        2014-06-16 23:10:36 -0700
committer Xiangrui Meng <meng@databricks.com>    2014-06-16 23:10:36 -0700
commit    7afa912e747c77ebfd10bddf7bda2e3190fdeb9c (patch)
tree      c1032f139f1f14f95684375f62b80b37b12ffa5e
parent    237b96bc59ab1b54c31d06a5260cd77e1eb96116 (diff)
MLlib documentation fix
Synchronized mllib-optimization.md with the Spark Scaladoc: removed the reference to the GradientDescent.runMiniBatchSGD method.

This is a temporary fix that removes the link on http://spark.apache.org/docs/latest/mllib-optimization.html to GradientDescent.runMiniBatchSGD, which does not appear in the current online GradientDescent Scaladoc.
FIXME: revert this commit after GradientDescent Scaladoc is updated.
See images for details.
![mllib-docs-fix-1](https://cloud.githubusercontent.com/assets/1375501/3294410/ccf19bb8-f5a8-11e3-93f1-f593016209eb.png)
![mllib-docs-fix-2](https://cloud.githubusercontent.com/assets/1375501/3294411/d0b59a7e-f5a8-11e3-8fc8-329c177ef8c8.png)
Author: Anatoli Fomenko <fa@apache.org>
Closes #1098 from afomenko/master and squashes the following commits:
5cb0758 [Anatoli Fomenko] MLlib documentation fix
-rw-r--r--  docs/mllib-optimization.md  8
1 file changed, 4 insertions(+), 4 deletions(-)
```diff
diff --git a/docs/mllib-optimization.md b/docs/mllib-optimization.md
index 97e8f4e966..ae9ede58e8 100644
--- a/docs/mllib-optimization.md
+++ b/docs/mllib-optimization.md
@@ -147,9 +147,9 @@
 are developed, see the <a href="mllib-linear-methods.html">linear methods</a>
 section for example.
 
-The SGD method
-[GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
-has the following parameters:
+The SGD class
+[GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+sets the following parameters:
 
 * `Gradient` is a class that computes the stochastic gradient of the function
   being optimized, i.e., with respect to a single training example, at the
@@ -171,7 +171,7 @@
 each iteration, to compute the gradient direction.
 
 Available algorithms for gradient descent:
 
-* [GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+* [GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
 
 ### L-BFGS
 L-BFGS is currently only a low-level optimization primitive in `MLlib`. If you
 want to use L-BFGS in various
```
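For context, the `Gradient` and `Updater` parameters the hunk describes are the arguments of `GradientDescent.runMiniBatchSGD`, the developer-level entry point whose Scaladoc link this commit removes. A minimal sketch of how that call is wired together (not part of this commit; assumes a Spark 1.0-era MLlib on the classpath and a running `SparkContext` — the data set and step-size values here are illustrative only):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SimpleUpdater}

object SgdSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("sgd-sketch").setMaster("local"))

    // Toy training set: (label, features) pairs.
    val data = sc.parallelize(Seq(
      (1.0, Vectors.dense(1.2, 0.5)),
      (0.0, Vectors.dense(-0.3, -1.1)),
      (1.0, Vectors.dense(0.9, 0.8)),
      (0.0, Vectors.dense(-0.7, -0.4))))

    // Returns the final weight vector and the per-iteration loss history.
    val (weights, lossHistory) = GradientDescent.runMiniBatchSGD(
      data,
      new LogisticGradient(),     // Gradient: stochastic gradient of logistic loss
      new SimpleUpdater(),        // Updater: plain gradient step, no regularization
      1.0,                        // stepSize
      100,                        // numIterations
      0.0,                        // regParam
      1.0,                        // miniBatchFraction (1.0 = full batch)
      Vectors.dense(0.0, 0.0))    // initialWeights

    println(s"weights: $weights, final loss: ${lossHistory.last}")
    sc.stop()
  }
}
```

Most users would not call this directly; the higher-level learners (e.g. `LogisticRegressionWithSGD`) construct the `Gradient`/`Updater` pair themselves, which is why the public docs link only the `GradientDescent` class.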