Diffstat (limited to 'docs/mllib-optimization.md')
-rw-r--r--  docs/mllib-optimization.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/mllib-optimization.md b/docs/mllib-optimization.md
index 97e8f4e966..ae9ede58e8 100644
--- a/docs/mllib-optimization.md
+++ b/docs/mllib-optimization.md
@@ -147,9 +147,9 @@ are developed, see the
<a href="mllib-linear-methods.html">linear methods</a>
section, for example.
-The SGD method
-[GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
-has the following parameters:
+The SGD class
+[GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+sets the following parameters:
* `Gradient` is a class that computes the stochastic gradient of the function
being optimized, i.e., with respect to a single training example, at the
@@ -171,7 +171,7 @@ each iteration, to compute the gradient direction.
Available algorithms for gradient descent:
-* [GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+* [GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
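
For readers following this change, here is a minimal sketch of driving the `GradientDescent` object directly through `runMiniBatchSGD`. The data path, the bias-appending step, and all numeric settings are illustrative assumptions, not part of this patch:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
import org.apache.spark.mllib.util.MLUtils

// Illustrative sketch: assumes an existing SparkContext `sc` and the
// sample LIBSVM file shipped with Spark; the numeric settings are
// placeholders, not tuned recommendations.
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")

// runMiniBatchSGD expects (label, features) pairs; append a bias term
// so the model can learn an intercept.
val training = data.map(p => (p.label, MLUtils.appendBias(p.features))).cache()
val numFeatures = training.first()._2.size

val (weights, lossHistory) = GradientDescent.runMiniBatchSGD(
  training,
  new LogisticGradient(),   // Gradient: per-example stochastic gradient
  new SquaredL2Updater(),   // Updater: gradient step with L2 regularization
  1.0,    // stepSize
  100,    // numIterations
  0.1,    // regParam
  1.0,    // miniBatchFraction (1.0 = use the full data set each iteration)
  Vectors.dense(new Array[Double](numFeatures)))  // initial weights (zeros)
```

The returned pair holds the final weight vector and the per-iteration loss history, which is useful for checking convergence of the chosen step size.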
### L-BFGS
L-BFGS is currently only a low-level optimization primitive in `MLlib`. If you want to use L-BFGS in various
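
Because L-BFGS is exposed only as this low-level primitive, it is invoked the same way: a sketch of calling `LBFGS.runLBFGS` directly, reusing the hypothetical `training` RDD and `numFeatures` from the SGD sketch above, with placeholder settings:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.{LBFGS, LogisticGradient, SquaredL2Updater}

// Assumes `training: RDD[(Double, Vector)]` and `numFeatures` as
// prepared in the SGD sketch; all settings below are placeholders.
val (weightsLBFGS, lossHistoryLBFGS) = LBFGS.runLBFGS(
  training,
  new LogisticGradient(),
  new SquaredL2Updater(),
  10,     // numCorrections: history size for the Hessian approximation
  1e-4,   // convergenceTol
  20,     // maxNumIterations
  0.1,    // regParam
  Vectors.dense(new Array[Double](numFeatures)))  // initial weights (zeros)
```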