| author | Sean Owen <sowen@cloudera.com> | 2014-05-06 20:07:22 -0700 |
|---|---|---|
| committer | Patrick Wendell <pwendell@gmail.com> | 2014-05-06 20:07:36 -0700 |
| commit | 514ee93daf20ee0778a5894042b294c437a80ed8 (patch) | |
| tree | bba9a3ec197143a557f1efdba27e6a4b5d8f5b41 /docs/mllib-linear-methods.md | |
| parent | 8cfebf5bf2ada0ba62b9b0000c8ef9f28fa6b267 (diff) | |
SPARK-1727. Correct small compile errors, typos, and markdown issues in (primarily) MLlib docs
While play-testing the Scala and Java code examples in the MLlib docs, I noticed a number of small compile errors, and some typos. This led to finding and fixing a few similar items in other docs.
Then in the course of building the site docs to check the result, I found a few small suggestions for the build instructions. I also found a few more formatting and markdown issues uncovered when I accidentally used maruku instead of kramdown.
Author: Sean Owen <sowen@cloudera.com>
Closes #653 from srowen/SPARK-1727 and squashes the following commits:
6e7c38a [Sean Owen] Final doc updates - one more compile error, and use of mean instead of sum and count
8f5e847 [Sean Owen] Fix markdown syntax issues that maruku flags, even though we use kramdown (but only those that do not affect kramdown's output)
99966a9 [Sean Owen] Update issue tracker URL in docs
23c9ac3 [Sean Owen] Add Scala Naive Bayes example, to use existing example data file (whose format needed a tweak)
8c81982 [Sean Owen] Fix small compile errors and typos across MLlib docs
(cherry picked from commit 25ad8f93012730115a8a1fac649fe3e842c045b3)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
Diffstat (limited to 'docs/mllib-linear-methods.md')
| -rw-r--r-- | docs/mllib-linear-methods.md | 13 |
1 file changed, 7 insertions, 6 deletions
diff --git a/docs/mllib-linear-methods.md b/docs/mllib-linear-methods.md
index ebb555f974..40b7a7f807 100644
--- a/docs/mllib-linear-methods.md
+++ b/docs/mllib-linear-methods.md
@@ -63,7 +63,7 @@ methods MLlib supports:
   <tbody>
     <tr>
       <td>hinge loss</td><td>$\max \{0, 1-y \wv^T \x \}, \quad y \in \{-1, +1\}$</td>
-      <td>$\begin{cases}-y \cdot \x & \text{if $y \wv^T \x <1$}, \\ 0 &
+      <td>$\begin{cases}-y \cdot \x & \text{if $y \wv^T \x <1$}, \\ 0 &
       \text{otherwise}.\end{cases}$</td>
     </tr>
     <tr>
@@ -225,10 +225,11 @@ algorithm for 200 iterations.
 import org.apache.spark.mllib.optimization.L1Updater
 
 val svmAlg = new SVMWithSGD()
-svmAlg.optimizer.setNumIterations(200)
-  .setRegParam(0.1)
-  .setUpdater(new L1Updater)
-val modelL1 = svmAlg.run(parsedData)
+svmAlg.optimizer.
+  setNumIterations(200).
+  setRegParam(0.1).
+  setUpdater(new L1Updater)
+val modelL1 = svmAlg.run(training)
 {% endhighlight %}
 
 Similarly, you can use replace `SVMWithSGD` by
@@ -322,7 +323,7 @@
 val valuesAndPreds = parsedData.map { point =>
   val prediction = model.predict(point.features)
   (point.label, prediction)
 }
-val MSE = valuesAndPreds.map{case(v, p) => math.pow((v - p), 2)}.reduce(_ + _) / valuesAndPreds.count
+val MSE = valuesAndPreds.map{case(v, p) => math.pow((v - p), 2)}.mean()
 println("training Mean Squared Error = " + MSE)
 {% endhighlight %}
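One of the fixes above rewrites the SVM example so that each line of the setter chain ends with a dot rather than starting with one. The reason is Scala's semicolon inference: a line that already parses as a complete expression is terminated, so a chain continued with a leading dot on the next line can break when the snippet is pasted line-by-line into the REPL. A minimal sketch with a hypothetical builder class (standing in for `SVMWithSGD`'s optimizer, which is not available outside Spark) illustrates the style:

```scala
// Hypothetical stand-in for an optimizer-style builder; the names mirror the
// doc example but this class is illustrative only, not part of Spark.
class Config {
  var iters = 0
  var reg = 0.0
  def setNumIterations(n: Int): this.type = { iters = n; this }
  def setRegParam(r: Double): this.type = { reg = r; this }
}

// Ending each line with the dot keeps the expression syntactically open,
// so the chain survives line-by-line evaluation in the REPL.
val conf = new Config().
  setNumIterations(200).
  setRegParam(0.1)

println(conf.iters)  // 200
```

An alternative that also works is wrapping the whole chain in parentheses, but the trailing-dot form matches what the patch chose for the docs.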
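The final hunk replaces a hand-rolled `reduce(_ + _) / count` with `mean()` when computing the mean squared error; on a Spark `RDD[Double]`, `mean()` computes exactly that average in one pass. A small sketch on a plain Scala collection (hypothetical sample data, since building an `RDD` needs a `SparkContext`) shows the two forms agree:

```scala
// Hypothetical (label, prediction) pairs, standing in for valuesAndPreds.
val valuesAndPreds = Seq((3.0, 2.5), (0.5, 0.0), (2.0, 2.0), (7.0, 8.0))

// Squared error for each pair.
val squaredErrors = valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }

// Old form from the docs: explicit sum divided by count.
val mseManual = squaredErrors.reduce(_ + _) / squaredErrors.size

// The patched docs use RDD's mean(); on a plain collection the
// equivalent is sum divided by size.
val mseMean = squaredErrors.sum / squaredErrors.size

println(mseManual == mseMean)  // true
```

Beyond brevity, `mean()` on an RDD avoids a second job to compute `count`, since sum and count are accumulated together.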