author | Xin Ren &lt;iamshrek@126.com&gt; | 2015-10-07 15:00:19 +0100
---|---|---
committer | Sean Owen &lt;sowen@cloudera.com&gt; | 2015-10-07 15:00:19 +0100
commit | 27cdde2ff87346fb54318532a476bf85f5837da7 (patch) |
tree | a03cd037bae9a3bec8d13bfc43d33a82eeb6454b | /docs/mllib-collaborative-filtering.md
parent | ffe6831e49e28eb855f857fdfa5dd99341e80c9d (diff) |
[SPARK-10669] [DOCS] Link to each language's API in codetabs in ML docs: spark.mllib
In the Markdown docs for the spark.mllib Programming Guide, we have code examples with codetabs for each language. We should link to each language's API docs within the corresponding codetab, but we are inconsistent about this. For an example of what we want to do, see the "ChiSqSelector" section in https://github.com/apache/spark/blob/64743870f23bffb8d96dcc8a0181c1452782a151/docs/mllib-feature-extraction.md
This JIRA is just for spark.mllib, not spark.ml.
Please let me know if more work is needed, thanks a lot.
Author: Xin Ren <iamshrek@126.com>
Closes #8977 from keypointt/SPARK-10669.
Diffstat (limited to 'docs/mllib-collaborative-filtering.md')
-rw-r--r-- | docs/mllib-collaborative-filtering.md | 6
1 file changed, 6 insertions, 0 deletions
diff --git a/docs/mllib-collaborative-filtering.md b/docs/mllib-collaborative-filtering.md
index eedc23424a..b3fd51dca5 100644
--- a/docs/mllib-collaborative-filtering.md
+++ b/docs/mllib-collaborative-filtering.md
@@ -64,6 +64,8 @@ We use the default [ALS.train()](api/scala/index.html#org.apache.spark.mllib.rec
 method which assumes ratings are explicit. We evaluate the
 recommendation model by measuring the Mean Squared Error of rating prediction.
 
+Refer to the [`ALS` Scala docs](api/scala/index.html#org.apache.spark.mllib.recommendation.ALS) for details on the API.
+
 {% highlight scala %}
 import org.apache.spark.mllib.recommendation.ALS
 import org.apache.spark.mllib.recommendation.MatrixFactorizationModel
@@ -119,6 +121,8 @@ Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a
 calling `.rdd()` on your `JavaRDD` object. A self-contained application example
 that is equivalent to the provided example in Scala is given bellow:
 
+Refer to the [`ALS` Java docs](api/java/org/apache/spark/mllib/recommendation/ALS.html) for details on the API.
+
 {% highlight java %}
 import scala.Tuple2;
 
@@ -201,6 +205,8 @@ In the following example we load rating data. Each row consists of a user, a pro
 We use the default ALS.train() method which assumes ratings are explicit. We evaluate the
 recommendation by measuring the Mean Squared Error of rating prediction.
 
+Refer to the [`ALS` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) for more details on the API.
+
 {% highlight python %}
 from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
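For context, the pattern this commit applies looks roughly like the following codetab skeleton used in the Spark Markdown docs, where each language's tab opens with a link to that language's API docs before its highlighted example. This is a sketch of the structure only; the placeholder comments stand in for the actual example code in the guide.

```
<div class="codetabs">

<div data-lang="scala" markdown="1">
Refer to the [`ALS` Scala docs](api/scala/index.html#org.apache.spark.mllib.recommendation.ALS) for details on the API.

{% highlight scala %}
// Scala example code for this tab goes here
{% endhighlight %}
</div>

<div data-lang="java" markdown="1">
Refer to the [`ALS` Java docs](api/java/org/apache/spark/mllib/recommendation/ALS.html) for details on the API.

{% highlight java %}
// Java example code for this tab goes here
{% endhighlight %}
</div>

<div data-lang="python" markdown="1">
Refer to the [`ALS` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) for more details on the API.

{% highlight python %}
# Python example code for this tab goes here
{% endhighlight %}
</div>

</div>
```

Keeping the API link inside the corresponding `data-lang` div means readers only ever see the link for the language tab they have selected, which is the consistency this JIRA asks for across the spark.mllib guide pages.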