path: root/docs/ml-guide.md
author    BenFradet <benjamin.fradet@gmail.com>  2016-01-06 12:01:05 -0800
committer Joseph K. Bradley <joseph@databricks.com>  2016-01-06 12:01:05 -0800
commit    f82ebb15224ec5375f25f67d598ec3ef1cb65210 (patch)
tree      3edc4fffa943a8298f71105ff15264317dfe9384 /docs/ml-guide.md
parent    fcd013cf70e7890aa25a8fe3cb6c8b36bf0e1f04 (diff)
[SPARK-12368][ML][DOC] Better doc for the binary classification evaluator's metricName
For the BinaryClassificationEvaluator, the scaladoc doesn't mention that "areaUnderPR" is supported, only that the default is "areaUnderROC". Also, the documentation states: "The default metric used to choose the best ParamMap can be overriden by the setMetric method in each of these evaluators." However, the method is called setMetricName. This PR aims to fix both issues.

Author: BenFradet <benjamin.fradet@gmail.com>

Closes #10328 from BenFradet/SPARK-12368.
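The corrected behavior can be illustrated with a short sketch: `BinaryClassificationEvaluator` defaults to "areaUnderROC", and it is `setMetricName` (not `setMetric`) that switches it to "areaUnderPR". The `predictions` DataFrame and its column names below are assumptions for illustration, not part of this commit.

```scala
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

// Sketch only: assumes an existing DataFrame `predictions` with the
// conventional "rawPrediction" and "label" columns produced by a classifier.
val evaluator = new BinaryClassificationEvaluator()
  .setLabelCol("label")
  .setRawPredictionCol("rawPrediction")

// The default metric is "areaUnderROC".
val aucROC = evaluator.evaluate(predictions)

// "areaUnderPR" is also supported; setMetricName overrides the default.
val aucPR = evaluator
  .setMetricName("areaUnderPR")
  .evaluate(predictions)
```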
Diffstat (limited to 'docs/ml-guide.md')
-rw-r--r--  docs/ml-guide.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/ml-guide.md b/docs/ml-guide.md
index 44a316a07d..1343753bce 100644
--- a/docs/ml-guide.md
+++ b/docs/ml-guide.md
@@ -628,7 +628,7 @@ Currently, `spark.ml` supports model selection using the [`CrossValidator`](api/
The `Evaluator` can be a [`RegressionEvaluator`](api/scala/index.html#org.apache.spark.ml.evaluation.RegressionEvaluator)
for regression problems, a [`BinaryClassificationEvaluator`](api/scala/index.html#org.apache.spark.ml.evaluation.BinaryClassificationEvaluator)
for binary data, or a [`MultiClassClassificationEvaluator`](api/scala/index.html#org.apache.spark.ml.evaluation.MultiClassClassificationEvaluator)
-for multiclass problems. The default metric used to choose the best `ParamMap` can be overriden by the `setMetric`
+for multiclass problems. The default metric used to choose the best `ParamMap` can be overriden by the `setMetricName`
method in each of these evaluators.
The `ParamMap` which produces the best evaluation metric (averaged over the `$k$` folds) is selected as the best model.
@@ -951,4 +951,4 @@ model.transform(test)
{% endhighlight %}
</div>
-</div> \ No newline at end of file
+</div>