field | value | |
---|---|---|
author | Yanbo Liang <ybliang8@gmail.com> | 2016-02-29 00:55:51 -0800 |
committer | DB Tsai <dbt@netflix.com> | 2016-02-29 00:55:51 -0800 |
commit | d81a71357e24160244b6eeff028b0d9a4863becf (patch) | |
tree | 0d5f6bdde7ce4edbe45883a908d80ab292845eb6 /sbt | |
parent | dd3b5455c61bddce96a94c2ce8f5d76ed4948ea1 (diff) | |
download | spark-d81a71357e24160244b6eeff028b0d9a4863becf.tar.gz spark-d81a71357e24160244b6eeff028b0d9a4863becf.tar.bz2 spark-d81a71357e24160244b6eeff028b0d9a4863becf.zip |
[SPARK-13545][MLLIB][PYSPARK] Make MLlib LogisticRegressionWithLBFGS's default parameters consistent in Scala and Python
## What changes were proposed in this pull request?
* The default value of ```regParam``` in PySpark MLlib's ```LogisticRegressionWithLBFGS``` should be consistent with Scala, which is ```0.0```. (This is also consistent with ML ```LogisticRegression```.)
* BTW, if we use a known updater (L1 or L2) for binary classification, ```LogisticRegressionWithLBFGS``` will call the ML implementation. We should update the API doc to clarify that ```numCorrections``` will have no effect if we fall into that route.
* Made a pass over all parameters of ```LogisticRegressionWithLBFGS```; the others are already set properly.
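A minimal pure-Python sketch (not Spark's actual implementation) of why the ```regParam``` default matters: with ```regParam = 0.0```, the Scala default that this change aligns Python with, the L2 penalty term vanishes and the objective reduces to the plain logistic loss. The ```logistic_loss``` helper and the toy data below are hypothetical, for illustration only.

```python
import math

def logistic_loss(weights, data, reg_param=0.0):
    """Mean logistic loss over (features, label) pairs, plus an
    L2 penalty scaled by reg_param (zero penalty when reg_param=0.0)."""
    total = 0.0
    for features, label in data:
        margin = sum(w * x for w, x in zip(weights, features))
        # label in {0, 1}; log(1 + exp(-z)) form of the logistic loss
        z = margin if label == 1 else -margin
        total += math.log1p(math.exp(-z))
    penalty = 0.5 * reg_param * sum(w * w for w in weights)
    return total / len(data) + penalty

data = [([1.0, 2.0], 1), ([-1.0, -0.5], 0)]
w = [0.3, -0.1]
unregularized = logistic_loss(w, data)             # default reg_param=0.0
regularized = logistic_loss(w, data, reg_param=0.01)
# For any nonzero weight vector, regularized > unregularized,
# so a silently different default changes the optimum being sought.
```

This is why a Python default of ```0.01``` while Scala uses ```0.0``` would make the two APIs silently optimize different objectives on the same data.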
cc mengxr dbtsai
## How was this patch tested?
No new tests; it should pass all current tests.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #11424 from yanboliang/spark-13545.
Diffstat (limited to 'sbt')
0 files changed, 0 insertions, 0 deletions