author:    hyukjinkwon <gurwls223@gmail.com>  2016-11-22 11:40:18 +0000
committer: Sean Owen <sowen@cloudera.com>     2016-11-22 11:40:18 +0000
commit:    933a6548d423cf17448207a99299cf36fc1a95f6 (patch)
tree:      8244d8b993bce2cb023d0ad9dcaf037f34cb7378 /python/pyspark/ml/feature.py
parent:    4922f9cdcac8b7c10320ac1fb701997fffa45d46 (diff)
[SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across Python API documentation
## What changes were proposed in this pull request?
It seems that in Python, notes are written in several inconsistent forms:

- `Note:`
- `NOTE:`
- `Note that`
- `.. note::`

This PR proposes to fix all of them to the Sphinx `.. note::` directive so the rendered API documentation is consistent.
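For illustration, a minimal sketch of the docstring style before and after the change (the function names here are hypothetical, not taken from the PR):

```python
def transform_before(v):
    """Scale a vector.

    Note: the output will be a DenseVector even for sparse input.
    """

def transform_after(v):
    """Scale a vector.

    .. note:: The output will be a DenseVector even for sparse input.
    """

# The prose-style marker stays plain text in Sphinx output, while the
# directive renders as a highlighted admonition box.
print("Note:" in transform_before.__doc__)      # True
print(".. note::" in transform_after.__doc__)   # True
```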
**Before**
<img width="567" alt="2016-11-21 1 18 49" src="https://cloud.githubusercontent.com/assets/6477701/20464305/85144c86-af88-11e6-8ee9-90f584dd856c.png">
<img width="617" alt="2016-11-21 12 42 43" src="https://cloud.githubusercontent.com/assets/6477701/20464263/27be5022-af88-11e6-8577-4bbca7cdf36c.png">
**After**
<img width="554" alt="2016-11-21 1 18 42" src="https://cloud.githubusercontent.com/assets/6477701/20464306/8fe48932-af88-11e6-83e1-fc3cbf74407d.png">
<img width="628" alt="2016-11-21 12 42 51" src="https://cloud.githubusercontent.com/assets/6477701/20464264/2d3e156e-af88-11e6-93f3-cab8d8d02983.png">
## How was this patch tested?
The notes were found via
```bash
grep -r "Note: " .
grep -r "NOTE: " .
grep -r "Note that " .
```
Each occurrence was then fixed one by one, comparing the result against the rendered API documentation.
After that, the documentation was manually rebuilt and verified via `make html` under `./python/docs`.
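The grep-based search above can also be sketched as a small Python check over a docstring (the helper name and regex are illustrative, not part of the PR):

```python
import re

# Flags prose-style note markers that should instead be the ``.. note::``
# directive. The lowercase directive itself does not match.
INCONSISTENT = re.compile(r"\b(Note:|NOTE:|Note that)\s")

def find_inconsistent_notes(docstring):
    """Return the prose-style note markers found in a docstring."""
    return INCONSISTENT.findall(docstring or "")

before = "Note: null values from the input array are preserved."
after = ".. note:: null values from the input array are preserved."
print(find_inconsistent_notes(before))  # ['Note:']
print(find_inconsistent_notes(after))   # []
```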
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #15947 from HyukjinKwon/SPARK-18447.
Diffstat (limited to 'python/pyspark/ml/feature.py')
-rwxr-xr-x  python/pyspark/ml/feature.py | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
```diff
diff --git a/python/pyspark/ml/feature.py b/python/pyspark/ml/feature.py
index 635cf13045..40b63d4d31 100755
--- a/python/pyspark/ml/feature.py
+++ b/python/pyspark/ml/feature.py
@@ -742,8 +742,8 @@ class MinMaxScaler(JavaEstimator, HasInputCol, HasOutputCol, JavaMLReadable, Jav
     For the case E_max == E_min, Rescaled(e_i) = 0.5 * (max + min)
 
-    Note that since zero values will probably be transformed to non-zero values, output of the
-    transformer will be DenseVector even for sparse input.
+    .. note:: Since zero values will probably be transformed to non-zero values, output of the
+        transformer will be DenseVector even for sparse input.
 
     >>> from pyspark.ml.linalg import Vectors
     >>> df = spark.createDataFrame([(Vectors.dense([0.0]),), (Vectors.dense([2.0]),)], ["a"])
@@ -1014,9 +1014,9 @@ class OneHotEncoder(JavaTransformer, HasInputCol, HasOutputCol, JavaMLReadable,
     :py:attr:`dropLast`) because it makes the vector entries sum up to one, and hence
     linearly dependent.
     So an input value of 4.0 maps to `[0.0, 0.0, 0.0, 0.0]`.
-    Note that this is different from scikit-learn's OneHotEncoder,
-    which keeps all categories.
-    The output vectors are sparse.
+
+    .. note:: This is different from scikit-learn's OneHotEncoder,
+        which keeps all categories. The output vectors are sparse.
 
     .. seealso::
@@ -1698,7 +1698,8 @@ class IndexToString(JavaTransformer, HasInputCol, HasOutputCol, JavaMLReadable,
 class StopWordsRemover(JavaTransformer, HasInputCol, HasOutputCol, JavaMLReadable, JavaMLWritable):
     """
     A feature transformer that filters out stop words from input.
-    Note: null values from input array are preserved unless adding null to stopWords explicitly.
+
+    .. note:: null values from input array are preserved unless adding null to stopWords explicitly.
 
     >>> df = spark.createDataFrame([(["a", "b", "c"],)], ["text"])
     >>> remover = StopWordsRemover(inputCol="text", outputCol="words", stopWords=["b"])
```
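The simple cases in the diff can be rewritten mechanically, as sketched below (a hypothetical helper; the PR itself made the edits by hand, since cases like `Note that ...` also need rewording and re-capitalization to read well after the directive):

```python
import re

def rewrite_note(line):
    """Rewrite a prose-style ``Note:``/``NOTE:`` line into the Sphinx
    ``.. note::`` directive, preserving the original indentation."""
    return re.sub(r"^(\s*)(?:Note:|NOTE:)\s*", r"\1.. note:: ", line)

print(rewrite_note("    Note: null values from input array are preserved."))
# "    .. note:: null values from input array are preserved."
```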