| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
|
|
|
| |
The code is equivalent, but a comprehension is most of the time faster than a call to `map()`.
Author: Gaetan Semet <gaetan@xeberon.net>
Closes #14863 from Stibbons/map_comprehension.
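A hedged sketch of the kind of change this describes: in Python, replacing `map()` plus a `list()` wrapper with a comprehension avoids per-element lambda call overhead. The function names below are illustrative, not from the PR.

```python
# Two equivalent ways to square a list of numbers.
# The comprehension skips the per-call overhead of the lambda
# that map() incurs, which is why it is usually a bit faster.
def squares_with_map(xs):
    return list(map(lambda x: x * x, xs))

def squares_with_comprehension(xs):
    return [x * x for x in xs]

print(squares_with_map([1, 2, 3]))            # [1, 4, 9]
print(squares_with_comprehension([1, 2, 3]))  # [1, 4, 9]
```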
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
withMean=True
## What changes were proposed in this pull request?
Allow centering / mean scaling of sparse vectors in StandardScaler, if requested. This is for compatibility with `VectorAssembler` in common usages.
## How was this patch tested?
Jenkins tests, including new cases to reflect the new behavior.
Author: Sean Owen <sowen@cloudera.com>
Closes #14663 from srowen/SPARK-17001.
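The trade-off being enabled can be shown without Spark: subtracting a non-zero column mean turns stored zeros into non-zeros, which is why centering sparse vectors is normally avoided. A minimal plain-Python sketch (illustrative, not StandardScaler's implementation):

```python
def center(row, mean):
    """Subtract the per-column mean; any stored zero becomes non-zero
    whenever that column's mean is non-zero, destroying sparsity."""
    return [v - m for v, m in zip(row, mean)]

rows = [[1.0, 0.0], [3.0, 4.0]]                       # the 0.0 is a "sparse" slot
mean = [sum(col) / len(rows) for col in zip(*rows)]   # [2.0, 2.0]
centered = [center(r, mean) for r in rows]
print(centered)  # [[-1.0, -2.0], [1.0, 2.0]] (the zero is now -2.0)
```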
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Improve example outputs to better reflect the functionality being presented. This mostly consisted of modifying what is printed at the end of each example, such as calling show() with truncate=False, but sometimes required minor tweaks to the example data to get relevant output. Explicitly set parameters when they are used as part of the example. Fixed Java examples that failed to run because they used old-style MLlib Vectors or had problems with the schema. Synced examples between the different APIs.
## How was this patch tested?
Ran each example for Scala, Python, and Java and made sure output was legible on a terminal of width 100.
Author: Bryan Cutler <cutlerb@gmail.com>
Closes #14308 from BryanCutler/ml-examples-improve-output-SPARK-16260.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
This PR makes various minor updates to examples of all language bindings to make sure they are consistent with each other. Some typos and missing parts (JDBC example in Scala/Java/Python) are also fixed.
## How was this patch tested?
Manually tested.
Author: Cheng Lian <lian@databricks.com>
Closes #14368 from liancheng/revise-examples.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
language binding
This PR is based on PR #14098 authored by wangmiao1981.
## What changes were proposed in this pull request?
This PR replaces the original Python Spark SQL example file with the following three files:
- `sql/basic.py`
Demonstrates basic Spark SQL features.
- `sql/datasource.py`
Demonstrates various Spark SQL data sources.
- `sql/hive.py`
Demonstrates Spark SQL Hive interaction.
This PR also removes hard-coded Python example snippets in the SQL programming guide by extracting snippets from the above files using the `include_example` Liquid template tag.
## How was this patch tested?
Manually tested.
Author: wm624@hotmail.com <wm624@hotmail.com>
Author: Cheng Lian <lian@databricks.com>
Closes #14317 from liancheng/py-examples-update.
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
The second arg of method `update()` is never used, so I deleted it.
## How was this patch tested?
local run with `./bin/spark-submit examples/src/main/python/als.py`
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #14247 from zhengruifeng/als_refine.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
minor fixes
## What changes were proposed in this pull request?
Cleanup of examples, mostly from PySpark-ML, to fix minor issues: unused imports, style consistency, a duplicated pipeline_example, use of the future print function, and a spelling error.
* The "Pipeline Example" is duplicated by "Simple Text Classification Pipeline" in Scala, Python, and Java.
* The "Estimator Transformer Param Example" is duplicated by "Simple Params Example" in Scala, Python and Java.
* Synced random_forest_classifier_example.py with Scala by adding an IndexToString label converter.
* Synced train_validation_split.py (in Scala, ModelSelectionViaTrainValidationExample) by adjusting the data split and adding a grid for the intercept.
* RegexTokenizer was doing nothing in tokenizer_example.py and JavaTokenizerExample.java; synced with the Scala version.
## How was this patch tested?
local tests and run modified examples
Author: Bryan Cutler <cutlerb@gmail.com>
Closes #14081 from BryanCutler/examples-cleanup-SPARK-16403.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
A structured streaming example with event time windowing.
## How was this patch tested?
Run locally
Author: James Thomas <jamesjoethomas@gmail.com>
Closes #13957 from jjthomas/current.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
1) Removed an unused import in the Scala example;
2) Moved the SparkSession import outside the example on/off markers;
3) Set parameters the same as in Scala;
4) Made comments consistent;
5) Made sure Scala and Python use the same data set.
I did one pass and fixed the above issues. Some examples are missing in Python and might be added later.
TODO: some examples carry comments on how to run them, but many are missing. We can add them later.
## How was this patch tested?
Manually tested them.
Author: wm624@hotmail.com <wm624@hotmail.com>
Closes #14021 from wangmiao1981/ann.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
This PR adds labelling support for the `include_example` Jekyll plugin, so that we may split a single source file into multiple line blocks with different labels, and include them in multiple code snippets in the generated HTML page.
## How was this patch tested?
Manually tested.
<img width="923" alt="screenshot at jun 29 19-53-21" src="https://cloud.githubusercontent.com/assets/230655/16451099/66a76db2-3e33-11e6-84fb-63104c2f0688.png">
Author: Cheng Lian <lian@databricks.com>
Closes #13972 from liancheng/include-example-with-labels.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Some appNames in ML examples are incorrect, mostly in PySpark but one in Scala. This corrects the names.
## How was this patch tested?
Style, local tests
Author: Bryan Cutler <cutlerb@gmail.com>
Closes #13949 from BryanCutler/pyspark-example-appNames-fix-SPARK-16261.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Network word count example for structured streaming
## How was this patch tested?
Run locally
Author: James Thomas <jamesjoethomas@gmail.com>
Author: James Thomas <jamesthomas@Jamess-MacBook-Pro.local>
Closes #13816 from jjthomas/master.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ML style vectors
## What changes were proposed in this pull request?
Need to convert ML Vectors to the old MLlib style before doing Statistics.colStats operations on the DataFrame
## How was this patch tested?
Ran example, local tests
Author: Bryan Cutler <cutlerb@gmail.com>
Closes #13928 from BryanCutler/pyspark-ml-example-vector-conv-SPARK-16231.
|
|
|
|
|
|
|
|
|
|
|
|
| |
and CountVectorizer
## What changes were proposed in this pull request?
Made changes to HashingTF, QuantileDiscretizer, and CountVectorizer.
Author: GayathriMurali <gayathri.m@intel.com>
Closes #13745 from GayathriMurali/SPARK-15997.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
regression
## What changes were proposed in this pull request?
add ml doc for ml isotonic regression
add scala example for ml isotonic regression
add java example for ml isotonic regression
add python example for ml isotonic regression
modify scala example for mllib isotonic regression
modify java example for mllib isotonic regression
modify python example for mllib isotonic regression
add data/mllib/sample_isotonic_regression_libsvm_data.txt
delete data/mllib/sample_isotonic_regression_data.txt
## How was this patch tested?
N/A
Author: WeichenXu <WeichenXu123@outlook.com>
Closes #13381 from WeichenXu123/add_isotonic_regression_doc.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Matrix APIs in the ML pipeline based algorithms
## What changes were proposed in this pull request?
This PR fixes Python examples to use the new ML Vector and Matrix APIs in the ML pipeline based algorithms.
I first ran the shell command `grep -r "from pyspark.mllib" .` and then executed all the matched examples.
Some of tests in `ml` produced the error messages as below:
```
pyspark.sql.utils.IllegalArgumentException: u'requirement failed: Input type must be VectorUDT but got org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.'
```
So, I fixed them to use new ones just identically with some Python tests fixed in https://github.com/apache/spark/pull/12627
## How was this patch tested?
Manually tested for all the examples listed by `grep -r "from pyspark.mllib" .`.
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #13393 from HyukjinKwon/SPARK-14615.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
if possible
## What changes were proposed in this pull request?
Instead of using a local variable `sc` as in the following example, this PR uses `spark.sparkContext`. This makes the examples more concise and also fixes some misleading code, i.e., code that appears to create a SparkContext from a SparkSession.
```
- println("Creating SparkContext")
- val sc = spark.sparkContext
-
println("Writing local file to DFS")
val dfsFilename = dfsDirPath + "/dfs_read_write_test"
- val fileRDD = sc.parallelize(fileContents)
+ val fileRDD = spark.sparkContext.parallelize(fileContents)
```
This will change 12 files (+30 lines, -52 lines).
## How was this patch tested?
Manual.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #13520 from dongjoon-hyun/SPARK-15773.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ML examples
## What changes were proposed in this pull request?
Since [SPARK-15617](https://issues.apache.org/jira/browse/SPARK-15617) deprecated ```precision``` in ```MulticlassClassificationEvaluator```, many ML examples are broken.
```python
pyspark.sql.utils.IllegalArgumentException: u'MulticlassClassificationEvaluator_4c3bb1d73d8cc0cedae6 parameter metricName given invalid value precision.'
```
We should use ```accuracy``` to replace ```precision``` in these examples.
## How was this patch tested?
Offline tests.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #13519 from yanboliang/spark-15771.
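What the replacement metric computes is simply the fraction of exact matches between predictions and labels. A plain-Python sketch of the metric (illustrative, not the evaluator's actual code):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the label exactly,
    i.e. the multiclass 'accuracy' metric."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

print(accuracy([0, 1, 2, 1], [0, 1, 1, 1]))  # 0.75
```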
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
In the MLlib naive Bayes example, the Scala and Python examples don't use libsvm data, but the Java one does.
I changed the Scala and Python examples to use the same libsvm data as the Java example.
## How was this patch tested?
Manual tests
Author: wm624@hotmail.com <wm624@hotmail.com>
Closes #13301 from wangmiao1981/example.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
SparkSession.sparkContext instead of _sc
## What changes were proposed in this pull request?
Some PySpark examples need a SparkContext and get it by accessing _sc directly from the session. These examples should use the provided property `sparkContext` in `SparkSession` instead.
## How was this patch tested?
Ran modified examples
Author: Bryan Cutler <cutlerb@gmail.com>
Closes #13303 from BryanCutler/pyspark-session-sparkContext-MINOR.
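The point generalizes beyond Spark: exposing a private attribute through a read-only property gives callers a stable public name instead of an underscore-prefixed one. A minimal sketch with hypothetical names (not PySpark's actual classes):

```python
class Session:
    """Toy stand-in for a session object that wraps a context."""
    def __init__(self, context):
        self._sc = context  # private: callers should not reach in directly

    @property
    def sparkContext(self):
        # Public, documented accessor: the supported way to get the context.
        return self._sc

session = Session("ctx")
print(session.sparkContext)  # ctx, without touching session._sc
```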
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Use `SparkSession` according to [SPARK-15031](https://issues.apache.org/jira/browse/SPARK-15031)
`MLLIB` is no longer recommended, so examples in `MLLIB` are ignored in this PR.
`StreamingContext` cannot be obtained directly from `SparkSession`, so examples in `Streaming` are ignored too.
cc andrewor14
## How was this patch tested?
manual tests with spark-submit
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #13164 from zhengruifeng/use_sparksession_ii.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
MLlib is no longer recommended, and some of its methods are even deprecated.
Update the warning message to recommend using ML instead.
```
def showWarning() {
System.err.println(
"""WARN: This is a naive implementation of Logistic Regression and is given as an example!
|Please use either org.apache.spark.mllib.classification.LogisticRegressionWithSGD or
|org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
|for more conventional use.
""".stripMargin)
}
```
To
```
def showWarning() {
System.err.println(
"""WARN: This is a naive implementation of Logistic Regression and is given as an example!
|Please use org.apache.spark.ml.classification.LogisticRegression
|for more conventional use.
""".stripMargin)
}
```
## How was this patch tested?
local build
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #13190 from zhengruifeng/update_recd.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
SparkSession
## What changes were proposed in this pull request?
It seems most of the Python examples were changed to use SparkSession by https://github.com/apache/spark/pull/12809. That PR said both examples below:
- `simple_params_example.py`
- `aft_survival_regression.py`
were not changed because they do not work. It seems `aft_survival_regression.py` was changed by https://github.com/apache/spark/pull/13050 but `simple_params_example.py` was not yet.
This PR corrects the example and makes it use SparkSession.
In more detail, it seems `threshold` was replaced by `thresholds` here and there by https://github.com/apache/spark/commit/5a23213c148bfe362514f9c71f5273ebda0a848a. However, when it calls `lr.fit(training, paramMap)` this overwrites the values. So, `threshold` was 5 and `thresholds` becomes 5.5 (by `1 / (1 + thresholds(0) / thresholds(1))`).
According to the comment below, this is not allowed: https://github.com/apache/spark/blob/354f8f11bd4b20fa99bd67a98da3525fd3d75c81/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala#L58-L61.
So, in this PR, it sets the equivalent value so that this does not throw an exception.
## How was this patch tested?
Manually (`mvn package -DskipTests && spark-submit simple_params_example.py`)
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #13135 from HyukjinKwon/SPARK-15031.
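The quoted relation between the binary `threshold` and the two-element `thresholds` can be sanity-checked in plain Python; the values below are illustrative, not the PR's.

```python
def threshold_from_thresholds(t0, t1):
    # The formula cited in the message: threshold = 1 / (1 + t0 / t1)
    return 1.0 / (1.0 + t0 / t1)

def thresholds_from_threshold(t):
    # The equivalent two-element pair for a given binary threshold.
    return (1.0 - t, t)

t0, t1 = thresholds_from_threshold(0.55)
print(round(threshold_from_thresholds(t0, t1), 10))  # 0.55 (round-trips)
```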
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
dataset.registerTempTable
## What changes were proposed in this pull request?
Update the unit test code, examples, and documents to remove calls to deprecated method `dataset.registerTempTable`.
## How was this patch tested?
This PR only changes the unit test code, examples, and comments. It should be safe.
This is a follow up of PR https://github.com/apache/spark/pull/12945 which was merged.
Author: Sean Zhong <seanzhong@databricks.com>
Closes #13098 from clockfly/spark-15171-remove-deprecation.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Add guide doc and examples for GaussianMixture in Spark.ml in Java, Scala and Python.
## How was this patch tested?
Manually compiled and tested all the examples.
Author: wm624@hotmail.com <wm624@hotmail.com>
Closes #12788 from wangmiao1981/example.
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Add Scala/Java/Python examples for ```GeneralizedLinearRegression```.
## How was this patch tested?
They are examples and have been tested offline.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #12754 from yanboliang/spark-14979.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Deprecates registerTempTable and add dataset.createTempView, dataset.createOrReplaceTempView.
## How was this patch tested?
Unit tests.
Author: Sean Zhong <seanzhong@databricks.com>
Closes #12945 from clockfly/spark-15171.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
1, create a libsvm-type dataset for LDA: `data/mllib/sample_lda_libsvm_data.txt`
2, add a Python example
3, directly read the data file in the examples
4, BTW, change to `SparkSession` in `aft_survival_regression.py`
## How was this patch tested?
manual tests
`./bin/spark-submit examples/src/main/python/ml/lda_example.py`
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12927 from zhengruifeng/lda_pe.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
A Python example for ml.kmeans already exists, but is not included in the user guide.
1, small changes like: `example_on` `example_off`
2, add it to the user guide
3, update the example to directly read the data file
## How was this patch tested?
manual tests
`./bin/spark-submit examples/src/main/python/ml/kmeans_example.py`
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12925 from zhengruifeng/km_pe.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ml.BisectingKMeans
## What changes were proposed in this pull request?
1, add BisectingKMeans to ml-clustering.md
2, add the missing Scala BisectingKMeansExample
3, create a new datafile `data/mllib/sample_kmeans_data.txt`
## How was this patch tested?
manual tests
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #11844 from zhengruifeng/doc_bkm.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
1, Add python example for OneVsRest
2, remove args-parsing
## How was this patch tested?
manual tests
`./bin/spark-submit examples/src/main/python/ml/one_vs_rest_example.py`
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12920 from zhengruifeng/ovr_pe.
|
|
|
|
|
|
|
|
|
|
| |
This PR removes `sqlContext` from the examples. All actual usages were already replaced in https://github.com/apache/spark/pull/12809, but some remain in comments.
Manual style checking.
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #13006 from HyukjinKwon/minor-docs.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Cleans up ALS examples by removing unnecessary casts to double for `rating` and `prediction` columns, since `RegressionEvaluator` now supports `Double` & `Float` input types.
## How was this patch tested?
Manual compile and run with `run-example ml.ALSExample` and `spark-submit examples/src/main/python/ml/als_example.py`.
Author: Nick Pentreath <nickp@za.ibm.com>
Closes #12892 from MLnick/als-examples-cleanup.
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Add the missing python example for QuantileDiscretizer
## How was this patch tested?
manual tests
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12281 from zhengruifeng/discret_pe.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
binary_classification_metrics_example.py
## What changes were proposed in this pull request?
This issue addresses the comments in SPARK-15031 and also fixes java-linter errors.
- Use multiline format in SparkSession builder patterns.
- Update `binary_classification_metrics_example.py` to use `SparkSession`.
- Fix Java Linter errors (in SPARK-13745, SPARK-15031, and so far)
## How was this patch tested?
After passing the Jenkins tests and run `dev/lint-java` manually.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #12911 from dongjoon-hyun/SPARK-15134.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
This PR aims to update Scala/Python/Java examples by replacing `SQLContext` with newly added `SparkSession`.
- Use **SparkSession Builder Pattern** in 154 (Scala 55, Java 52, Python 47) files.
- Add `getConf` in Python SparkContext class: `python/pyspark/context.py`
- Replace **SQLContext Singleton Pattern** with **SparkSession Singleton Pattern**:
- `SqlNetworkWordCount.scala`
- `JavaSqlNetworkWordCount.java`
- `sql_network_wordcount.py`
Now, `SQLContext` is used only in R examples and the following two Python examples. The Python examples are untouched in this PR since they already fail due to some unknown issue.
- `simple_params_example.py`
- `aft_survival_regression.py`
## How was this patch tested?
Manual.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #12809 from dongjoon-hyun/SPARK-15031.
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Add Python 3 compatibility to the Python examples.
## How was this patch tested?
manual tests
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12868 from zhengruifeng/fix_gmm_py.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
PySpark.
## What changes were proposed in this pull request?
This is a python port of corresponding Scala builder pattern code. `sql.py` is modified as a target example case.
## How was this patch tested?
Manual.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #12860 from dongjoon-hyun/SPARK-15084.
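A hedged sketch of the builder pattern being ported: each setter returns the builder so calls chain, and `getOrCreate()` yields the configured object. Names are illustrative stand-ins, not PySpark's real `SparkSession.builder`.

```python
class Builder:
    """Toy builder: setters return self so configuration calls chain."""
    def __init__(self):
        self._options = {}

    def appName(self, name):
        self._options["app.name"] = name
        return self                     # returning self enables chaining

    def config(self, key, value):
        self._options[key] = value
        return self

    def getOrCreate(self):
        # Stand-in for constructing (or reusing) the configured session.
        return dict(self._options)

session = Builder().appName("demo").config("master", "local").getOrCreate()
print(session)  # {'app.name': 'demo', 'master': 'local'}
```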
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Add the missing python example for VectorSlicer
## How was this patch tested?
manual tests
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12282 from zhengruifeng/vecslicer_pe.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
First, make all dependencies in the examples module provided, and explicitly list a couple that maven somehow promotes to compile scope. This means that to run the streaming examples, the streaming connector package needs to be provided to run-example using --packages or --jars, just like regular apps.
Also, remove a couple of outdated examples. HBase has had Spark bindings for
a while and is even including them in the HBase distribution in the next
version, making the examples obsolete. The same applies to Cassandra, which
seems to have a proper Spark binding library already.
I just tested the build, which passes, and ran SparkPi. The examples jars
directory now has only two jars:
```
$ ls -1 examples/target/scala-2.11/jars/
scopt_2.11-3.3.0.jar
spark-examples_2.11-2.0.0-SNAPSHOT.jar
```
Author: Marcelo Vanzin <vanzin@cloudera.com>
Closes #12544 from vanzin/SPARK-14744.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Currently, the docs for TF-IDF only refer to using HashingTF with IDF. However, CountVectorizer can also be used. We should probably amend the user guide and examples to show this.
## How was this patch tested?
unit tests and doc generation
Author: Yuhao Yang <hhbyyh@gmail.com>
Closes #12454 from hhbyyh/tfdoc.
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Add the missing python example for ChiSqSelector
## How was this patch tested?
manual tests
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12283 from zhengruifeng/chi2_pe.
|
|
|
|
|
|
|
|
|
|
| |
jira: https://issues.apache.org/jira/browse/SPARK-13089
Add section in ml-classification.md for NaiveBayes DataFrame-based API, plus example code (using include_example to clip code from examples/ folder files).
Author: Yuhao Yang <hhbyyh@gmail.com>
Closes #11015 from hhbyyh/naiveBayesDoc.
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
Add python CountVectorizerExample
## How was this patch tested?
manual tests
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #11917 from zhengruifeng/cv_pe.
|
|
|
|
|
|
|
|
|
|
|
|
| |
## What changes were proposed in this pull request?
add three python examples
## How was this patch tested?
manual tests
Author: Zheng RuiFeng <ruifengz@foxmail.com>
Closes #12063 from zhengruifeng/dct_pe.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
streaming-mqtt and streaming-twitter
## What changes were proposed in this pull request?
This PR removes all docs about the old streaming-akka, streaming-zeromq, streaming-mqtt and streaming-twitter projects since I have already copied them to https://github.com/spark-packages
Also remove mqtt_wordcount.py that I forgot to remove previously.
## How was this patch tested?
Jenkins PR Build.
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #11824 from zsxwing/remove-doc.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
using include_example
Replace example code in mllib-feature-extraction.md using include_example
https://issues.apache.org/jira/browse/SPARK-13017
The example code in the user guide is embedded in the markdown and hence it is not easy to test. It would be nice to automatically test them. This JIRA is to discuss options to automate example code testing and see what we can do in Spark 1.6.
Goal is to move actual example code to spark/examples and test compilation in Jenkins builds. Then in the markdown, we can reference part of the code to show in the user guide. This requires adding a Jekyll tag that is similar to https://github.com/jekyll/jekyll/blob/master/lib/jekyll/tags/include.rb, e.g., called include_example.
`{% include_example scala/org/apache/spark/examples/mllib/TFIDFExample.scala %}`
Jekyll will find `examples/src/main/scala/org/apache/spark/examples/mllib/TFIDFExample.scala` and pick code blocks marked "example" and replace code block in
`{% highlight %}`
in the markdown.
See more sub-tasks in parent ticket: https://issues.apache.org/jira/browse/SPARK-11337
Author: Xin Ren <iamshrek@126.com>
Closes #11142 from keypointt/SPARK-13017.
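The mechanism described, pulling only the lines between example markers out of a source file, can be sketched in a few lines of Python (the `$example on$` / `$example off$` marker strings follow the convention used in Spark's examples; the real tag is a Ruby Jekyll plugin):

```python
def extract_example(lines, on="$example on$", off="$example off$"):
    """Collect only the lines between on/off markers, as include_example does."""
    out, keep = [], False
    for line in lines:
        if off in line:
            keep = False        # stop collecting; marker line itself excluded
        elif keep:
            out.append(line)
        elif on in line:
            keep = True         # start collecting after the marker line

    return out

src = ["import foo", "// $example on$", "val tf = new HashingTF()",
       "// $example off$", "more setup"]
print(extract_example(src))  # ['val tf = new HashingTF()']
```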
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
mllib-statistics.md using include_example
## What changes were proposed in this pull request?
This PR for ticket SPARK-13019 is based on a previous PR (https://github.com/apache/spark/pull/11108).
Since that PR broke the scala-2.10 build, more work was needed to fix the build errors.
What is new in this PR is adding the keyword argument for 'fractions':
` val approxSample = data.sampleByKey(withReplacement = false, fractions = fractions)`
` val exactSample = data.sampleByKeyExact(withReplacement = false, fractions = fractions)`
I reopened the ticket on JIRA, but I don't know how to reopen a GitHub pull request, so I am just submitting a new one.
## How was this patch tested?
Manual build testing on local machine, build based on scala-2.10.
Author: Xin Ren <iamshrek@126.com>
Closes #11901 from keypointt/SPARK-13019.
|
|
|
|
|
|
| |
using include_example"
This reverts commit 1af8de200c4d3357bcb09e7bbc6deece00e885f2.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
include_example
https://issues.apache.org/jira/browse/SPARK-13019
The example code in the user guide is embedded in the markdown and hence it is not easy to test. It would be nice to automatically test them. This JIRA is to discuss options to automate example code testing and see what we can do in Spark 1.6.
Goal is to move actual example code to spark/examples and test compilation in Jenkins builds. Then in the markdown, we can reference part of the code to show in the user guide. This requires adding a Jekyll tag that is similar to https://github.com/jekyll/jekyll/blob/master/lib/jekyll/tags/include.rb, e.g., called include_example.
`{% include_example scala/org/apache/spark/examples/mllib/SummaryStatisticsExample.scala %}`
Jekyll will find `examples/src/main/scala/org/apache/spark/examples/mllib/SummaryStatisticsExample.scala` and pick code blocks marked "example" and replace code block in
`{% highlight %}`
in the markdown.
See more sub-tasks in parent ticket: https://issues.apache.org/jira/browse/SPARK-11337
Author: Xin Ren <iamshrek@126.com>
Closes #11108 from keypointt/SPARK-13019.
|