Add ```covar_samp``` and ```covar_pop``` for SparkR.
Should we also provide a ```cov``` alias for ```covar_samp```? There is a ```cov``` implementation in stats.R which already masks ```stats::cov```, but adding the alias may bring a breaking API change.
cc sun-rui felixcheung shivaram
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10829 from yanboliang/spark-12903.
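Example (a hedged sketch, assuming a SparkR DataFrame `df` with numeric columns `x` and `y`):
```R
# Sample and population covariance as aggregate expressions
agg(df, covar_samp(df$x, df$y), covar_pop(df$x, df$y))
```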
Author: Sun Rui <rui.sun@intel.com>
Closes #10201 from sun-rui/SPARK-12204.
shivaram Sorry it took longer to fix some conflicts; this is the change to add an alias for `table`.
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #10406 from felixcheung/readtable.
Author: Sun Rui <rui.sun@intel.com>
Closes #10309 from sun-rui/SPARK-12337.
Currently this is what is reported when loading the SparkR package in R (we would probably also add is.nan):
```
Loading required package: methods
Attaching package: ‘SparkR’
The following objects are masked from ‘package:stats’:
cov, filter, lag, na.omit, predict, sd, var
The following objects are masked from ‘package:base’:
colnames, colnames<-, intersect, rank, rbind, sample, subset,
summary, table, transform
```
Adding this test gives us an automated way to track changes to masked methods.
Also, the second part of this test checks for those functions that would not be accessible without a namespace/package prefix.
Incidentally, this might point to how we could fix the functions that are inaccessible in base or stats.
Looking for feedback on adding this test.
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #10171 from felixcheung/rmaskedtest.
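A minimal sketch (not the actual test code) of how the masked names could be computed once SparkR is attached:
```R
library(SparkR)
# Names exported by SparkR that shadow stats/base after attaching the package
masked <- intersect(ls("package:SparkR"),
                    c(ls("package:stats"), ls("package:base")))
print(masked)
```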
Author: Oscar D. Lara Yejas <odlaraye@oscars-mbp.usca.ibm.com>
Author: Oscar D. Lara Yejas <olarayej@mail.usf.edu>
Author: Oscar D. Lara Yejas <oscar.lara.yejas@us.ibm.com>
Author: Oscar D. Lara Yejas <odlaraye@oscars-mbp.attlocal.net>
Closes #9613 from olarayej/SPARK-11031.
Add ```hash``` function for SparkR ```DataFrame```.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10597 from yanboliang/spark-12645.
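Example (a hedged sketch, assuming a DataFrame `df` with columns `name` and `age`):
```R
# hash() computes a hash value over one or more columns
head(select(df, hash(df$name, df$age)))
```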
Add ```read.text``` and ```write.text``` for SparkR.
cc sun-rui felixcheung shivaram
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10348 from yanboliang/spark-12393.
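Example (a hedged sketch; paths are placeholders):
```R
# Each line of the input becomes a row with a single string column "value"
df <- read.text(sqlContext, "input.txt")
write.text(df, "output_dir")
```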
Add ```write.json``` and ```write.parquet``` for SparkR, and deprecate ```saveAsParquetFile```.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10281 from yanboliang/spark-12310.
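Example (a hedged sketch; paths are placeholders):
```R
df <- read.json(sqlContext, "people.json")
write.json(df, "people_json_out")
write.parquet(df, "people_parquet_out")
```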
* ```jsonFile``` should support multiple input files, such as:
```R
jsonFile(sqlContext, c("path1", "path2")) # character vector as arguments
jsonFile(sqlContext, "path1,path2")
```
* Meanwhile, ```jsonFile``` has been deprecated by Spark SQL and will be removed in Spark 2.0, so we mark ```jsonFile``` deprecated and use ```read.json``` on the SparkR side.
* Replace all ```jsonFile``` calls with ```read.json``` in test_sparkSQL.R, but still keep the jsonFile test case.
* If this PR is accepted, we should also make almost the same change for ```parquetFile```.
cc felixcheung sun-rui shivaram
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10145 from yanboliang/spark-12146.
Add ```read.parquet``` support for SparkR and deprecate ```parquetFile```. This change is similar to #10145 for ```jsonFile```.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10191 from yanboliang/spark-12198.
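Example (a hedged sketch; the path is a placeholder):
```R
df <- read.parquet(sqlContext, "people_parquet_out")
```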
SparkR.
Author: Sun Rui <rui.sun@intel.com>
Closes #9804 from sun-rui/SPARK-11774.
Add support for colnames, colnames<-, coltypes<-.
Also added tests for names and names<-, which had no tests previously.
I merged with PR 8984 (coltypes). Clicked the wrong thing, screwed up the PR. Recreated it here. Was #9218.
shivaram sun-rui
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #9654 from felixcheung/colnamescoltypes.
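Example (a hedged sketch, assuming a two-column DataFrame `df`):
```R
colnames(df)                               # same as columns(df)
colnames(df) <- c("a", "b")                # rename columns
coltypes(df) <- c("character", "integer")  # cast columns to the given R types
```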
Change ```cumeDist -> cume_dist, denseRank -> dense_rank, percentRank -> percent_rank, rowNumber -> row_number``` at SparkR side.
There are two reasons that we should make this change:
* We should follow the [naming convention rule of R](http://www.inside-r.org/node/230645)
* Spark DataFrame has deprecated the old convention (such as ```cumeDist```) and will remove it in Spark 2.0.
It's better to fix this issue before the 1.6 release; otherwise we will make a breaking API change.
cc shivaram sun-rui
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10016 from yanboliang/SPARK-12025.
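A hedged sketch of the renamed functions; note the window-spec helpers used here (`windowPartitionBy`, `over`) were only exported in later SparkR releases:
```R
ws <- orderBy(windowPartitionBy("dept"), "salary")
select(df, over(row_number(), ws), over(percent_rank(), ws), over(cume_dist(), ws))
```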
Author: Sun Rui <rui.sun@intel.com>
Closes #9764 from sun-rui/SPARK-11773.
Checked the names; none of them should conflict with anything in base.
shivaram davies rxin
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #9489 from felixcheung/rstddev.
This is a follow-up on PR #8984, as the corresponding branch for that PR was damaged.
Author: Oscar D. Lara Yejas <olarayej@mail.usf.edu>
Closes #9579 from olarayej/SPARK-10863_NEW14.
Author: adrian555 <wzhuang@us.ibm.com>
Author: Adrian Zhuang <adrian555@users.noreply.github.com>
Closes #9443 from adrian555/with.
Author: Sun Rui <rui.sun@intel.com>
Closes #9196 from sun-rui/SPARK-11210.
Author: Sun Rui <rui.sun@intel.com>
Closes #9193 from sun-rui/SPARK-11209.
Author: Sun Rui <rui.sun@intel.com>
Closes #9023 from sun-rui/SPARK-10996.
Brings the changed code up to date.
Author: Adrian Zhuang <adrian555@users.noreply.github.com>
Author: adrian555 <wzhuang@us.ibm.com>
Closes #9031 from adrian555/attach2.
as.DataFrame is a more R-style signature.
Also, I'd like to know if we could make the context (e.g. sqlContext) global, so that we do not have to specify it as an argument each time we create a DataFrame.
Author: Narine Kokhlikyan <narine.kokhlikyan@gmail.com>
Closes #8952 from NarineK/sparkrasDataFrame.
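Example (a hedged sketch):
```R
# as.DataFrame mirrors createDataFrame with a more R-like name
df <- as.DataFrame(sqlContext, faithful)
```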
1. Add a "col" function into DataFrame.
2. Move the current "col" function in Column.R to functions.R and convert it to an S4 function.
3. Add an S4 "column" function in functions.R.
4. Convert the "column" function in Column.R to an S4 function. This is for private use.
Author: Sun Rui <rui.sun@intel.com>
Closes #8864 from sun-rui/SPARK-10079.
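Example (a hedged sketch, assuming a DataFrame `df` with an `age` column):
```R
# column() builds a Column from a name, usable wherever a Column is expected
head(select(df, column("age")))
```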
[SPARK-10905][SparkR]: Export freqItems() for DataFrameStatFunctions
- Add function (together with roxygen2 doc) to DataFrame.R and generics.R
- Expose the function in NAMESPACE
- Add unit test for the function
Author: Rerngvit Yanggratoke <rerngvit@kth.se>
Closes #8962 from rerngvit/SPARK-10905.
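Example (a hedged sketch; column names are placeholders):
```R
# Frequent items for the given columns at a minimum support of 0.4,
# returned as a local R data.frame
fi <- freqItems(df, c("a", "b"), support = 0.4)
```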
Author: Sun Rui <rui.sun@intel.com>
Closes #8869 from sun-rui/SPARK-10752.
Created method as.data.frame as a synonym for collect().
Author: Oscar D. Lara Yejas <olarayej@mail.usf.edu>
Author: olarayej <oscar.lara.yejas@us.ibm.com>
Author: Oscar D. Lara Yejas <oscar.lara.yejas@us.ibm.com>
Closes #8908 from olarayej/SPARK-10807.
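Example (a hedged sketch):
```R
# Pulls the distributed DataFrame into a local R data.frame, like collect()
local_df <- as.data.frame(df)
```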
Add subset and transform
Also reorganize `[` & `[[` to subset instead of select
Note: transform is very similar to mutate. Spark doesn't seem to replace an existing column with the same name in mutate (i.e. `mutate(df, age = df$age + 2)` returns a DataFrame with 2 columns, both named 'age'), so for now transform does not do that either.
Though it is clearly stated that it should replace a column with a matching name (should I open a JIRA for mutate/transform?).
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #8503 from felixcheung/rsubset_transform.
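Example (a hedged sketch; column names are placeholders):
```R
adults <- subset(df, df$age > 18, select = c("name", "age"))
df2 <- transform(df, age2 = df$age + 2)  # same behavior as mutate()
```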
I also checked all the other functions defined in column.R, functions.R and DataFrame.R and everything else looked fine.
cc yu-iskw
Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
Closes #8473 from shivaram/in-namespace.
### JIRA
[[SPARK-10106] Add `ifelse` Column function to SparkR - ASF JIRA](https://issues.apache.org/jira/browse/SPARK-10106)
Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
Closes #8303 from yu-iskw/SPARK-10106.
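Example (a hedged sketch, mirroring base::ifelse on Columns):
```R
head(select(df, ifelse(df$age > 18, "adult", "minor")))
```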
Add expression functions into SparkR whose params are complicated.
I added lots of Column functions into SparkR, and I also added `rand(seed: Int)` and `randn(seed: Int)` in Scala, since we need such APIs for the R integer type.
### JIRA
[[SPARK-9856] Add expression functions into SparkR whose params are complicated - ASF JIRA](https://issues.apache.org/jira/browse/SPARK-9856)
Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
Closes #8264 from yu-iskw/SPARK-9856-3.
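Example (a hedged sketch; note the integer literals for the seeds):
```R
head(select(df, rand(42L), randn(7L)))
```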
- Add `when` and `otherwise` as `Column` methods
- Add `When` as an expression function
- Add `%otherwise%` infix as an alias of `otherwise`
Since R doesn't support method chaining, the `otherwise(when(condition, value), value)` style is a little annoying for me. If `%otherwise%` looks strange to shivaram, I can remove it. What do you think?
### JIRA
[[SPARK-10075] Add `when` expression function in SparkR - ASF JIRA](https://issues.apache.org/jira/browse/SPARK-10075)
Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
Closes #8266 from yu-iskw/SPARK-10075.
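Example (a hedged sketch of the new conditional expression):
```R
head(select(df, otherwise(when(df$age > 18, "adult"), "minor")))
```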
Update `NAMESPACE` file in SparkR for simple parameters functions.
### JIRA
[[SPARK-10007] Update `NAMESPACE` file in SparkR for simple parameters functions - ASF JIRA](https://issues.apache.org/jira/browse/SPARK-10007)
Author: Yuu ISHIKAWA <yuu.ishikawa@gmail.com>
Closes #8277 from yu-iskw/SPARK-10007.
Add expression functions into SparkR which have a variable parameter.
### Summary
- Add `lit` function
- Add `concat`, `greatest`, `least` functions
I think we need to improve the `collect` function in order to implement the `struct` function, since `collect` doesn't work with arguments that include a nested `list` variable. It seems that a list against `struct` still has `jobj` classes, so it would be better to solve that problem in another issue.
### JIRA
[[SPARK-9871] Add expression functions into SparkR which have a variable parameter - ASF JIRA](https://issues.apache.org/jira/browse/SPARK-9871)
Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
Closes #8194 from yu-iskw/SPARK-9856.
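Example (a hedged sketch; column names are placeholders):
```R
head(select(df,
            concat(df$first, lit(" "), df$last),
            greatest(df$a, df$b, df$c),
            least(df$a, df$b, df$c)))
```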
Add `merge` and `summary` methods on DataFrames.
This PR adds synonyms for ```merge``` and ```summary``` in the SparkR DataFrame API.
cc shivaram
Author: Hossein <hossein@databricks.com>
Closes #7806 from falaki/SPARK-9320 and squashes the following commits:
72600f7 [Hossein] Updated docs
92a6e75 [Hossein] Fixed merge generic signature issue
4c2b051 [Hossein] Fixing naming with mllib summary
0f3a64c [Hossein] Added ... to generic for merge
30fbaf8 [Hossein] Merged master
ae1a4cf [Hossein] Merge branch 'master' into SPARK-9320
e8eb86f [Hossein] Add a generic for merge
fc01f2d [Hossein] Added unit test
8d92012 [Hossein] Added merge as an alias for join
5b8bedc [Hossein] Added unit test
632693d [Hossein] Added summary as an alias for describe for DataFrame
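A hedged sketch of the two synonyms as added here:
```R
joined <- merge(df1, df2, df1$key == df2$key)  # alias for join()
summary(df)                                    # alias for describe()
```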
Add dim, ncol, nrow, names, rbind, and unique functions for DataFrames.
Adds the following aliases:
* unique (distinct)
* rbind (unionAll): accepts many DataFrames
* nrow (count)
* ncol
* dim
* names (columns): along with the replacement function to change names
Author: Hossein <hossein@databricks.com>
Closes #7764 from falaki/sparkR-alias and squashes the following commits:
56016f5 [Hossein] Updated R documentation
5e4a4d0 [Hossein] Removed extra code
f51cbef [Hossein] Merge branch 'master' into sparkR-alias
c1b88bd [Hossein] Moved setGeneric and other comments applied
d9307f8 [Hossein] Added tests
b5aa988 [Hossein] Added dim, ncol, nrow, names, rbind, and unique functions to DataFrames
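A hedged sketch of the R-style aliases:
```R
dim(df)                    # c(nrow(df), ncol(df))
names(df)                  # same as columns(df)
distinct_df <- unique(df)  # alias for distinct()
all_df <- rbind(df, df2)   # alias for unionAll(); accepts many DataFrames
```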
Preview:
```
> summary(m)
features coefficients
1 (Intercept) 1.6765001
2 Sepal_Length 0.3498801
3 Species.versicolor -0.9833885
4 Species.virginica -1.0075104
```
Design doc from umbrella task: https://docs.google.com/document/d/10NZNSEurN2EdWM31uFYsgayIPfCFHiuIu3pCWrUmP_c/edit
cc mengxr
Author: Eric Liang <ekl@databricks.com>
Closes #7771 from ericl/summary and squashes the following commits:
ccd54c3 [Eric Liang] second pass
a5ca93b [Eric Liang] comments
2772111 [Eric Liang] clean up
70483ef [Eric Liang] fix test
7c247d4 [Eric Liang] Merge branch 'master' into summary
3c55024 [Eric Liang] working
8c539aa [Eric Liang] first pass
Add `crosstab` to SparkR DataFrames, which takes two column names and returns a local R data.frame. This is similar to `table` in R. However, `table` in SparkR is used for loading SQL tables as DataFrames. The return type is data.frame instead of table so that `crosstab` stays compatible with Scala/Python.
I couldn't run the R tests successfully on my local machine; many unit tests failed, so let's try Jenkins.
Author: Xiangrui Meng <meng@databricks.com>
Closes #7318 from mengxr/SPARK-8364 and squashes the following commits:
d75e894 [Xiangrui Meng] fix tests
53f6ddd [Xiangrui Meng] fix tests
f1348d6 [Xiangrui Meng] update test
47cb088 [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into SPARK-8364
5621262 [Xiangrui Meng] first version without test
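A hedged usage sketch (column names are placeholders):
```R
# Contingency table of two columns, computed on the cluster and
# returned as a local R data.frame
ct <- crosstab(df, "gender", "dept")
```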
This exposes the SparkR:::glm() and SparkR:::predict() APIs. It was necessary to change RFormula to silently drop the label column if it was missing from the input dataset; this is kind of a hack, but necessary to integrate with the Pipeline API.
The umbrella design doc for MLlib + SparkR integration can be viewed here: https://docs.google.com/document/d/10NZNSEurN2EdWM31uFYsgayIPfCFHiuIu3pCWrUmP_c/edit
mengxr
Author: Eric Liang <ekl@databricks.com>
Closes #7483 from ericl/spark-8774 and squashes the following commits:
3dfac0c [Eric Liang] update
17ef516 [Eric Liang] more comments
1753a0f [Eric Liang] make glm generic
b0f50f8 [Eric Liang] equivalence test
550d56d [Eric Liang] export methods
c015697 [Eric Liang] second pass
117949a [Eric Liang] comments
5afbc67 [Eric Liang] test label columns
6b7f15f [Eric Liang] Fri Jul 17 14:20:22 PDT 2015
3a63ae5 [Eric Liang] Fri Jul 17 13:41:52 PDT 2015
ce61367 [Eric Liang] Fri Jul 17 13:41:17 PDT 2015
0299c59 [Eric Liang] Fri Jul 17 13:40:32 PDT 2015
e37603f [Eric Liang] Fri Jul 17 12:15:03 PDT 2015
d417d0c [Eric Liang] Merge remote-tracking branch 'upstream/master' into spark-8774
29a2ce7 [Eric Liang] Merge branch 'spark-8774-1' into spark-8774
d1959d2 [Eric Liang] clarify comment
2db68aa [Eric Liang] second round of comments
dc3c943 [Eric Liang] address comments
5765ec6 [Eric Liang] fix style checks
1f361b0 [Eric Liang] doc
d33211b [Eric Liang] r support
fb0826b [Eric Liang] [SPARK-8774] Add R model formula with basic support as a transformer
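A hedged sketch of the exposed APIs (`training` and `test` are placeholder DataFrames):
```R
model <- glm(y ~ x1 + x2, data = training, family = "gaussian")
preds <- predict(model, test)
```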
JIRA: https://issues.apache.org/jira/browse/SPARK-8807
Add between operator in SparkR.
Author: Liang-Chi Hsieh <viirya@appier.com>
Closes #7356 from viirya/add_r_between and squashes the following commits:
7f51b44 [Liang-Chi Hsieh] Add test for non-numeric column.
c6a25c5 [Liang-Chi Hsieh] Add between function.
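A hedged usage sketch; `between` takes a two-element lower/upper bound vector:
```R
head(filter(df, between(df$age, c(18, 65))))
```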
This pull request adds the following methods to SparkR:
```R
setJobGroup()
cancelJobGroup()
clearJobGroup()
```
For each method, the Spark context is passed as the first argument. There does not seem to be a good way to test these in R.
cc shivaram and davies
Author: Hossein <hossein@databricks.com>
Closes #6889 from falaki/SPARK-8452 and squashes the following commits:
9ce9f1e [Hossein] Added basic tests to verify methods can be called and won't throw errors
c706af9 [Hossein] Added examples
a2c19af [Hossein] taking spark context as first argument
343ca77 [Hossein] Added setJobGroup, cancelJobGroup and clearJobGroup to SparkR
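A hedged usage sketch (`sc` is an existing Spark context handle):
```R
setJobGroup(sc, "etl-jobs", "nightly ETL", interruptOnCancel = TRUE)
# ... trigger some jobs ...
cancelJobGroup(sc, "etl-jobs")
clearJobGroup(sc)
```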
Author: Sun Rui <rui.sun@intel.com>
Closes #6183 from sun-rui/SPARK-7227 and squashes the following commits:
dd6f5b3 [Sun Rui] Rename readEnv() back to readMap(). Add alias na.omit() for dropna().
41cf725 [Sun Rui] [SPARK-7227][SPARKR] Support fillna / dropna in R DataFrame.
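A hedged sketch of the NA-handling API referenced in the squashed commits above:
```R
cleaned <- dropna(df, how = "any")  # na.omit() is an alias
filled  <- fillna(df, value = 0)
```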
This change also removes native libraries from SparkR to make sure our distribution works across platforms.
Tested by building on Mac and running on Amazon Linux (CentOS), on a Windows VM, and vice versa (built on Linux, run on Mac).
I will also test this with YARN soon and update this PR.
Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
Closes #6373 from shivaram/sparkr-binary and squashes the following commits:
ae41b5c [Shivaram Venkataraman] Remove native libraries from SparkR Also include the built SparkR package in make-distribution.sh
Author: qhuang <qian.huang@intel.com>
Closes #6170 from hqzizania/master and squashes the following commits:
f20c39f [qhuang] add tests units and fixes
2a7d121 [qhuang] use a function name more familiar to R users
07aa72e [qhuang] Support math functions in R DataFrame
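A hedged sketch of a couple of the math functions (assuming a numeric column `x`):
```R
head(select(df, abs(df$x), sqrt(df$x)))
```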
Rename some DataFrame API methods in SparkR to match their counterparts in Scala.
Author: Sun Rui <rui.sun@intel.com>
Closes #6007 from sun-rui/SPARK-7482 and squashes the following commits:
5c5cf5e [Sun Rui] Implement alias loadDF() as a new function.
3a30c10 [Sun Rui] Rename load()/save() to read.df()/write.df(). Also add loadDF()/saveDF() as aliases.
9f569d6 [Sun Rui] [SPARK-7482][SparkR] Rename some DataFrame API methods in SparkR to match their counterparts in Scala.
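A hedged sketch of the renamed IO API (path and source are placeholders):
```R
df <- read.df(sqlContext, "people.parquet", source = "parquet")
write.df(df, "out.parquet", source = "parquet")
df2 <- loadDF(sqlContext, "people.parquet", source = "parquet")  # alias for read.df
```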
Changes include
1. Rename sortDF to arrange
2. Add new aliases `group_by` and `sample_frac`, `summarize`
3. Add more user friendly column addition (mutate), rename
4. Support mean as an alias for avg in Scala and also support n_distinct, n as in dplyr
Using these changes we can pretty much run the examples described in http://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html with the same syntax.
The only thing missing in SparkR is auto-resolving column names when they are used in an expression, i.e. making something like `select(flights, delay)` work as it does in dplyr; right now we need `select(flights, flights$delay)` or `select(flights, "delay")`. But this is a complicated change and I'll file a new issue for it.
cc sun-rui rxin
Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
Closes #6005 from shivaram/sparkr-df-api and squashes the following commits:
5e0716a [Shivaram Venkataraman] Fix some roxygen bugs
1254953 [Shivaram Venkataraman] Merge branch 'master' of https://github.com/apache/spark into sparkr-df-api
0521149 [Shivaram Venkataraman] Changes to make SparkR DataFrame dplyr friendly. Changes include 1. Rename sortDF to arrange 2. Add new aliases `group_by` and `sample_frac`, `summarize` 3. Add more user friendly column addition (mutate), rename 4. Support mean as an alias for avg in Scala and also support n_distinct, n as in dplyr
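A hedged sketch of the dplyr-style surface after these renames (column names are placeholders):
```R
arranged <- arrange(df, df$dep_delay)               # was sortDF
by_dest  <- group_by(df, df$dest)
summarize(by_dest, avg_delay = mean(df$dep_delay))  # mean aliases avg
with_gain <- mutate(df, gain = df$arr_delay - df$dep_delay)
sampled <- sample_frac(df, FALSE, 0.1)
```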
This patch also removes the RDD docs from being built as a part of roxygen, simply by deleting the " ' " from the " #' " comment prefixes so roxygen ignores those blocks.
Author: hqzizania <qian.huang@intel.com>
Author: qhuang <qian.huang@intel.com>
Closes #5969 from hqzizania/R1 and squashes the following commits:
6d27696 [qhuang] fixes in NAMESPACE
eb4b095 [qhuang] remove more docs
6394579 [qhuang] remove RDD docs in generics.R
6813860 [hqzizania] Fill the docs for DataFrame API in SparkR
857220f [hqzizania] remove the pairRDD docs from being built as a part of roxygen
c045d64 [hqzizania] remove the RDD docs from being built as a part of roxygen
This PR also makes some of the DataFrame-to-RDD methods private, as the RDD class is private in 1.4.
cc rxin pwendell
Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
Closes #5949 from shivaram/sparkr-examples and squashes the following commits:
6c42fdc [Shivaram Venkataraman] Remove SparkR RDD examples, add dataframe examples
Moving here from https://github.com/amplab-extras/SparkR-pkg/pull/241.
sum() has been implemented (https://github.com/amplab-extras/SparkR-pkg/pull/242).
In Phase 1, mean, sd, and var have been implemented, but some things still need to be improved per the suggestions in https://issues.apache.org/jira/browse/SPARK-6841.
Author: qhuang <qian.huang@intel.com>
Closes #5446 from hqzizania/R and squashes the following commits:
f283572 [qhuang] add test unit for describe()
2e74d5a [qhuang] add describe() DataFrame API
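A hedged usage sketch:
```R
# Summary statistics (count, mean, stddev, min, max) for numeric columns
collect(describe(df))
collect(describe(df, "age"))
```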
This change makes the RDD API private in SparkR, and all internal uses of it now go through SparkR::: to access the private functions.
Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
Closes #5895 from shivaram/rrdd-private and squashes the following commits:
bdb2f07 [Shivaram Venkataraman] Make RDD private in SparkR. This change also makes all internal uses of the SparkR API use SparkR::: to access private functions
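A hedged sketch of reaching a now-private function via the ::: operator:
```R
# Non-exported functions remain reachable for internal use
rdd <- SparkR:::textFile(sc, "README.md")
SparkR:::count(rdd)
```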