author | Felix Cheung <felixcheung_m@hotmail.com> | 2016-10-20 21:12:55 -0700
---|---|---
committer | Felix Cheung <felixcheung@apache.org> | 2016-10-20 21:12:55 -0700
commit | 3180272d2d49e440516085c0e4aebd5bad18bcad (patch) |
tree | 0fbaed47b4468a0fb98d2c85ccd561420398dca1 /R/pkg/inst/tests |
parent | 1bb99c4887e97ae5f55c8c2b392ba5ca72d6168b (diff) |
# [SPARKR] fix warnings
## What changes were proposed in this pull request?
Fixes several test warnings that were introduced recently.
We still need to investigate why these warnings are not being turned into errors.
```
Warnings -----------------------------------------------------------------------
1. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Sepal_Length instead of Sepal.Length as column name
2. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Sepal_Width instead of Sepal.Width as column name
3. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Petal_Length instead of Petal.Length as column name
4. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Petal_Width instead of Petal.Width as column name
Consider adding
importFrom("utils", "object.size")
to your NAMESPACE file.
```
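The fix wraps the offending call in `suppressWarnings()`, which evaluates an expression while muffling any warnings it raises. A minimal sketch of the pattern in plain R (the `fix_colnames` helper is hypothetical, standing in for SparkR's `createDataFrame`, which warns when column names contain dots, as with `iris`):

```r
# Hypothetical stand-in for createDataFrame: warns for each column
# name containing a dot, then replaces the dots with underscores.
fix_colnames <- function(df) {
  for (name in colnames(df)) {
    if (grepl("\\.", name)) {
      warning(paste("Use", gsub("\\.", "_", name),
                    "instead of", name, "as column name"))
    }
  }
  setNames(df, gsub("\\.", "_", colnames(df)))
}

# Without suppressWarnings(), four warnings surface for iris
# (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width).
df <- suppressWarnings(fix_colnames(iris))
colnames(df)[1]  # "Sepal_Length"
```

Note that `suppressWarnings()` only silences the warnings at the call site; the column-name rewriting still happens, so the test's assertions on the resulting data frame are unaffected.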
## How was this patch tested?
Unit tests.
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes #15560 from felixcheung/rwarnings.
Diffstat (limited to 'R/pkg/inst/tests')
-rw-r--r-- | R/pkg/inst/tests/testthat/test_sparkSQL.R | 2 |
1 file changed, 1 insertion, 1 deletion
```
diff --git a/R/pkg/inst/tests/testthat/test_sparkSQL.R b/R/pkg/inst/tests/testthat/test_sparkSQL.R
index af81d0586e..1c806869e9 100644
--- a/R/pkg/inst/tests/testthat/test_sparkSQL.R
+++ b/R/pkg/inst/tests/testthat/test_sparkSQL.R
@@ -212,7 +212,7 @@ test_that("createDataFrame uses files for large objects", {
   # To simulate a large file scenario, we set spark.r.maxAllocationLimit to a smaller value
   conf <- callJMethod(sparkSession, "conf")
   callJMethod(conf, "set", "spark.r.maxAllocationLimit", "100")
-  df <- createDataFrame(iris)
+  df <- suppressWarnings(createDataFrame(iris))
   # Resetting the conf back to default value
   callJMethod(conf, "set", "spark.r.maxAllocationLimit", toString(.Machine$integer.max / 10))
```