author    Junyang Qian <junyangq@databricks.com>            2016-08-10 11:18:23 -0700
committer Shivaram Venkataraman <shivaram@cs.berkeley.edu>  2016-08-10 11:18:23 -0700
commit    214ba66a030bc3a718c567a742b0db44bf911d61 (patch)
tree      08df168f2981e89321fbbb7418c1690c7cfc6958 /R/pkg/inst/tests/testthat/test_sparkSQL.R
parent    d4a9122430d6c3aeaaee32aa09d314016ff6ddc7 (diff)
[SPARK-16579][SPARKR] add install.spark function
## What changes were proposed in this pull request?
Add an `install.spark()` function to the SparkR package. Users can run `install.spark()` to install Spark to a local directory from within R.
Updates — several changes have been made:

- `install.spark()`
  - check for the existence of the tar file in the cache folder, and download only if it is not found
  - priority order for the `mirror_url` look-up: user-provided -> preferred mirror site from the Apache website -> hardcoded backup option
  - use Spark 2.0.0
- `sparkR.session()`
  - can install Spark when it is not found in `SPARK_HOME`
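
The behavior described above can be sketched from an R session as follows. This is an illustrative sketch, not the PR's exact code; the `mirrorUrl` argument name is an assumption about the final API:

```r
library(SparkR)

# Download and install Spark to a local cache directory. If the tar file
# already exists in the cache folder, the download is skipped.
install.spark()

# A user-provided mirror takes priority over the preferred Apache mirror
# and the hardcoded backup (argument name assumed for illustration).
install.spark(mirrorUrl = "http://archive.apache.org/dist/spark")

# sparkR.session() can now trigger installation when Spark is not
# found in SPARK_HOME.
sparkR.session()
```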
## How was this patch tested?
Manual tests, running the check-cran.sh script added in #14173.
Author: Junyang Qian <junyangq@databricks.com>
Closes #14258 from junyangq/SPARK-16579.
Diffstat (limited to 'R/pkg/inst/tests/testthat/test_sparkSQL.R')
 R/pkg/inst/tests/testthat/test_sparkSQL.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/R/pkg/inst/tests/testthat/test_sparkSQL.R b/R/pkg/inst/tests/testthat/test_sparkSQL.R
index 3f3cb766b3..39ed4febe5 100644
--- a/R/pkg/inst/tests/testthat/test_sparkSQL.R
+++ b/R/pkg/inst/tests/testthat/test_sparkSQL.R
@@ -1824,11 +1824,11 @@ test_that("describe() and summarize() on a DataFrame", {
   expect_equal(collect(stats)[2, "age"], "24.5")
   expect_equal(collect(stats)[3, "age"], "7.7781745930520225")
   stats <- describe(df)
-  expect_equal(collect(stats)[4, "name"], "Andy")
+  expect_equal(collect(stats)[4, "summary"], "min")
   expect_equal(collect(stats)[5, "age"], "30")

   stats2 <- summary(df)
-  expect_equal(collect(stats2)[4, "name"], "Andy")
+  expect_equal(collect(stats2)[4, "summary"], "min")
   expect_equal(collect(stats2)[5, "age"], "30")

   # SPARK-16425: SparkR summary() fails on column of type logical
```
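
The test fix above reflects the shape of the result returned by `describe()` and `summary()`: the first column is named `summary` and holds the statistic labels (`count`, `mean`, `stddev`, `min`, `max`), while the remaining columns hold the per-column values. A minimal sketch, with the input data assumed for illustration:

```r
library(SparkR)
sparkR.session()

df <- createDataFrame(data.frame(name = c("Michael", "Andy", "Justin"),
                                 age = c(30, 19, NA)))
stats <- describe(df)

# Row 4 corresponds to the "min" statistic, so its label is read from the
# "summary" column rather than from a data column such as "name".
collect(stats)[4, "summary"]   # "min"
```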