author    Sun Rui <rui.sun@intel.com>  2015-05-06 22:48:16 -0700
committer Shivaram Venkataraman <shivaram@cs.berkeley.edu>  2015-05-06 22:48:16 -0700
commit    9cfa9a516ed991de6c5900c7285b47380a396142 (patch)
tree      e3d1e5f551e171914631ef59f41fbc5238e45cdb
parent    773aa25252f29ca25dbb1ee495b530557fe79405 (diff)
[SPARK-6812] [SPARKR] filter() on DataFrame does not work as expected.
According to the R manual (https://stat.ethz.ch/R-manual/R-devel/library/base/html/Startup.html): "if a function .First is found on the search path, it is executed as .First(). Finally, function .First.sys() in the base package is run. This calls require to attach the default packages specified by options("defaultPackages")."

In .First() in profile/shell.R, we load the SparkR package. This means SparkR is loaded before the default packages, so any names shared with the default packages overwrite the ones in SparkR. This is why filter() in SparkR is masked by filter() in stats, which is usually in the default package list. We need to make sure SparkR is loaded after the default packages. The solution is to append SparkR to defaultPackages instead of loading SparkR in .First().

BTW, I'd like to discuss our policy for resolving name conflicts. Previously, we renamed API names from the Scala API when they conflicted with base or other commonly-used packages. In the long term, however, this is bad for API stability, because we can't predict name conflicts: a name added to the base package in the future could conflict with an existing SparkR API. The better policy is to keep API names identical to Scala's without worrying about name conflicts. When users use SparkR, they should load it as the last package so that all of its API names take effect; they can explicitly use :: to refer to names it hides from other packages. If we agree on this, I can submit a JIRA issue to revert some of the renamed API methods, for example DataFrame.sortDF().

Author: Sun Rui <rui.sun@intel.com>

Closes #5938 from sun-rui/SPARK-6812 and squashes the following commits:

b569145 [Sun Rui] [SPARK-6812][SparkR] filter() on DataFrame does not work as expected.
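The masking described above is easy to reproduce in an R session. A minimal sketch against the SparkR API of this era (sparkR.init / sparkRSQL.init, assuming a local Spark build is on .libPaths()):

```r
library(SparkR)  # if stats is attached afterwards, stats::filter() masks this filter()

# stats::filter() is a time-series function; called on a Spark DataFrame
# it fails, which is the symptom reported in SPARK-6812.
sc <- sparkR.init(master = "local")
sqlCtx <- sparkRSQL.init(sc)
df <- createDataFrame(sqlCtx, faithful)

# Qualifying the call is unambiguous regardless of attach order:
filtered <- SparkR::filter(df, df$waiting > 70)
head(filtered)
```

Loading SparkR last (which the patch below guarantees via defaultPackages) makes the bare filter() resolve to SparkR's method, so the :: qualifier is only needed to reach the masked stats::filter().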
-rw-r--r--  R/pkg/inst/profile/shell.R | 10
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/R/pkg/inst/profile/shell.R b/R/pkg/inst/profile/shell.R
index 7a7f203115..33478d9e29 100644
--- a/R/pkg/inst/profile/shell.R
+++ b/R/pkg/inst/profile/shell.R
@@ -20,11 +20,13 @@
.libPaths(c(file.path(home, "R", "lib"), .libPaths()))
Sys.setenv(NOAWT=1)
- library(utils)
- library(SparkR)
- sc <- sparkR.init(Sys.getenv("MASTER", unset = ""))
+ # Make sure SparkR package is the last loaded one
+ old <- getOption("defaultPackages")
+ options(defaultPackages = c(old, "SparkR"))
+
+ sc <- SparkR::sparkR.init(Sys.getenv("MASTER", unset = ""))
assign("sc", sc, envir=.GlobalEnv)
- sqlCtx <- sparkRSQL.init(sc)
+ sqlCtx <- SparkR::sparkRSQL.init(sc)
assign("sqlCtx", sqlCtx, envir=.GlobalEnv)
cat("\n Welcome to SparkR!")
cat("\n Spark context is available as sc, SQL context is available as sqlCtx\n")
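For reference, the defaultPackages mechanism the patch relies on can be inspected in any R session (a sketch; the exact default list varies by R build and environment):

```r
# R attaches options("defaultPackages") in order during startup, after
# .First() has run; packages attached later mask same-named exports of
# packages attached earlier.
old <- getOption("defaultPackages")
print(old)  # typically includes "stats", which exports filter()

# Appending SparkR makes it the last package attached at startup, so
# SparkR::filter() masks stats::filter() rather than the reverse.
options(defaultPackages = c(old, "SparkR"))
```

Because a site/profile script like shell.R runs before .First.sys() attaches the default packages, modifying the option there is sufficient; no explicit library(SparkR) call is needed.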