author    Yanbo Liang <ybliang8@gmail.com>    2016-09-21 20:08:28 -0700
committer Yanbo Liang <ybliang8@gmail.com>    2016-09-21 20:08:28 -0700
commit    c133907c5d9a6e6411b896b5e0cff48b2beff09f (patch)
tree      f19d91c861860737b06b0fae0118ce43094cbebe /R/pkg/NAMESPACE
parent    7cbe2164499e83b6c009fdbab0fbfffe89a2ecc0 (diff)
[SPARK-17577][SPARKR][CORE] SparkR support add files to Spark job and get by executors
## What changes were proposed in this pull request?

Scala/Python users can add files to a Spark job via the submit option ```--files``` or ```SparkContext.addFile()```, and can then retrieve an added file with ```SparkFiles.get(filename)```. We should support the same functionality for SparkR users, since they have the same need for shared dependency files. For example, SparkR users can first download third-party R packages to the driver, add those files to the Spark job as dependencies through this API, and then have each executor install the packages via ```install.packages```.

## How was this patch tested?

Added a unit test.

Author: Yanbo Liang <ybliang8@gmail.com>

Closes #15131 from yanboliang/spark-17577.
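For reference, a minimal usage sketch of the three SparkR functions exported by this patch. The temp-file setup and session handling are illustrative assumptions for the example, not part of the patch itself:

```r
library(SparkR)

# Start a SparkR session (local mode here, purely for illustration).
sparkR.session()

# Create a small local file standing in for a shared dependency.
path <- file.path(tempdir(), "deps.txt")
writeLines("some shared dependency data", path)

# Distribute the file to every node participating in the Spark job.
spark.addFile(path)

# Directory under which added files are materialized on each node.
spark.getSparkFilesRootDirectory()

# Resolve the distributed copy by its file name and read it back.
downloaded <- spark.getSparkFiles("deps.txt")
readLines(downloaded)

sparkR.session.stop()
```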
Diffstat (limited to 'R/pkg/NAMESPACE')
-rw-r--r--  R/pkg/NAMESPACE  3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/R/pkg/NAMESPACE b/R/pkg/NAMESPACE
index a5e9cbdc37..267a38c215 100644
--- a/R/pkg/NAMESPACE
+++ b/R/pkg/NAMESPACE
@@ -336,6 +336,9 @@ export("as.DataFrame",
"read.parquet",
"read.text",
"spark.lapply",
+ "spark.addFile",
+ "spark.getSparkFilesRootDirectory",
+ "spark.getSparkFiles",
"sql",
"str",
"tableToDF",