author	Tathagata Das <tathagata.das1565@gmail.com>	2013-01-23 01:10:26 -0800
committer	Tathagata Das <tathagata.das1565@gmail.com>	2013-01-23 01:10:26 -0800
commit	155f31398dc83ecb88b4b3e07849a2a8a0a6592f (patch)
tree	20ecfc301450c61387da166d3cfdf07d3503b4b5 /docs
parent	5e11f1e51f17113abb8d3a5bc261af5ba5ffce94 (diff)
Made the StorageLevel constructor private, and added StorageLevels.create() to the Java API. Updated the Scala and Java programming guides.
Diffstat (limited to 'docs')
-rw-r--r--	docs/java-programming-guide.md	3
-rw-r--r--	docs/scala-programming-guide.md	3
2 files changed, 4 insertions, 2 deletions
diff --git a/docs/java-programming-guide.md b/docs/java-programming-guide.md
index 188ca4995e..37a906ea1c 100644
--- a/docs/java-programming-guide.md
+++ b/docs/java-programming-guide.md
@@ -75,7 +75,8 @@ class has a single abstract method, `call()`, that must be implemented.
## Storage Levels
RDD [storage level](scala-programming-guide.html#rdd-persistence) constants, such as `MEMORY_AND_DISK`, are
-declared in the [spark.api.java.StorageLevels](api/core/index.html#spark.api.java.StorageLevels) class.
+declared in the [spark.api.java.StorageLevels](api/core/index.html#spark.api.java.StorageLevels) class. To
+define your own storage level, you can use `StorageLevels.create(...)`.
# Other Features
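The `StorageLevels.create(...)` factory this hunk documents can be sketched as follows. This is a minimal illustration, not part of the patch: it assumes Spark of this era is on the classpath, the pre-1.0 `spark.api.java.StorageLevels` / `spark.storage.StorageLevel` package layout, and a `create(useDisk, useMemory, deserialized, replication)` parameter order; `rdd` is a hypothetical persisted RDD.

```scala
import spark.api.java.StorageLevels
import spark.storage.StorageLevel

// Assumed signature: create(useDisk, useMemory, deserialized, replication).
// Disk + memory, serialized in memory, replicated on 2 nodes.
val twoReplicas: StorageLevel = StorageLevels.create(true, true, false, 2)

// rdd.persist(twoReplicas)  // hypothetical usage on an existing RDD
```

Since `create` is a Java static factory, the same call works from both Java and Scala code.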
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index 7350eca837..301b330a79 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -301,7 +301,8 @@ We recommend going through the following process to select one:
* Use the replicated storage levels if you want fast fault recovery (e.g. if using Spark to serve requests from a web
application). *All* the storage levels provide full fault tolerance by recomputing lost data, but the replicated ones
let you continue running tasks on the RDD without waiting to recompute a lost partition.
-
+
+If you want to define your own storage level (say, with a replication factor of 3 instead of 2), then use the factory method `apply()` of the [`StorageLevel`](api/core/index.html#spark.storage.StorageLevel$) singleton object.
# Shared Variables
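The Scala-side `apply()` route described in the hunk above can be sketched like this. Again an illustration only, assuming the pre-1.0 `spark.storage.StorageLevel` package and an `apply(useDisk, useMemory, deserialized, replication)` parameter order; `rdd` is a hypothetical RDD.

```scala
import spark.storage.StorageLevel

// Assumed signature: apply(useDisk, useMemory, deserialized, replication).
// Custom level: disk + memory, serialized, replication factor 3 instead of 2.
val threeReplicas = StorageLevel(true, true, false, 3)

// rdd.persist(threeReplicas)  // hypothetical usage
```

Because the constructor is now private (the point of this commit), the singleton's `apply()` is the supported way to build a custom level from Scala.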