author     Matei Zaharia <matei@eecs.berkeley.edu>    2013-08-31 22:17:40 -0700
committer  Matei Zaharia <matei@eecs.berkeley.edu>    2013-09-01 14:13:15 -0700
commit     4f422032e507d698b9c717b5228154d4527a639a
tree       8344eaf7e8721aa3d9e64556694823127fb0942e /docs/configuration.md
parent     4d1cb59fe1dc9181268c8e61fc042990854f83c5
Update docs for new package
Diffstat (limited to 'docs/configuration.md')
-rw-r--r--  docs/configuration.md  18
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index 1c0492efb3..55df18b6fb 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -36,13 +36,13 @@ there are at least five properties that you will commonly want to control:
</tr>
<tr>
<td>spark.serializer</td>
- <td>spark.JavaSerializer</td>
+ <td>org.apache.spark.JavaSerializer</td>
<td>
Class to use for serializing objects that will be sent over the network or need to be cached
in serialized form. The default of Java serialization works with any Serializable Java object but is
- quite slow, so we recommend <a href="tuning.html">using <code>spark.KryoSerializer</code>
+ quite slow, so we recommend <a href="tuning.html">using <code>org.apache.spark.KryoSerializer</code>
and configuring Kryo serialization</a> when speed is necessary. Can be any subclass of
- <a href="api/core/index.html#spark.Serializer"><code>spark.Serializer</code></a>.
+ <a href="api/core/index.html#org.apache.spark.Serializer"><code>org.apache.spark.Serializer</code></a>.
</td>
</tr>
<tr>
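
As a concrete companion to the `spark.serializer` row above, here is a minimal sketch of switching to Kryo. It assumes the system-property style of configuration used by this version of the guide, and the master URL and application name are placeholders:

```scala
// Sketch: select Kryo serialization before the SparkContext is created.
// Assumes configuration via Java system properties (this doc's era).
import org.apache.spark.SparkContext

object SerializerExample {
  def main(args: Array[String]) {
    // Property name and class name match the table above (new org.apache.spark package).
    System.setProperty("spark.serializer", "org.apache.spark.KryoSerializer")

    // "local" and the application name are placeholders for illustration.
    val sc = new SparkContext("local", "SerializerExample")
    println(sc.parallelize(1 to 10).map(_ * 2).reduce(_ + _))
    sc.stop()
  }
}
```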
@@ -50,8 +50,8 @@ there are at least five properties that you will commonly want to control:
<td>(none)</td>
<td>
If you use Kryo serialization, set this class to register your custom classes with Kryo.
- You need to set it to a class that extends
- <a href="api/core/index.html#spark.KryoRegistrator"><code>spark.KryoRegistrator</code></a>.
+ It should be set to a class that extends
+ <a href="api/core/index.html#org.apache.spark.KryoRegistrator"><code>KryoRegistrator</code></a>.
See the <a href="tuning.html#data-serialization">tuning guide</a> for more details.
</td>
</tr>
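
To make the `spark.kryo.registrator` row concrete, a hedged sketch of a registrator class follows; `MyClass` is a hypothetical user type, and the wiring again assumes system-property configuration:

```scala
// Sketch of a custom Kryo registrator. MyClass is a hypothetical user type.
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.KryoRegistrator

case class MyClass(id: Int, name: String)

class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    // Register every class that will be serialized with Kryo.
    kryo.register(classOf[MyClass])
  }
}

// Wiring, set before the SparkContext is created (assumed configuration style):
// System.setProperty("spark.serializer", "org.apache.spark.KryoSerializer")
// System.setProperty("spark.kryo.registrator", "MyRegistrator")
```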
@@ -147,10 +147,10 @@ Apart from these, the following properties are also available, and may be useful
</tr>
<tr>
<td>spark.io.compression.codec</td>
- <td>spark.io.SnappyCompressionCodec</td>
+ <td>org.apache.spark.io.<br />SnappyCompressionCodec</td>
<td>
The compression codec class to use for various compressions. By default, Spark provides two
- codecs: <code>spark.io.LZFCompressionCodec</code> and <code>spark.io.SnappyCompressionCodec</code>.
+ codecs: <code>org.apache.spark.io.LZFCompressionCodec</code> and <code>org.apache.spark.io.SnappyCompressionCodec</code>.
</td>
</tr>
<tr>
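
For `spark.io.compression.codec`, a one-line sketch of picking the LZF codec instead of the Snappy default (assuming, as above, that properties are set before the SparkContext is created):

```scala
// Sketch: use LZF instead of the default Snappy codec (assumed system-property style).
System.setProperty("spark.io.compression.codec", "org.apache.spark.io.LZFCompressionCodec")
```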
@@ -171,7 +171,7 @@ Apart from these, the following properties are also available, and may be useful
</tr>
<tr>
<td>spark.closure.serializer</td>
- <td>spark.JavaSerializer</td>
+ <td>org.apache.spark.JavaSerializer</td>
<td>
Serializer class to use for closures. Generally Java is fine unless your distributed functions
(e.g. map functions) reference large objects in the driver program.
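
The `spark.closure.serializer` row is easiest to read next to the situation it warns about: a distributed function that closes over a large driver-side object, which is then serialized with every task by the closure serializer. A hedged sketch, where `lookupTable` is a hypothetical large object:

```scala
// Sketch: a map function capturing a large driver-side object; this captured
// state is what spark.closure.serializer has to serialize for each task.
import org.apache.spark.SparkContext

object ClosureExample {
  def main(args: Array[String]) {
    val sc = new SparkContext("local", "ClosureExample")  // placeholders
    val lookupTable: Map[Int, String] = (1 to 100000).map(i => i -> ("value" + i)).toMap

    // lookupTable is part of the closure and is serialized along with the task.
    val described = sc.parallelize(1 to 100).map(i => lookupTable.getOrElse(i, "missing"))
    println(described.count())
    sc.stop()
  }
}
```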
@@ -198,7 +198,7 @@ Apart from these, the following properties are also available, and may be useful
</tr>
<tr>
<td>spark.broadcast.factory</td>
- <td>spark.broadcast.HttpBroadcastFactory</td>
+ <td>org.apache.spark.broadcast.<br />HttpBroadcastFactory</td>
<td>
Which broadcast implementation to use.
</td>
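
Finally, for `spark.broadcast.factory`, a short sketch of setting the implementation explicitly; the value is the fully qualified class name shown in the default column, and any other concrete broadcast factory class could be substituted (again assuming system-property configuration):

```scala
// Sketch: explicitly select the broadcast implementation named in the table above.
System.setProperty("spark.broadcast.factory", "org.apache.spark.broadcast.HttpBroadcastFactory")
```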