path: root/docs/configuration.md
author    Ian O Connell <ioconnell@twitter.com>    2014-07-23 16:30:06 -0700
committer Michael Armbrust <michael@databricks.com>    2014-07-23 16:30:11 -0700
commit    efdaeb111917dd0314f1d00ee8524bed1e2e21ca (patch)
tree      6f7b521030d1e3eec44b0b64964e88093c0281e1 /docs/configuration.md
parent    1871574a240e6f28adeb6bc8accc98c851cafae5 (diff)
[SPARK-2102][SQL][CORE] Add option for kryo registration required and use a resource pool in Spark SQL for Kryo instances.
Author: Ian O Connell <ioconnell@twitter.com>

Closes #1377 from ianoc/feature/SPARK-2102 and squashes the following commits:

5498566 [Ian O Connell] Docs update suggested by Patrick
20e8555 [Ian O Connell] Slight style change
f92c294 [Ian O Connell] Add docs for new KryoSerializer option
f3735c8 [Ian O Connell] Add using a kryo resource pool for the SqlSerializer
4e5c342 [Ian O Connell] Register the SparkConf for kryo, it gets swept into serialization
665805a [Ian O Connell] Add a spark.kryo.registrationRequired option for configuring the Kryo Serializer
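Spark's actual change (commit f3735c8 above) pools Kryo serializer instances in Scala inside the Spark SQL serializer; the general pattern — reusing expensive-to-construct serializer instances instead of allocating one per call — can be sketched in Python. This is an illustrative sketch only, not Spark's implementation; `SerializerPool` and its methods are hypothetical names.

```python
import queue


class SerializerPool:
    """Minimal resource-pool sketch: hand out pre-built serializer
    instances and take them back, instead of constructing a new
    (costly) instance for every serialization call."""

    def __init__(self, factory, size):
        # Pre-populate the pool with `size` instances built by `factory`.
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def borrow(self):
        # Blocks until an instance is free, bounding concurrent use.
        return self._pool.get()

    def release(self, instance):
        # Return the instance so another caller can reuse it.
        self._pool.put(instance)


# Usage: borrow, serialize with the instance, then release it.
pool = SerializerPool(factory=lambda: object(), size=2)
inst = pool.borrow()
pool.release(inst)
```

The design point is that construction cost is paid once per pooled instance rather than once per serialization, which is exactly why pooling Kryo instances (which are expensive to create and not thread-safe) pays off.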
Diffstat (limited to 'docs/configuration.md')
-rw-r--r--  docs/configuration.md  19
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index a70007c165..02af461267 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -389,6 +389,17 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
+ <td><code>spark.kryo.registrationRequired</code></td>
+ <td>false</td>
+ <td>
+ Whether to require registration with Kryo. If set to 'true', Kryo will throw an exception
+ if an unregistered class is serialized. If set to false (the default), Kryo will write
+ unregistered class names along with each object. Writing class names can cause
+ significant performance overhead, so enabling this option can enforce strictly that a
+ user has not omitted classes from registration.
+ </td>
+</tr>
+<tr>
<td><code>spark.kryoserializer.buffer.mb</code></td>
<td>2</td>
<td>
@@ -497,9 +508,9 @@ Apart from these, the following properties are also available, and may be useful
<tr>
<td>spark.hadoop.validateOutputSpecs</td>
<td>true</td>
- <td>If set to true, validates the output specification (e.g. checking if the output directory already exists)
- used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing
- output directories. We recommend that users do not disable this except if trying to achieve compatibility with
+ <td>If set to true, validates the output specification (e.g. checking if the output directory already exists)
+ used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing
+ output directories. We recommend that users do not disable this except if trying to achieve compatibility with
previous versions of Spark. Simply use Hadoop's FileSystem API to delete output directories by hand.</td>
</tr>
</table>
@@ -861,7 +872,7 @@ Apart from these, the following properties are also available, and may be useful
</table>
#### Cluster Managers
-Each cluster manager in Spark has additional configuration options. Configurations
+Each cluster manager in Spark has additional configuration options. Configurations
can be found on the pages for each mode:
* [YARN](running-on-yarn.html#configuration)
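The `spark.kryo.registrationRequired` option documented in the first hunk above would typically be enabled together with the Kryo serializer itself, e.g. in `spark-defaults.conf` (a minimal sketch; the registrator class name is a hypothetical example):

```
spark.serializer                  org.apache.spark.serializer.KryoSerializer
spark.kryo.registrationRequired   true
spark.kryo.registrator            com.example.MyKryoRegistrator
```

With `registrationRequired` set to `true`, serializing any class not registered by the registrator fails fast with an exception, instead of silently paying the overhead of writing full class names into the serialized output.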