author	Reynold Xin <rxin@apache.org>	2014-01-11 12:07:55 -0800
committer	Reynold Xin <rxin@apache.org>	2014-01-11 12:07:55 -0800
commit	ee6e7f9b8cc56985787546882fba291cf9ad7667 (patch)
tree	0cf54c5a30c84c974b5fba839a9ec6cd0bc07f68 /docs
parent	4216178d5e81fad911b69e75f5a272e63d3d208a (diff)
parent	59b03e015d581bbab74f1fe33a3ec1fd7840c3db (diff)
Merge pull request #359 from ScrapCodes/clone-writables
We clone Hadoop keys and values by default and reuse objects only when asked to. We special-case cloning for the most common Writable types and fall back to WritableUtils.clone for the rest. The intention is to optimize: for NullWritable no clone is needed at all, and for Long, int, and String values, creating a new object with the value set should be faster than a generic object copy. An alternative design for this PR would let callers choose separately whether to clone keys and values, but I could not think of a use case for that beyond one of them being a NullWritable, which is already handled, so that seemed unnecessary.
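Below is a minimal Scala sketch of the cloning strategy described above, not the exact code from this PR; the helper name cloneWritable and the particular set of special-cased types are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.io._

    // Hypothetical helper illustrating the strategy: fast-path copies for
    // common Writable types, generic WritableUtils.clone for the rest.
    def cloneWritable[T <: Writable](value: T, conf: Configuration): T = {
      val copy: Writable = value match {
        case w: NullWritable => w                        // stateless singleton: nothing to clone
        case w: LongWritable => new LongWritable(w.get)  // new object with value set
        case w: IntWritable  => new IntWritable(w.get)
        case w: Text         => new Text(w)              // Text provides a copy constructor
        case w               => WritableUtils.clone(w, conf)  // serialize/deserialize fallback
      }
      copy.asInstanceOf[T]
    }

Returning the NullWritable instance itself is safe because it carries no state, which is why that case can skip cloning entirely.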
Diffstat (limited to 'docs')
0 files changed, 0 insertions, 0 deletions