path: root/tools
author    Josh Rosen <joshrosen@apache.org>  2014-07-30 22:40:57 -0700
committer Josh Rosen <joshrosen@apache.org>  2014-07-30 22:40:57 -0700
commit    4fb259353f616822c32537e3f031944a6d2a09a8 (patch)
tree      cd0f80898628791f4ac4435e416bd9e0da398905 /tools
parent    a7c305b86b3b83645ae5ff5d3dfeafc20c443204 (diff)
[SPARK-2737] Add retag() method for changing RDDs' ClassTags.
The Java API's use of fake ClassTags doesn't seem to cause any problems for Java users, but it can lead to issues when passing JavaRDDs' underlying RDDs to Scala code (e.g. in the MLlib Java API wrapper code). If we call collect() on a Scala RDD with an incorrect ClassTag, this causes ClassCastExceptions when we try to allocate an array of the wrong type (for example, see SPARK-2197).

There are a few possible fixes here. An API-breaking fix would be to completely remove the fake ClassTags and require Java API users to pass java.lang.Class instances to all parallelize() calls and add returnClass fields to all Function implementations. This would be extremely verbose. Instead, this patch adds internal APIs to "repair" a Scala RDD with an incorrect ClassTag by wrapping it and overriding its ClassTag. This should be okay for cases where the Scala code that calls collect() knows what type of array should be allocated, which is the case in the MLlib wrappers.

Author: Josh Rosen <joshrosen@apache.org>

Closes #1639 from JoshRosen/SPARK-2737 and squashes the following commits:

572b4c8 [Josh Rosen] Replace newRDD[T] with mapPartitions().
469d941 [Josh Rosen] Preserve partitioner in retag().
af78816 [Josh Rosen] Allow retag() to get classTag implicitly.
d1d54e6 [Josh Rosen] [SPARK-2737] Add retag() method for changing RDDs' ClassTags.
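To illustrate the failure mode and the fix outside of Spark, the sketch below uses a hypothetical `ToyRDD` class (not Spark's actual `RDD`) whose `collect()` allocates its result array from a `ClassTag`, just as Spark's does. A fake `AnyRef` ClassTag makes `collect()` hand back an `Array[AnyRef]` where an `Array[Int]` is expected; a `retag`-style wrapper that only swaps the ClassTag repairs it. This is a minimal sketch of the idea, not the patch's implementation (which uses `mapPartitions()` and preserves the partitioner).

```scala
import scala.reflect.{ClassTag, classTag}

// Toy stand-in for an RDD whose element ClassTag may be wrong,
// mimicking the fake ClassTags used by Spark's Java API.
class ToyRDD[T: ClassTag](data: Seq[T]) {
  val elementClassTag: ClassTag[T] = classTag[T]

  // Like Spark's collect(), the result array is allocated from the
  // ClassTag: a fake AnyRef tag yields an Array[AnyRef] at runtime.
  def collect(): Array[T] = {
    val out = elementClassTag.newArray(data.length)
    data.copyToArray(out)
    out
  }

  // retag(): wrap the same data under a corrected ClassTag
  // (hypothetical analogue of the internal API this patch adds).
  def retag(implicit ct: ClassTag[T]): ToyRDD[T] =
    new ToyRDD[T](data)(ct)
}

object Demo {
  def main(args: Array[String]): Unit = {
    // Simulate the Java API: element type hidden behind a fake tag.
    val fake = new ToyRDD[Int](Seq(1, 2, 3))(
      ClassTag.AnyRef.asInstanceOf[ClassTag[Int]])

    // fake.collect() would allocate an Array[AnyRef], so code that
    // expects Array[Int] fails with a ClassCastException. Repairing
    // the tag first makes collect() allocate the right array type.
    val repaired: ToyRDD[Int] = fake.retag(ClassTag.Int)
    val arr: Array[Int] = repaired.collect()
    println(arr.mkString(","))  // 1,2,3
  }
}
```

The key point the patch relies on is the same as here: the caller of `collect()` (the MLlib wrapper) knows the correct element type, so overriding only the ClassTag is enough and no data is copied or converted eagerly.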
Diffstat (limited to 'tools')
0 files changed, 0 insertions, 0 deletions