path: root/sql/hive
author    zsxwing <zsxwing@gmail.com>    2014-10-28 14:26:57 -0700
committer Reynold Xin <rxin@databricks.com>    2014-10-28 14:26:57 -0700
commit    abcafcfba38d7c8dba68a5510475c5c49ae54d92 (patch)
tree      26336d2770d7d9a033bbe9f1c1dea6fa5bbbae1d /sql/hive
parent    47a40f60d62ea69b659959994918d4c640f39d5b (diff)
[SPARK-3922] Refactor spark-core to use Utils.UTF_8
A global UTF-8 constant is very helpful for handling encoding problems when converting between String and bytes. There are several possible solutions here:

1. Add `val UTF_8 = Charset.forName("UTF-8")` to Utils.scala
2. java.nio.charset.StandardCharsets.UTF_8 (requires JDK 7)
3. io.netty.util.CharsetUtil.UTF_8
4. com.google.common.base.Charsets.UTF_8
5. org.apache.commons.lang.CharEncoding.UTF_8
6. org.apache.commons.lang3.CharEncoding.UTF_8

IMO, I prefer option 1 because people can find it easily. This PR implements option 1 and only fixes Spark Core.

Author: zsxwing <zsxwing@gmail.com>

Closes #2781 from zsxwing/SPARK-3922 and squashes the following commits:

f974edd [zsxwing] Merge branch 'master' into SPARK-3922
2d27423 [zsxwing] Refactor spark-core to use Utils.UTF_8
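A minimal Scala sketch of what option 1 looks like, assuming a simplified `Utils` object; the real Utils.scala contains many other members, and the call sites shown here are illustrative rather than taken from the diff:

```scala
import java.nio.charset.Charset

// Option 1: a single shared UTF-8 charset constant, so call sites no longer
// pass the string "utf-8" (and risk typos or UnsupportedEncodingException).
object Utils {
  val UTF_8: Charset = Charset.forName("UTF-8")
}

// Hypothetical call sites refactored to use the constant.
object Example {
  def stringToBytes(s: String): Array[Byte] = s.getBytes(Utils.UTF_8)

  def bytesToString(bytes: Array[Byte]): String = new String(bytes, Utils.UTF_8)
}
```

Because the `Charset` overloads of `String#getBytes` and the `String` constructor are used (rather than the `String` charset-name overloads), no checked encoding exception needs to be handled at the call sites.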
Diffstat (limited to 'sql/hive')
0 files changed, 0 insertions, 0 deletions