author     Cheng Hao <hao.cheng@intel.com>    2014-07-17 23:25:01 -0700
committer  Reynold Xin <rxin@apache.org>      2014-07-17 23:25:01 -0700
commit     29809a6d58bfe3700350ce1988ff7083881c4382
tree       efc67dddbf19e13a96484131c20a3e3c94a856a5 /examples
parent     6afca2d1079bac6309a595b8e0ffc74ae93fa662
[SPARK-2570] [SQL] Fix the bug of ClassCastException
Exception thrown when running the HiveFromSpark example:

Exception in thread "main" java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
    at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106)
    at org.apache.spark.sql.catalyst.expressions.GenericRow.getInt(Row.scala:145)
    at org.apache.spark.examples.sql.hive.HiveFromSpark$.main(HiveFromSpark.scala:45)
    at org.apache.spark.examples.sql.hive.HiveFromSpark.main(HiveFromSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:303)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Author: Cheng Hao <hao.cheng@intel.com>
Closes #1475 from chenghao-intel/hive_from_spark and squashes the following commits:
d4c0500 [Cheng Hao] Fix the bug of ClassCastException
Diffstat (limited to 'examples')
 examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala b/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala
index b262fabbe0..66a23fac39 100644
--- a/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala
@@ -42,7 +42,7 @@ object HiveFromSpark {
     hql("SELECT * FROM src").collect.foreach(println)

     // Aggregation queries are also supported.
-    val count = hql("SELECT COUNT(*) FROM src").collect().head.getInt(0)
+    val count = hql("SELECT COUNT(*) FROM src").collect().head.getLong(0)
     println(s"COUNT(*): $count")

     // The results of SQL queries are themselves RDDs and support all normal RDD functions. The
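The root cause can be reproduced without Spark: COUNT(*) evaluates to a bigint, which the row stores as a boxed java.lang.Long, and unboxing it as Int throws the ClassCastException seen in the trace. The sketch below uses a hypothetical SimpleRow class as a stand-in for Spark's GenericRow (it is not the real class, just an illustration of the same unboxing behavior).

```scala
// SimpleRow is a hypothetical stand-in for GenericRow: values are held as Any,
// and the typed accessors unbox via asInstanceOf (BoxesRunTime under the hood).
class SimpleRow(values: Array[Any]) {
  def getInt(i: Int): Int   = values(i).asInstanceOf[Int]   // unboxToInt
  def getLong(i: Int): Long = values(i).asInstanceOf[Long]  // unboxToLong
}

object CountTypeDemo {
  def main(args: Array[String]): Unit = {
    // COUNT(*) yields a bigint, boxed on the JVM as java.lang.Long
    val row = new SimpleRow(Array[Any](42L))
    println("getLong(0) = " + row.getLong(0))  // succeeds
    try {
      row.getInt(0)  // a boxed Long cannot be unboxed to Int
    } catch {
      case e: ClassCastException =>
        println("getInt(0) failed: " + e.getMessage)
    }
  }
}
```

This is why the one-line fix switches the accessor rather than the query: the value was always a Long, and getInt never had a chance of succeeding.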