author: Davies Liu <davies@databricks.com>  2015-02-18 14:17:04 -0800
committer: Michael Armbrust <michael@databricks.com>  2015-02-18 14:17:04 -0800
commit: aa8f10e82a743d59ce87348af19c0177eb618a66
parent: f0e3b71077a6c28aba29a7a75e901a9e0911b9f0
[SPARK-5722] [SQL] [PySpark] infer int as LongType
Python's `int` is 64-bit on 64-bit machines (very common now), so Spark SQL should infer it as LongType.
Also, a LongType value in SQL will come back as a Python `int`.
Author: Davies Liu <davies@databricks.com>
Closes #4666 from davies/long and squashes the following commits:
6bc6cc4 [Davies Liu] infer int as LongType
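To illustrate the motivation, here is a minimal sketch of inferring a SQL type from a Python value (a hypothetical helper, not PySpark's actual `_infer_type`): the key point of this commit is that a plain Python `int` can exceed 32 bits, so it maps to LongType rather than IntegerType.

```python
# Hypothetical sketch of Python-value -> SQL-type inference; type names are
# returned as strings for illustration, not as real PySpark DataType objects.

def infer_sql_type(value):
    if isinstance(value, bool):   # check bool first: bool is a subclass of int
        return "BooleanType"
    if isinstance(value, int):
        return "LongType"         # 64-bit, safe for large Python ints
    if isinstance(value, float):
        return "DoubleType"
    if isinstance(value, str):
        return "StringType"
    raise TypeError(f"cannot infer type for {value!r}")

print(infer_sql_type(2**40))  # a value that would not fit a 32-bit IntegerType
```

A 32-bit IntegerType tops out at 2**31 - 1, so inferring LongType is the only safe default when the value's magnitude is unknown.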
Diffstat (limited to 'sql')
 sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala           | 1 +
 sql/core/src/main/scala/org/apache/spark/sql/execution/pythonUdfs.scala | 1 +
 2 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala b/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
index db32fa80dd..a6cf3cd9dd 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
@@ -1130,6 +1130,7 @@ class SQLContext(@transient val sparkContext: SparkContext)
     def needsConversion(dataType: DataType): Boolean = dataType match {
       case ByteType => true
       case ShortType => true
+      case LongType => true
       case FloatType => true
       case DateType => true
       case TimestampType => true
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/pythonUdfs.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/pythonUdfs.scala
index 69de4d168a..33632b8e82 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/pythonUdfs.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/pythonUdfs.scala
@@ -186,6 +186,7 @@ object EvaluatePython {
     case (c: Int, ShortType) => c.toShort
     case (c: Long, ShortType) => c.toShort
     case (c: Long, IntegerType) => c.toInt
+    case (c: Int, LongType) => c.toLong
     case (c: Double, FloatType) => c.toFloat
     case (c, StringType) if !c.isInstanceOf[String] => c.toString
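For context on why the `EvaluatePython` side needs an explicit widening case: narrowing conversions like the adjacent Scala `c.toInt` silently drop high bits. The sketch below (a hypothetical `to_int32` helper, not Spark code) emulates JVM Long-to-Int narrowing in Python to show the data loss that inferring LongType avoids.

```python
import struct

# Emulate JVM Long -> Int narrowing (keep the low 32 bits, reinterpret
# as a signed 32-bit value). Hypothetical helper for illustration only.
def to_int32(n):
    return struct.unpack("<i", struct.pack("<q", n)[:4])[0]

print(to_int32(2**40 + 7))  # high bits are silently dropped
```

Because `2**40` is a multiple of `2**32`, only the low-order `7` survives the narrowing, which is exactly the corruption the `LongType` inference prevents.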