path: root/python/pyspark/sql/types.py
author    Liang-Chi Hsieh <simonh@tw.ibm.com>    2016-05-18 11:18:33 -0700
committer Davies Liu <davies.liu@gmail.com>      2016-05-18 11:18:33 -0700
commit    3d1e67f903ab3512fcad82b94b1825578f8117c9 (patch)
tree      6f392bcbfbf0836ce44bccd95fab53a4d27e4b6b /python/pyspark/sql/types.py
parent    8fb1d1c7f3ed1b62625052a532b7388ebec71bbf (diff)
[SPARK-15342] [SQL] [PYSPARK] PySpark test for non ascii column name does not actually test with unicode column name
## What changes were proposed in this pull request?

The PySpark SQL test `test_column_name_with_non_ascii` is meant to exercise a non-ascii column name, but it does not actually do so. Under Python 2 a plain string literal is a byte string, so the test must construct a unicode string explicitly using `unicode`.

## How was this patch tested?

Existing tests.

Author: Liang-Chi Hsieh <simonh@tw.ibm.com>

Closes #13134 from viirya/correct-non-ascii-colname-pytest.
Diffstat (limited to 'python/pyspark/sql/types.py')
-rw-r--r--    python/pyspark/sql/types.py    3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/python/pyspark/sql/types.py b/python/pyspark/sql/types.py
index 30ab130f29..7d8d0230b4 100644
--- a/python/pyspark/sql/types.py
+++ b/python/pyspark/sql/types.py
@@ -27,7 +27,7 @@ from array import array

 if sys.version >= "3":
     long = int
-    unicode = str
+    basestring = unicode = str

 from py4j.protocol import register_input_converter
 from py4j.java_gateway import JavaClass
@@ -401,6 +401,7 @@ class StructField(DataType):
         False
         """
         assert isinstance(dataType, DataType), "dataType should be DataType"
+        assert isinstance(name, basestring), "field name should be string"
         if not isinstance(name, str):
             name = name.encode('utf-8')
         self.name = name
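The Python 2/3 compatibility pattern this patch relies on can be sketched standalone as follows. The `check_field_name` helper is hypothetical, introduced here only to mirror the added assertion; on Python 3 the aliases make `basestring`-based `isinstance` checks keep working after those names were removed from the language.

```python
import sys

# Python 3 removed `basestring` and `unicode`; alias them to `str`
# so isinstance checks written against the Python 2 names still
# work. This mirrors the patched line in pyspark/sql/types.py.
if sys.version >= "3":
    long = int
    basestring = unicode = str

def check_field_name(name):
    # Hypothetical helper mirroring the added assertion: a field
    # name must be a string (str or unicode on Python 2, str on
    # Python 3 via the alias above).
    assert isinstance(name, basestring), "field name should be string"
    # On Python 2, normalize a unicode name to a utf-8 byte string,
    # as StructField.__init__ does; on Python 3 this is a no-op.
    if not isinstance(name, str):
        name = name.encode('utf-8')
    return name
```

A non-ascii name such as `u"\u6570\u91cf"` passes the assertion on both Python lines, whereas an int raises `AssertionError`, which is exactly the guard the new `assert` adds to `StructField`.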