author    DjvuLee <lihu@bytedance.com>  2017-01-17 10:37:29 -0800
committer gatorsmile <gatorsmile@gmail.com>  2017-01-17 10:37:29 -0800
commit    843ec8ec42a16d6b52ad161b98bedb4f9952964b (patch)
tree      22e4485f173ced21dd6ad48c8d00ad31766402bb /python
parent    a23debd7bc8f85ea49c54b8cf3cd112cf0a803ff (diff)
[SPARK-19239][PYSPARK] Check whether parameters equal None when the column is specified in the jdbc API
## What changes were proposed in this pull request?

The `jdbc` API does not check `lowerBound` and `upperBound` when ``column`` is specified, and simply throws the following exception:

>```int() argument must be a string or a number, not 'NoneType'```

If we check the parameters up front, we can give a friendlier error message.

## How was this patch tested?

Tested in the pyspark shell by calling `jdbc` without the `lowerBound` and `upperBound` parameters.

Author: DjvuLee <lihu@bytedance.com>

Closes #16599 from djvulee/pysparkFix.
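For illustration, a minimal sketch of the failure mode this patch addresses, assuming a running SparkSession named `spark`; the JDBC URL and `people` table are hypothetical placeholders, not part of the patch:

```python
# Hypothetical connection details; only the parameter shape matters here.
url = "jdbc:postgresql://localhost:5432/testdb"

# ``column`` is given, but lowerBound/upperBound/numPartitions are omitted.
df = spark.read.jdbc(url, "people", column="id")
# Before this patch: TypeError: int() argument must be a string
#                    or a number, not 'NoneType'
# After this patch:  AssertionError: lowerBound can not be None
#                    when ``column`` is specified
```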
Diffstat (limited to 'python')
-rw-r--r-- python/pyspark/sql/readwriter.py | 9 +++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/python/pyspark/sql/readwriter.py b/python/pyspark/sql/readwriter.py
index b0c51b1e99..d31f3fb8f6 100644
--- a/python/pyspark/sql/readwriter.py
+++ b/python/pyspark/sql/readwriter.py
@@ -399,7 +399,8 @@ class DataFrameReader(OptionUtils):
accessible via JDBC URL ``url`` and connection ``properties``.
Partitions of the table will be retrieved in parallel if either ``column`` or
- ``predicates`` is specified.
+ ``predicates`` is specified. ``lowerBound``, ``upperBound`` and ``numPartitions``
+ are needed when ``column`` is specified.
If both ``column`` and ``predicates`` are specified, ``column`` will be used.
@@ -429,8 +430,10 @@ class DataFrameReader(OptionUtils):
for k in properties:
jprop.setProperty(k, properties[k])
if column is not None:
- if numPartitions is None:
- numPartitions = self._spark._sc.defaultParallelism
+ assert lowerBound is not None, "lowerBound can not be None when ``column`` is specified"
+ assert upperBound is not None, "upperBound can not be None when ``column`` is specified"
+ assert numPartitions is not None, \
+ "numPartitions can not be None when ``column`` is specified"
return self._df(self._jreader.jdbc(url, table, column, int(lowerBound), int(upperBound),
int(numPartitions), jprop))
if predicates is not None:
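After the patch, a call that partitions by ``column`` must supply all three bounds explicitly. A minimal sketch of a well-formed call, with a hypothetical URL, table, and credentials:

```python
# Hypothetical connection details; only the parameter shape matters here.
df = spark.read.jdbc(
    url="jdbc:postgresql://localhost:5432/testdb",
    table="people",
    column="id",       # partitioning column
    lowerBound=1,      # required alongside ``column`` after this patch
    upperBound=1000,   # required alongside ``column`` after this patch
    numPartitions=4,   # no longer defaults to sc.defaultParallelism
    properties={"user": "spark", "password": "secret"},
)
```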