author    Davies Liu <davies@databricks.com>    2015-02-02 19:16:27 -0800
committer Tathagata Das <tathagata.das1565@gmail.com>    2015-02-02 19:16:27 -0800
commit 0561c4544967fb853419f32e014fac9b8879b0db (patch)
tree   54f64d9481de296bcb7676f82306896007b489b2 /python/pyspark/tests.py
parent 554403fd913685da879cf6a280c58a9fad19448a (diff)
[SPARK-5154] [PySpark] [Streaming] Kafka streaming support in Python
This PR brings the Python API for the Spark Streaming Kafka data source.

```
class KafkaUtils(__builtin__.object)
 |  Static methods defined here:
 |
 |  createStream(ssc, zkQuorum, groupId, topics,
 |               storageLevel=StorageLevel(True, True, False, False, 2),
 |               keyDecoder=<function utf8_decoder>,
 |               valueDecoder=<function utf8_decoder>)
 |      Create an input stream that pulls messages from a Kafka Broker.
 |
 |      :param ssc: StreamingContext object
 |      :param zkQuorum: Zookeeper quorum (hostname:port,hostname:port,..).
 |      :param groupId: The group id for this consumer.
 |      :param topics: Dict of (topic_name -> numPartitions) to consume.
 |                     Each partition is consumed in its own thread.
 |      :param storageLevel: RDD storage level.
 |      :param keyDecoder: A function used to decode key
 |      :param valueDecoder: A function used to decode value
 |      :return: A DStream object
```

Run the example:

```
bin/spark-submit --driver-class-path external/kafka-assembly/target/scala-*/spark-streaming-kafka-assembly-*.jar examples/src/main/python/streaming/kafka_wordcount.py localhost:2181 test
```

Author: Davies Liu <davies@databricks.com>
Author: Tathagata Das <tdas@databricks.com>

Closes #3715 from davies/kafka and squashes the following commits:

d93bfe0 [Davies Liu] Update make-distribution.sh
4280d04 [Davies Liu] address comments
e6d0427 [Davies Liu] Merge branch 'master' of github.com:apache/spark into kafka
f257071 [Davies Liu] add tests for null in RDD
23b039a [Davies Liu] address comments
9af51c4 [Davies Liu] Merge branch 'kafka' of github.com:davies/spark into kafka
a74da87 [Davies Liu] address comments
dc1eed0 [Davies Liu] Update kafka_wordcount.py
31e2317 [Davies Liu] Update kafka_wordcount.py
370ba61 [Davies Liu] Update kafka.py
97386b3 [Davies Liu] address comment
2c567a5 [Davies Liu] update logging and comment
33730d1 [Davies Liu] Merge branch 'master' of github.com:apache/spark into kafka
adeeb38 [Davies Liu] Merge pull request #3 from tdas/kafka-python-api
aea8953 [Tathagata Das] Kafka-assembly for Python API
eea16a7 [Davies Liu] refactor
f6ce899 [Davies Liu] add example and fix bugs
98c8d17 [Davies Liu] fix python style
5697a01 [Davies Liu] bypass decoder in scala
048dbe6 [Davies Liu] fix python style
75d485e [Davies Liu] add mqtt
07923c4 [Davies Liu] support kafka in Python
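For orientation, here is a minimal sketch of how the new API can be driven from PySpark, modeled on the kafka_wordcount.py example this PR ships; the app name, batch interval, group id, and topic map are illustrative values, not part of the commit:

```
# Sketch of a Kafka word count using the API described above.
# Values below (app name, 1s batch, group id, {"test": 1}) are illustrative.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="PythonStreamingKafkaWordCount")
ssc = StreamingContext(sc, 1)  # 1-second batch interval

# Consume topic "test" via Zookeeper at localhost:2181,
# with one consumer thread for its single partition.
kvs = KafkaUtils.createStream(ssc, "localhost:2181",
                              "spark-streaming-consumer", {"test": 1})

# createStream yields (key, value) pairs; count words in the values.
counts = kvs.map(lambda kv: kv[1]) \
    .flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
counts.pprint()

ssc.start()
ssc.awaitTermination()
```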
Diffstat (limited to 'python/pyspark/tests.py')
-rw-r--r--  python/pyspark/tests.py  10
1 file changed, 9 insertions(+), 1 deletion(-)
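Besides the import changes, the diff below adds a regression test, test_null_in_rdd, checking that JVM-side nulls cross into Python as None under both serializers. A rough standalone sketch of the same check, assuming a live SparkContext and the Scala-side PythonUtils.generateRDDWithNull test helper this PR introduces:

```
# Standalone version of the check made by test_null_in_rdd (see diff below).
from pyspark import SparkContext
from pyspark.rdd import RDD
from pyspark.serializers import UTF8Deserializer, NoOpSerializer

sc = SparkContext(appName="null-rdd-sketch")

# JVM-side helper (added by this PR for testing) builds an RDD
# whose middle element is null: ["a", null, "b"].
jrdd = sc._jvm.PythonUtils.generateRDDWithNull(sc._jsc)

# Both serializers must surface the JVM null as Python's None.
assert RDD(jrdd, sc, UTF8Deserializer()).collect() == [u"a", None, u"b"]
assert RDD(jrdd, sc, NoOpSerializer()).collect() == ["a", None, "b"]

sc.stop()
```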
diff --git a/python/pyspark/tests.py b/python/pyspark/tests.py
index fef6c92875..c7d0622d65 100644
--- a/python/pyspark/tests.py
+++ b/python/pyspark/tests.py
@@ -47,9 +47,10 @@ else:
from pyspark.conf import SparkConf
from pyspark.context import SparkContext
+from pyspark.rdd import RDD
from pyspark.files import SparkFiles
from pyspark.serializers import read_int, BatchedSerializer, MarshalSerializer, PickleSerializer, \
- CloudPickleSerializer, CompressedSerializer
+ CloudPickleSerializer, CompressedSerializer, UTF8Deserializer, NoOpSerializer
from pyspark.shuffle import Aggregator, InMemoryMerger, ExternalMerger, ExternalSorter
from pyspark.sql import SQLContext, IntegerType, Row, ArrayType, StructType, StructField, \
UserDefinedType, DoubleType
@@ -716,6 +717,13 @@ class RDDTests(ReusedPySparkTestCase):
wr_s21 = rdd.sample(True, 0.4, 21).collect()
self.assertNotEqual(set(wr_s11), set(wr_s21))
+ def test_null_in_rdd(self):
+ jrdd = self.sc._jvm.PythonUtils.generateRDDWithNull(self.sc._jsc)
+ rdd = RDD(jrdd, self.sc, UTF8Deserializer())
+ self.assertEqual([u"a", None, u"b"], rdd.collect())
+ rdd = RDD(jrdd, self.sc, NoOpSerializer())
+ self.assertEqual(["a", None, "b"], rdd.collect())
+
def test_multiple_python_java_RDD_conversions(self):
# Regression test for SPARK-5361
data = [