author:    zsxwing <zsxwing@gmail.com>  2015-08-19 18:36:01 -0700
committer: Tathagata Das <tathagata.das1565@gmail.com>  2015-08-19 18:36:01 -0700
commit:    1f29d502e7ecd6faa185d70dc714f9ea3922fb6d (patch)
tree:      3eabe5f24204341f8d13be9bd3ae3d637b40b87b /examples/src/main/python/streaming/queue_stream.py
parent:    2f2686a73f5a2a53ca5b1023e0d7e0e6c9be5896 (diff)
download:  spark-1f29d502e7ecd6faa185d70dc714f9ea3922fb6d.tar.gz
           spark-1f29d502e7ecd6faa185d70dc714f9ea3922fb6d.tar.bz2
           spark-1f29d502e7ecd6faa185d70dc714f9ea3922fb6d.zip
[SPARK-9812] [STREAMING] Fix Python 3 compatibility issue in PySpark Streaming and some docs
This PR includes the following fixes:

1. Use `range` instead of `xrange` in `queue_stream.py` to support Python 3.
2. Fix the issue that `utf8_decoder` will return `bytes` rather than `str` when receiving an empty `bytes` in Python 3.
3. Fix the commands in the docs so that the user can copy them directly to the command line. The previous commands were broken in the middle of a path, so when copied to the command line, the path would be split into two parts by the extra spaces, forcing the user to fix it manually.

Author: zsxwing <zsxwing@gmail.com>

Closes #8315 from zsxwing/SPARK-9812.
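The second fix above concerns a truthiness pitfall. A minimal sketch of the bug (the names `utf8_decoder_buggy` and `utf8_decoder_fixed` are illustrative, and the exact Spark helper may differ): in Python 3 an empty `bytes` object is falsy, so a decoder written with a short-circuiting `and` returns the empty `bytes` unchanged instead of decoding it to `str`.

```python
def utf8_decoder_buggy(s):
    # b"" is falsy, so `and` short-circuits and returns the
    # empty *bytes* object instead of decoding it.
    return s and s.decode("utf-8")


def utf8_decoder_fixed(s):
    # Only None passes through untouched; empty bytes decode to "".
    if s is None:
        return None
    return s.decode("utf-8")


assert utf8_decoder_buggy(b"") == b""   # bytes leaks out to callers
assert utf8_decoder_fixed(b"") == ""    # a proper str, as expected
assert utf8_decoder_fixed(None) is None
```

The explicit `is None` check keeps the `None` passthrough while ensuring every non-`None` input, empty or not, comes back as `str`.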
Diffstat (limited to 'examples/src/main/python/streaming/queue_stream.py')
-rw-r--r-- examples/src/main/python/streaming/queue_stream.py | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/src/main/python/streaming/queue_stream.py b/examples/src/main/python/streaming/queue_stream.py
index dcd6a0fc6f..b3808907f7 100644
--- a/examples/src/main/python/streaming/queue_stream.py
+++ b/examples/src/main/python/streaming/queue_stream.py
@@ -36,8 +36,8 @@ if __name__ == "__main__":
# Create the queue through which RDDs can be pushed to
# a QueueInputDStream
rddQueue = []
- for i in xrange(5):
- rddQueue += [ssc.sparkContext.parallelize([j for j in xrange(1, 1001)], 10)]
+ for i in range(5):
+ rddQueue += [ssc.sparkContext.parallelize([j for j in range(1, 1001)], 10)]
# Create the QueueInputDStream and use it do some processing
inputStream = ssc.queueStream(rddQueue)
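A plain-Python sketch of why the change above matters: `xrange` was removed in Python 3, while `range` exists in both major versions. Here `parallelize` is replaced by a plain list so the sketch runs without a Spark installation; the loop structure mirrors the fixed example.

```python
# Build a queue of five batches, each holding the numbers 1..1000.
# In the real example each batch is an RDD created via
# ssc.sparkContext.parallelize(...); a list stands in for it here.
rddQueue = []
for i in range(5):
    rddQueue += [[j for j in range(1, 1001)]]

assert len(rddQueue) == 5
assert rddQueue[0][0] == 1 and rddQueue[0][-1] == 1000
```

Under Python 2 this code behaves identically, since `range` simply returns a list there, which is why the swap is a safe cross-version fix.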