author    Zheng RuiFeng <ruifengz@foxmail.com>    2016-05-20 16:40:33 -0700
committer Andrew Or <andrew@databricks.com>       2016-05-20 16:40:33 -0700
commit    127bf1bb07967e2e4f99ad7abaa7f6fab3b3f407 (patch)
tree      a127031cd361df2f1d895cb11489f8e183c76f73 /examples/src/main/python/logistic_regression.py
parent    06c9f520714e07259c6f8ce6f9ea5a230a278cb5 (diff)
[SPARK-15031][EXAMPLE] Use SparkSession in examples
## What changes were proposed in this pull request?

Use `SparkSession` according to [SPARK-15031](https://issues.apache.org/jira/browse/SPARK-15031). The RDD-based `MLLIB` API is no longer the recommended one, so examples under `MLLIB` are left unchanged in this PR. A `StreamingContext` cannot be obtained directly from a `SparkSession`, so the examples under `Streaming` are left unchanged too.

cc andrewor14

## How was this patch tested?

Manual tests with spark-submit.

Author: Zheng RuiFeng <ruifengz@foxmail.com>

Closes #13164 from zhengruifeng/use_sparksession_ii.
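For context, the core migration this PR applies is replacing direct `SparkContext` construction with the `SparkSession` builder. A minimal sketch of the before/after pattern (the input path below is illustrative, not from the patch):

```python
from pyspark.sql import SparkSession

# Before: sc = SparkContext(appName="PythonLR")
#         lines = sc.textFile("data.txt")

# After: SparkSession is the unified entry point (Spark 2.0+).
# getOrCreate() returns an existing session if one is already
# running in this process, otherwise it builds a new one.
spark = SparkSession \
    .builder \
    .appName("PythonLR") \
    .getOrCreate()

# The underlying SparkContext is still reachable for RDD-only APIs.
sc = spark.sparkContext

spark.stop()
```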
Diffstat (limited to 'examples/src/main/python/logistic_regression.py')
-rwxr-xr-x  examples/src/main/python/logistic_regression.py  13
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/examples/src/main/python/logistic_regression.py b/examples/src/main/python/logistic_regression.py
index 7d33be7e81..01c938454b 100755
--- a/examples/src/main/python/logistic_regression.py
+++ b/examples/src/main/python/logistic_regression.py
@@ -27,7 +27,7 @@ from __future__ import print_function
import sys
import numpy as np
-from pyspark import SparkContext
+from pyspark.sql import SparkSession
D = 10 # Number of dimensions
@@ -55,8 +55,13 @@ if __name__ == "__main__":
Please refer to examples/src/main/python/ml/logistic_regression_with_elastic_net.py
to see how ML's implementation is used.""", file=sys.stderr)
- sc = SparkContext(appName="PythonLR")
- points = sc.textFile(sys.argv[1]).mapPartitions(readPointBatch).cache()
+ spark = SparkSession\
+ .builder\
+ .appName("PythonLR")\
+ .getOrCreate()
+
+ points = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])\
+ .mapPartitions(readPointBatch).cache()
iterations = int(sys.argv[2])
# Initialize w to a random value
@@ -80,4 +85,4 @@ if __name__ == "__main__":
print("Final w: " + str(w))
- sc.stop()
+ spark.stop()
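One subtlety in the new read path above: `spark.read.text` yields a DataFrame with a single string column named `value`, so the example takes `.rdd` and maps `r[0]` to recover plain strings before handing batches to `readPointBatch`. A small sketch of that equivalence (the file name is illustrative):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ReadTextEquivalence").getOrCreate()

# DataFrame[value: string] -- one Row per line of the file.
df = spark.read.text("points.txt")

# Unwrap each Row to its single string field; the result is an RDD[str]
# with the same contents sc.textFile("points.txt") would produce.
lines = df.rdd.map(lambda r: r[0])

# The equivalent RDD read through the session's underlying SparkContext:
legacy_lines = spark.sparkContext.textFile("points.txt")

print(lines.take(3))
print(legacy_lines.take(3))

spark.stop()
```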