author     Andrew Or <andrewor14@gmail.com>        2014-05-16 22:36:23 -0700
committer  Patrick Wendell <pwendell@gmail.com>    2014-05-16 22:36:23 -0700
commit     cf6cbe9f76c3b322a968c836d039fc5b70d4ce43 (patch)
tree       7f1269166db1364d6f9393bd65d830a9948ce884 /examples/src/main/python/logistic_regression.py
parent     4b8ec6fcfd7a7ef0857d5b21917183c181301c95 (diff)
[SPARK-1824] Remove <master> from Python examples
A recent PR (#552) fixed this for all Scala / Java examples. We need to do it for Python too.
Note that this blocks on #799, which makes `bin/pyspark` go through Spark submit. With only the changes in this PR, the only way to run these examples is through Spark submit. Once #799 goes in, you can use `bin/pyspark` to run them too. For example,
```
bin/pyspark examples/src/main/python/pi.py 100 --master local-cluster[4,1,512]
```
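With only this PR in place (before #799 lands), the examples go through `spark-submit` instead. A sketch of that invocation, assuming a local master and an illustrative data path and iteration count (not taken from the commit):

```shell
# Submit the updated example through spark-submit; the master URL is now
# passed as a flag rather than as the script's first argument.
# The data path and iteration count below are illustrative.
bin/spark-submit \
  --master local[4] \
  examples/src/main/python/logistic_regression.py data/lr_points.txt 10
```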
Author: Andrew Or <andrewor14@gmail.com>
Closes #802 from andrewor14/python-examples and squashes the following commits:
cf50b9f [Andrew Or] De-indent python comments (minor)
50f80b1 [Andrew Or] Remove pyFiles from SparkContext construction
c362f69 [Andrew Or] Update docs to use spark-submit for python applications
7072c6a [Andrew Or] Merge branch 'master' of github.com:apache/spark into python-examples
427a5f0 [Andrew Or] Update docs
d32072c [Andrew Or] Remove <master> from examples + update usages
Diffstat (limited to 'examples/src/main/python/logistic_regression.py')
-rwxr-xr-x  examples/src/main/python/logistic_regression.py | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
```diff
diff --git a/examples/src/main/python/logistic_regression.py b/examples/src/main/python/logistic_regression.py
index fe5373cf79..0f22d0b323 100755
--- a/examples/src/main/python/logistic_regression.py
+++ b/examples/src/main/python/logistic_regression.py
@@ -47,12 +47,12 @@ def readPointBatch(iterator):
     return [matrix]
 
 if __name__ == "__main__":
-    if len(sys.argv) != 4:
-        print >> sys.stderr, "Usage: logistic_regression <master> <file> <iters>"
+    if len(sys.argv) != 3:
+        print >> sys.stderr, "Usage: logistic_regression <file> <iterations>"
         exit(-1)
-    sc = SparkContext(sys.argv[1], "PythonLR", pyFiles=[realpath(__file__)])
-    points = sc.textFile(sys.argv[2]).mapPartitions(readPointBatch).cache()
-    iterations = int(sys.argv[3])
+    sc = SparkContext(appName="PythonLR")
+    points = sc.textFile(sys.argv[1]).mapPartitions(readPointBatch).cache()
+    iterations = int(sys.argv[2])
 
     # Initialize w to a random value
     w = 2 * np.random.ranf(size=D) - 1
```
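The core of the change is that every positional argument shifts down by one once `<master>` is dropped. A minimal standalone sketch of that argument handling (not part of the commit: `parse_args` is a hypothetical helper, the Spark-specific calls are omitted, and Python 3 `print()` is used in place of the original Python 2 `print >>` so it runs on its own):

```python
import sys

def parse_args(argv):
    # New-style argument handling: the script no longer takes a <master>
    # argument; spark-submit supplies the master URL externally instead.
    if len(argv) != 3:
        print("Usage: logistic_regression <file> <iterations>", file=sys.stderr)
        raise SystemExit(-1)
    input_file = argv[1]        # was argv[2] before this commit
    iterations = int(argv[2])   # was argv[3] before this commit
    return input_file, iterations

# Example: parse_args(["logistic_regression.py", "points.txt", "10"])
# returns ("points.txt", 10)
```

Pulling the master out of `argv` is what lets the same script run unchanged under any deployment mode chosen at submit time.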