path: root/docs/python-programming-guide.md
Each entry: commit message (author, date; files changed, lines -removed/+added)
* fix broken link in python docs (Andy Konwinski, 2014-05-10; 1 file changed, -1/+1)
  Author: Andy Konwinski <andykonwinski@gmail.com>
  Closes #650 from andyk/python-docs-link-fix and squashes the following commits:
  a1f9d51 [Andy Konwinski] fix broken link in python docs
* SPARK-1637: Clean up examples for 1.0 (Sandeep, 2014-05-06; 1 file changed, -2/+2)
  - [x] Move all of them into subpackages of org.apache.spark.examples (right now some are in org.apache.spark.streaming.examples, for instance, and others are in org.apache.spark.examples.mllib)
  - [x] Move Python examples into examples/src/main/python
  - [x] Update docs to reflect these changes
  Author: Sandeep <sandeep@techaddict.me>
  This patch had conflicts when merged, resolved by Committer: Matei Zaharia <matei@databricks.com>
  Closes #571 from techaddict/SPARK-1637 and squashes the following commits:
  47ef86c [Sandeep] Changes based on Discussions on PR, removing use of RawTextHelper from examples
  8ed2d3f [Sandeep] Docs Updated for changes, Change for java examples
  5f96121 [Sandeep] Move Python examples into examples/src/main/python
  0a8dd77 [Sandeep] Move all Scala Examples to org.apache.spark.examples (some are in org.apache.spark.streaming.examples, for instance, and others are in org.apache.spark.examples.mllib)
* [SPARK-1549] Add Python support to spark-submit (Matei Zaharia, 2014-05-06; 1 file changed, -14/+14)
  This PR updates spark-submit to allow submitting Python scripts (currently only with deploy-mode=client, but that's all that was supported before) and updates the PySpark code to properly find various paths, etc.
  One significant change is that we assume we can always find the Python files either from the Spark assembly JAR (which will happen with the Maven assembly build in make-distribution.sh) or from SPARK_HOME (which will exist in local mode even if you use sbt assembly, and should be enough for testing). This means we no longer need a weird hack to modify the environment for YARN.
  This patch also updates the Python worker manager to run python with -u, which means unbuffered output (send it to our logs right away instead of waiting a while after stuff was written); this should simplify debugging. In addition, it fixes https://issues.apache.org/jira/browse/SPARK-1709, setting the main class from a JAR's Main-Class attribute if not specified by the user, and fixes a few help strings and style issues in spark-submit. In the future we may want to make the `pyspark` shell use spark-submit as well, but it seems unnecessary for 1.0.
  Author: Matei Zaharia <matei@databricks.com>
  Closes #664 from mateiz/py-submit and squashes the following commits:
  15e9669 [Matei Zaharia] Fix some uses of path.separator property
  051278c [Matei Zaharia] Small style fixes
  0afe886 [Matei Zaharia] Add license headers
  4650412 [Matei Zaharia] Add pyFiles to PYTHONPATH in executors, remove old YARN stuff, add tests
  15f8e1e [Matei Zaharia] Set PYTHONPATH in PythonWorkerFactory in case it wasn't set from outside
  47c0655 [Matei Zaharia] More work to make spark-submit work with Python
  d4375bd [Matei Zaharia] Clean up description of spark-submit args a bit and add Python ones
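  To make the workflow concrete, the following is a minimal sketch of a Python script of the kind one could pass to spark-submit in client deploy mode; the file name, input path, and the invocation shown in the comment are illustrative assumptions, not part of the patch.

```python
# word_count.py -- illustrative only; a hypothetical invocation would look
# something like: bin/spark-submit --master local[2] word_count.py
from pyspark import SparkContext

if __name__ == "__main__":
    sc = SparkContext(appName="WordCount")
    counts = (sc.textFile("README.md")
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))
    print(counts.take(10))
    sc.stop()
```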
* SPARK-1004. PySpark on YARN (Sandy Ryza, 2014-04-29; 1 file changed, -0/+3)
  This reopens https://github.com/apache/incubator-spark/pull/640 against the new repo.
  Author: Sandy Ryza <sandy@cloudera.com>
  Closes #30 from sryza/sandy-spark-1004 and squashes the following commits:
  89889d4 [Sandy Ryza] Move unzipping py4j to the generate-resources phase so that it gets included in the jar the first time
  5165a02 [Sandy Ryza] Fix docs
  fd0df79 [Sandy Ryza] PySpark on YARN
* [SPARK-1439, SPARK-1440] Generate unified Scaladoc across projects and Javadocs (Matei Zaharia, 2014-04-21; 1 file changed, -2/+2)
  I used the sbt-unidoc plugin (https://github.com/sbt/sbt-unidoc) to create a unified Scaladoc of our public packages, and generate Javadocs as well. One limitation is that I haven't found an easy way to exclude packages in the Javadoc; there is an SBT task that identifies Java sources to run javadoc on, but it's been very difficult to modify it from outside to change what is set in the unidoc package. Some SBT-savvy people should help with this.
  The Javadoc site also lacks package-level descriptions and things like that, so we may want to look into that. We may decide not to post these right now if it's too limited compared to the Scala one.
  Example of the built doc site: http://people.csail.mit.edu/matei/spark-unified-docs/
  Author: Matei Zaharia <matei@databricks.com>
  This patch had conflicts when merged, resolved by Committer: Patrick Wendell <pwendell@gmail.com>
  Closes #457 from mateiz/better-docs and squashes the following commits:
  a63d4a3 [Matei Zaharia] Skip Java/Scala API docs for Python package
  5ea1f43 [Matei Zaharia] Fix links to Java classes in Java guide, fix some JS for scrolling to anchors on page load
  f05abc0 [Matei Zaharia] Don't include java.lang package names
  995e992 [Matei Zaharia] Skip internal packages and class names with $ in JavaDoc
  a14a93c [Matei Zaharia] typo
  76ce64d [Matei Zaharia] Add groups to Javadoc index page, and a first package-info.java
  ed6f994 [Matei Zaharia] Generate JavaDoc as well, add titles, update doc site to use unified docs
  acb993d [Matei Zaharia] Add Unidoc plugin for the projects we want Unidoced
* SPARK-1426: Make MLlib work with NumPy versions older than 1.7 (Sandeep, 2014-04-15; 1 file changed, -3/+3)
  Currently it requires NumPy 1.7 due to using the copyto method (http://docs.scipy.org/doc/numpy/reference/generated/numpy.copyto.html) for extracting data out of an array. Replace it with a fallback.
  Author: Sandeep <sandeep@techaddict.me>
  Closes #391 from techaddict/1426 and squashes the following commits:
  d365962 [Sandeep] SPARK-1426: Make MLlib work with NumPy versions older than 1.7
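  As an illustration of the fallback idea, here is a small sketch; it is not the actual MLlib code, and the helper name is made up.

```python
import numpy as np

def copy_into(dst, src):
    # numpy.copyto exists only in NumPy >= 1.7; on older releases,
    # fall back to slice assignment, which copies element-wise.
    if hasattr(np, "copyto"):
        np.copyto(dst, src)
    else:
        dst[:] = src

buf = np.zeros(3)
copy_into(buf, np.array([1.0, 2.0, 3.0]))
```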
* SPARK-1099: Introduce local[*] mode to infer number of cores (Aaron Davidson, 2014-04-07; 1 file changed, -3/+4)
  This is the default mode for running spark-shell and pyspark, intended to allow users running spark for the first time to see the performance benefits of using multiple cores, while not breaking backwards compatibility for users who use "local" mode and expect exactly 1 core.
  Author: Aaron Davidson <aaron@databricks.com>
  Closes #182 from aarondav/110 and squashes the following commits:
  a88294c [Aaron Davidson] Rebased changes for new spark-shell
  a9f393e [Aaron Davidson] SPARK-1099: Introduce local[*] mode to infer number of cores
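  A minimal sketch of what this looks like from PySpark, assuming the API of that era; the application name is arbitrary.

```python
from pyspark import SparkContext

# "local[*]" asks Spark to use one worker thread per core on the machine,
# while plain "local" still means exactly one core.
sc = SparkContext("local[*]", "CoreCountDemo")
print(sc.defaultParallelism)  # roughly tracks the number of cores detected
sc.stop()
```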
* SPARK-1421. Make MLlib work on Python 2.6 (Matei Zaharia, 2014-04-05; 1 file changed, -1/+1)
  The reason it wasn't working was passing a bytearray to stream.write(), which is not supported in Python 2.6 but is in 2.7. (This array came from NumPy when we converted data to send it over to Java.) Now we just convert those bytearrays to strings of bytes, which preserves nonprintable characters as well.
  Author: Matei Zaharia <matei@databricks.com>
  Closes #335 from mateiz/mllib-python-2.6 and squashes the following commits:
  f26c59f [Matei Zaharia] Update docs to no longer say we need Python 2.7
  a84d6af [Matei Zaharia] SPARK-1421. Make MLlib work on Python 2.6
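  An illustrative sketch of the conversion described above, assuming a Python 2 interpreter; the byte values and file path are made up.

```python
# On Python 2.6, file.write() rejects bytearray objects, so convert to a
# plain byte string first; str()/bytes() of a bytearray preserves every
# byte value, including non-printable ones.
payload = bytearray([0, 255, 10, 65])
data = bytes(payload)  # on Python 2 this is the same as str(payload)
with open("/tmp/demo.bin", "wb") as stream:
    stream.write(data)
```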
* SPARK-1183. Don't use "worker" to mean executor (Sandy Ryza, 2014-03-13; 1 file changed, -3/+3)
  Author: Sandy Ryza <sandy@cloudera.com>
  Closes #120 from sryza/sandy-spark-1183 and squashes the following commits:
  5066a4a [Sandy Ryza] Remove "worker" in a couple comments
  0bd1e46 [Sandy Ryza] Remove --am-class from usage
  bfc8fe0 [Sandy Ryza] Remove am-class from doc and fix yarn-alpha
  607539f [Sandy Ryza] Address review comments
  74d087a [Sandy Ryza] SPARK-1183. Don't use "worker" to mean executor
* Updated link for pyspark examples in docs (Jyotiska NK, 2014-02-26; 1 file changed, -1/+1)
  Author: Jyotiska NK <jyotiska123@gmail.com>
  Closes #22 from jyotiska/pyspark_docs and squashes the following commits:
  426136c [Jyotiska NK] Updated link for pyspark examples
* Clarify that Python 2.7 is only needed for MLlib (Matei Zaharia, 2014-01-15; 1 file changed, -2/+2)
* Update Python required version to 2.7, and mention MLlib support (Matei Zaharia, 2014-01-12; 1 file changed, -1/+7)
* Simplify and fix pyspark script (Patrick Wendell, 2014-01-07; 1 file changed, -2/+3)
  This patch removes compatibility for IPython < 1.0 but fixes the launch script and makes it much simpler. I tested this using the three commands in the PySpark documentation page:
  1. IPYTHON=1 ./pyspark
  2. IPYTHON_OPTS="notebook" ./pyspark
  3. IPYTHON_OPTS="notebook --pylab inline" ./pyspark
  There are two changes:
  - We rely on PYTHONSTARTUP env var to start PySpark
  - Removed the quotes around $IPYTHON_OPTS... having quotes gloms them together as a single argument passed to `exec`, which seemed to cause ipython to fail (it instead expects them as multiple arguments).
* Code review feedback (Holden Karau, 2014-01-05; 1 file changed, -1/+1)
* Merge remote-tracking branch 'apache-github/master' into remove-binaries (Patrick Wendell, 2014-01-03; 1 file changed, -14/+14)
  Conflicts:
  core/src/test/scala/org/apache/spark/DriverSuite.scala
  docs/python-programming-guide.md
* pyspark -> bin/pyspark (Prashant Sharma, 2014-01-02; 1 file changed, -14/+14)
* Merge branch 'master' into spark-1002-remove-jars (Prashant Sharma, 2014-01-03; 1 file changed, -7/+8)
* Updated docs for SparkConf and handled review comments (Matei Zaharia, 2013-12-30; 1 file changed, -7/+8)
* Removed sbt folder and changed docs accordingly (Prashant Sharma, 2014-01-02; 1 file changed, -1/+1)
* Add notes to python documentation about using SparkContext.setSystemProperty (Ewen Cheslack-Postava, 2013-10-22; 1 file changed, -0/+11)
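  For context, a minimal sketch of the call this commit documents, assuming the pre-SparkConf PySpark API; the property name and value are only examples.

```python
from pyspark import SparkContext

# System properties must be set before the SparkContext is created;
# the property and value below are illustrative, not prescribed.
SparkContext.setSystemProperty("spark.executor.memory", "2g")
sc = SparkContext("local", "ConfigExample")
```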
* Fix PySpark docs and an overly long line of code after fdbae41e (Matei Zaharia, 2013-10-09; 1 file changed, -1/+1)
* Update Python API features (Matei Zaharia, 2013-09-10; 1 file changed, -1/+1)
* More fair scheduler docs and property names (Matei Zaharia, 2013-09-08; 1 file changed, -6/+6)
  Also changed uses of "job" terminology to "application" when they referred to an entire Spark program, to avoid confusion.
* Doc improvements (Matei Zaharia, 2013-09-01; 1 file changed, -18/+18)
* Fix more URLs in docs (Matei Zaharia, 2013-09-01; 1 file changed, -2/+5)
* More updates, describing changes to recommended use of environment vars and new Python stuff (Matei Zaharia, 2013-08-31; 1 file changed, -2/+2)
* Update some build instructions because only sbt assembly and mvn package are now needed (Matei Zaharia, 2013-08-29; 1 file changed, -1/+1)
* Add docs about ipython (Matei Zaharia, 2013-07-29; 1 file changed, -3/+31)
* Clarify that PySpark is not supported on Windows (root, 2013-07-01; 1 file changed, -3/+2)
* Simplify Python docs a little to do substring search (Matei Zaharia, 2013-06-26; 1 file changed, -4/+3)
* Some tweaks to docs (Matei Zaharia, 2013-02-25; 1 file changed, -2/+2)
* Added checkpointing and fault-tolerance semantics to the programming guide (Tathagata Das, 2013-02-18; 1 file changed, -1/+1)
  Fixed default checkpoint interval to be a multiple of slide duration. Fixed visibility of some classes and objects to clean up docs.
* Make module help available in python shell (Patrick Wendell, 2013-01-30; 1 file changed, -0/+1)
  Also adds a line in the doc explaining how to use it.
* Include packaging and launching pyspark in guide (Patrick Wendell, 2013-01-30; 1 file changed, -2/+8)
  It's nicer if all the commands you need are made explicit.
* Fix Python guide to say accumulators are available (Matei Zaharia, 2013-01-20; 1 file changed, -1/+0)
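  As a reminder of what that feature looks like, a small sketch using the PySpark accumulator API; the input values and application name are arbitrary.

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "AccumulatorDemo")
acc = sc.accumulator(0)  # an integer accumulator starting at zero

# Tasks on workers can only add to the accumulator; the driver reads its value.
sc.parallelize([1, 2, 3, 4]).foreach(lambda x: acc.add(x))
print(acc.value)  # 10
sc.stop()
```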
* Add mapPartitionsWithSplit() to PySpark (Josh Rosen, 2013-01-08; 1 file changed, -1/+0)
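  A short sketch of how this method can be used, assuming the signature of the time, where the passed function receives the split (partition) index and an iterator over that partition; the data is arbitrary.

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "PartitionDemo")
rdd = sc.parallelize(range(8), 2)

# Pair each element with the index of the partition it lives in.
def tag_with_split(split_index, iterator):
    return ((split_index, x) for x in iterator)

print(rdd.mapPartitionsWithSplit(tag_with_split).collect())
sc.stop()
```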
* Add `pyspark` script to replace the other scripts (Josh Rosen, 2013-01-01; 1 file changed, -5/+44)
  Expand the PySpark programming guide.
* Minor documentation and style fixes for PySpark (Josh Rosen, 2013-01-01; 1 file changed, -2/+1)
* Add documentation for Python API (Josh Rosen, 2012-12-28; 1 file changed, -0/+74)