path: root/docs
Commit message (Author, Date, Files, Lines -/+)
* changed the example links in the scala-programming-guid (fengdong, 2013-12-18, 1 file, -1/+1)
|
* Fixed the example link. (fengdong, 2013-12-18, 1 file, -1/+1)
|
* Merge pull request #251 from pwendell/master (Reynold Xin, 2013-12-14, 1 file, -5/+7)
|\
| |     Fix list rendering in YARN markdown docs. This is some minor clean-up which makes the list render correctly.
| * Fix list rendering in YARN markdown docs. (Patrick Wendell, 2013-12-10, 1 file, -5/+7)
| |
* | A few corrections to documentation. (Prashant Sharma, 2013-12-12, 1 file, -7/+7)
| |
* | Merge branch 'master' into akka-bug-fix (Prashant Sharma, 2013-12-11, 11 files, -18/+69)
|\|
| |     Conflicts:
| |       core/pom.xml
| |       core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
| |       pom.xml
| |       project/SparkBuild.scala
| |       streaming/pom.xml
| |       yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala
| * Small fix (Patrick Wendell, 2013-12-07, 1 file, -1/+1)
| |
| * Adding HDP 2.0 version (Patrick Wendell, 2013-12-07, 1 file, -1/+2)
| |
| * Various broken links in documentation (Patrick Wendell, 2013-12-07, 6 files, -10/+10)
| |
| * Merge pull request #240 from pwendell/master (Patrick Wendell, 2013-12-07, 1 file, -4/+4)
| |\
| | |     SPARK-917 Improve API links in nav bar
| | * SPARK-917 Improve API links in nav bar (Patrick Wendell, 2013-12-07, 1 file, -4/+4)
| | |
| * | Correct spellling error in configuration.md (Aaron Davidson, 2013-12-07, 1 file, -1/+1)
| |/
| * Minor formatting fix in config file (Patrick Wendell, 2013-12-06, 1 file, -1/+0)
| |
| * Merge pull request #236 from pwendell/shuffle-docs (Patrick Wendell, 2013-12-06, 1 file, -1/+1)
| |\
| | |     Adding disclaimer for shuffle file consolidation
| | * Adding disclaimer for shuffle file consolidation (Patrick Wendell, 2013-12-06, 1 file, -1/+1)
| | |
| * | Minor doc fixes and updating README (Patrick Wendell, 2013-12-06, 2 files, -2/+4)
| |/
| * more docs (Ali Ghodsi, 2013-12-06, 3 files, -3/+5)
| |
| * Updated documentation about the YARN v2.2 build process (Ali Ghodsi, 2013-12-06, 3 files, -1/+13)
| |
| * Merge pull request #228 from pwendell/master (Patrick Wendell, 2013-12-05, 1 file, -1/+36)
| |\
| | |     Document missing configs and set shuffle consolidation to false.
| | * Small changes from Matei review (Patrick Wendell, 2013-12-04, 1 file, -2/+2)
| | |
| | * Document missing configs and set shuffle consolidation to false. (Patrick Wendell, 2013-12-04, 1 file, -1/+36)
| | |
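A minimal sketch of toggling the consolidation setting the commit above documents; the property name spark.shuffle.consolidateFiles is an assumption based on the Spark configuration docs of this era:

```scala
import org.apache.spark.SparkContext

// Assumed property name; it must be set before the SparkContext is created.
System.setProperty("spark.shuffle.consolidateFiles", "false")
val sc = new SparkContext("local[2]", "ShuffleConsolidationExample")
```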
| * | Typo: applicaton (Andrew Ash, 2013-12-04, 1 file, -2/+2)
| |/
* | Merge branch 'master' into wip-scala-2.10 (Prashant Sharma, 2013-11-27, 2 files, -3/+27)
|\|
| |     Conflicts:
| |       core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
| |       core/src/main/scala/org/apache/spark/rdd/MapPartitionsRDD.scala
| |       core/src/main/scala/org/apache/spark/rdd/MapPartitionsWithContextRDD.scala
| |       core/src/main/scala/org/apache/spark/rdd/RDD.scala
| |       python/pyspark/rdd.py
| * Update tuning.md (Andrew Ash, 2013-11-25, 1 file, -1/+2)
| |     Clarify when the serializer is used, based on a recent user@ mailing list discussion.
| * Merge pull request #101 from colorant/yarn-client-scheduler (Matei Zaharia, 2013-11-25, 1 file, -2/+25)
| |\
| | |     For SPARK-527, support spark-shell when running on YARN (synced to trunk and resubmitted here).
| | |
| | |     In the current YARN mode, the application runs inside the Application Master as a user program, so the whole Spark context lives on the remote side. That approach cannot support applications that involve local interaction and need to run where they are launched. This pull request therefore adds a YarnClientClusterScheduler and backend. With this scheduler, the user application is launched locally, while the executors are launched by YARN on remote nodes with a thin AM that only launches the executors and monitors the Driver Actor's status, so that when the client app is done it can finish the YARN application as well. This enables spark-shell to run on YARN. It also lets other Spark applications run their Spark context locally with the master URL "yarn-client"; SparkPi, for example, can print its result on the local console instead of in the log of the remote machine where the AM runs. Docs are also updated to show how to use this yarn-client mode.
| | * Add YarnClientClusterScheduler and Backend. (Raymond Liu, 2013-11-22, 1 file, -2/+25)
| | |     With this scheduler, the user application is launched locally, while the executors are launched by YARN on remote nodes. This enables spark-shell to run on YARN.
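A minimal sketch of the yarn-client mode described above, assuming the era's style of passing the master URL straight to the SparkContext constructor; the object and app names are illustrative, and the assembly/jar environment setup is omitted:

```scala
import org.apache.spark.SparkContext

object LocalDriverOnYarn {
  def main(args: Array[String]): Unit = {
    // The driver runs here, locally; YARN's thin AM launches the executors.
    val sc = new SparkContext("yarn-client", "LocalDriverOnYarn")
    // Computed on the remote executors, printed on the local console.
    println(sc.parallelize(1 to 1000).map(_ * 2).reduce(_ + _))
    sc.stop()
  }
}
```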
* | | Improvements from the review comments and followed Boy Scout Rule. (Prashant Sharma, 2013-11-27, 1 file, -2/+2)
| | |
* | | Documenting the newly added spark properties. (Prashant Sharma, 2013-11-26, 1 file, -1/+22)
| | |
* | | Merge branch 'master' into scala-2.10-wip (Prashant Sharma, 2013-11-25, 2 files, -1/+2)
|\| |
| | |     Conflicts:
| | |       core/src/main/scala/org/apache/spark/rdd/RDD.scala
| | |       project/SparkBuild.scala
| * | Merge pull request #151 from russellcardullo/add-graphite-sink (Matei Zaharia, 2013-11-24, 1 file, -0/+1)
| |\ \
| | | |     Add graphite sink for metrics
| | | |
| | | |     This adds a metrics sink for graphite. The sink must be configured with the host and port of a graphite node and optionally may be configured with a prefix that will be prepended to all metrics that are sent to graphite.
| | * | Add graphite sink for metrics (Russell Cardullo, 2013-11-08, 1 file, -0/+1)
| | | |     This adds a metrics sink for graphite. The sink must be configured with the host and port of a graphite node and optionally may be configured with a prefix that will be prepended to all metrics that are sent to graphite.
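A minimal conf/metrics.properties sketch for the sink described above; the key names are assumed from the GraphiteSink metrics docs, and the host, port, and prefix values are placeholders:

```properties
# Route every instance's metrics to a Graphite node.
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
# Optional: prepended to every metric name sent to Graphite.
*.sink.graphite.prefix=spark-prod
```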
| * | Fix Kryo Serializer buffer inconsistency (Neal Wiggins, 2013-11-20, 1 file, -1/+1)
| | |/
| |/|
| | |     The documentation here is inconsistent with the coded default and other documentation.
* | | Merge branch 'master' of github.com:apache/incubator-spark into scala-2.10-temp (Prashant Sharma, 2013-11-21, 1 file, -0/+2)
|\| |
| | |     Conflicts:
| | |       core/src/main/scala/org/apache/spark/util/collection/PrimitiveVector.scala
| | |       streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaStreamingContext.scala
| * | Impove Spark on Yarn Error handling (tgravescs, 2013-11-19, 1 file, -0/+2)
| | |
| * | Fixed typos in the CDH4 distributions version codes. (RIA-pierre-borckmans, 2013-11-14, 1 file, -2/+2)
| | |
* | | Various merge corrections (Aaron Davidson, 2013-11-14, 1 file, -2/+2)
| | |     I've diff'd this patch against my own -- since they were both created independently, this means that two sets of eyes have gone over all the merge conflicts that were created, so I'm feeling significantly more confident in the resulting PR. @rxin has looked at the changes to the repl and is resoundingly confident that they are correct.
* | | Merge branch 'master' into scala-2.10 (Raymond Liu, 2013-11-14, 1 file, -0/+1)
|\| |
| * | Allow spark on yarn to be run from HDFS. Allows the spark.jar, app.jar, and log4j.properties to be put into hdfs. (tgravescs, 2013-11-04, 1 file, -0/+1)
| |/
* | Merge branch 'master' into scala-2.10 (Raymond Liu, 2013-11-13, 9 files, -11/+127)
|\|
| * fix persistent-hdfs (Fabrizio (Misto) Milo, 2013-11-01, 1 file, -1/+1)
| |
| * Document all the URIs for addJar/addFile (Evan Chan, 2013-11-01, 1 file, -1/+13)
| |
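A short sketch of the URI schemes that commit documents for addJar/addFile; the scheme list is assumed from the Spark docs of this era, and all paths and hosts are placeholders:

```scala
// Assumes an existing SparkContext `sc`.
sc.addJar("file:///opt/libs/helper.jar")              // file on the driver, served to workers
sc.addJar("hdfs://namenode:8020/libs/helper.jar")     // fetched from HDFS by each worker
sc.addJar("local:/opt/libs/helper.jar")               // assumed to already exist on every worker
sc.addFile("http://repo.example.com/data/lookup.txt") // fetched over HTTP
```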
| * Add a `repartition` operator. (Patrick Wendell, 2013-10-24, 1 file, -0/+4)
| |
| |     This patch adds an operator called repartition with more straightforward semantics than the current `coalesce` operator. There are a few use cases where this operator is useful:
| |
| |     1. If a user wants to increase the number of partitions in the RDD. This is more common now with streaming, e.g. a user is ingesting data on one node but wants to add more partitions to ensure parallelism of subsequent operations across threads or the cluster. Right now they have to call rdd.coalesce(numSplits, shuffle=true), which is confusing.
| |     2. If a user has input data where the number of partitions is not known, e.g. sc.textFile("some file").coalesce(50). This is semantically vague (am I growing or shrinking this RDD?) and may not work correctly if the base RDD has fewer than 50 partitions.
| |
| |     The new operator forces a shuffle every time, so it will always produce exactly the requested number of partitions. It also throws an exception rather than silently not working if a bad input is passed. I am currently adding streaming tests (this requires refactoring some of the test suite to allow testing at partition granularity), so it is not ready for merge yet, but feedback is welcome.
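A minimal sketch contrasting the two calls the commit message compares; the RDD contents and partition counts are illustrative:

```scala
// Assumes an existing SparkContext `sc`.
val rdd = sc.parallelize(1 to 100, 4)       // starts with 4 partitions

// Old route to grow an RDD: coalesce only adds partitions when a shuffle
// is forced explicitly.
val grownOld = rdd.coalesce(16, shuffle = true)

// New operator: always shuffles and always yields the requested count.
val grownNew = rdd.repartition(16)

println(grownNew.partitions.length)         // 16
```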
| * Merge pull request #97 from ewencp/pyspark-system-properties (Matei Zaharia, 2013-10-22, 1 file, -0/+11)
| |\
| | |     Add classmethod to SparkContext to set system properties.
| | |
| | |     Add a new classmethod to SparkContext to set system properties, as is possible in Scala/Java. Unlike the Java/Scala implementations, there's no access to System until the JVM bridge is created. Since SparkContext handles that, move the initialization of the JVM connection to a separate classmethod that can safely be called repeatedly as long as the same instance (or no instance) is provided.
| | * Add notes to python documentation about using SparkContext.setSystemProperty. (Ewen Cheslack-Postava, 2013-10-22, 1 file, -0/+11)
| | |
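A short PySpark sketch of the classmethod described above; the property and value are placeholders, and the call has to happen before the SparkContext is created:

```python
from pyspark import SparkContext

# Read when the JVM-backed context starts, so set it before construction.
SparkContext.setSystemProperty("spark.executor.memory", "2g")

sc = SparkContext("local", "SystemPropertyExample")
```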
| * | Docs: Fix links to RDD API documentation (Aaron Davidson, 2013-10-22, 1 file, -3/+3)
| |/
| * Merge pull request #76 from pwendell/master (Reynold Xin, 2013-10-18, 1 file, -1/+1)
| |\
| | |     Clarify compression property.
| | |
| | |     Clarifies that this governs compression of internal data, not input data or output data.
| | * Clarify compression property. (Patrick Wendell, 2013-10-18, 1 file, -1/+1)
| | |     Clarifies that this governs compression of internal data, not input data or output data.
| * | Code styling. Updated doc. (Mosharaf Chowdhury, 2013-10-17, 1 file, -0/+8)
| |/
| * Merge remote-tracking branch 'tgravescs/sparkYarnDistCache' (Matei Zaharia, 2013-10-10, 1 file, -1/+8)
| |\
| | |     Closes #11
| | |
| | |     Conflicts:
| | |       docs/running-on-yarn.md
| | |       yarn/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
| | * Adding in the --addJars option to make SparkContext.addJar work on yarn and cleanup the classpaths (tgravescs, 2013-10-03, 1 file, -0/+2)