| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| Added commented-out Google Analytics code for website docs | Matei Zaharia | 2013-02-27 | 1 | -0/+16 |
| Use new Spark EC2 scripts by default (tag: v0.7.0) | Matei Zaharia | 2013-02-26 | 2 | -8/+8 |
| More doc tweaks | Matei Zaharia | 2013-02-26 | 2 | -2/+4 |
| Some tweaks to docs | Matei Zaharia | 2013-02-26 | 3 | -14/+18 |
| Small hack to work around multiple JARs being built by `sbt package` | Matei Zaharia | 2013-02-26 | 1 | -5/+6 |
| Fix a problem with no hosts being counted as alive in the first job | Matei Zaharia | 2013-02-26 | 1 | -3/+3 |
| Fix overly large thread names in PySpark | Matei Zaharia | 2013-02-26 | 1 | -2/+2 |
| Switch docs to use Akka repo instead of Typesafe | Matei Zaharia | 2013-02-25 | 1 | -3/+3 |
| Change version number to 0.7.0 | Matei Zaharia | 2013-02-25 | 3 | -3/+3 |
| Merge branch 'master' of https://github.com/mesos/spark | Matei Zaharia | 2013-02-25 | 1 | -1/+1 |
| Merge pull request #503 from pwendell/bug-fix: createNewSparkContext should use sparkHome/jars/environment | Matei Zaharia | 2013-02-25 | 1 | -1/+1 |
| createNewSparkContext should use sparkHome/jars/environment. This fixes a bug introduced by Matei's recent change. | Patrick Wendell | 2013-02-25 | 1 | -1/+1 |
| Fix Windows script for finding examples JAR | Matei Zaharia | 2013-02-25 | 1 | -4/+4 |
| Pass a code JAR to SparkContext in our examples. Fixes SPARK-594. | Matei Zaharia | 2013-02-25 | 38 | -82/+174 |
| Merge pull request #502 from tdas/master: very minor change in a testcase | Matei Zaharia | 2013-02-25 | 1 | -2/+2 |
| Changed Flume test to use the same port as other tests, so that it can be controlled centrally | Tathagata Das | 2013-02-25 | 1 | -2/+2 |
| Merge pull request #501 from tdas/master: fixed bug in BlockManager and added a testcase | Matei Zaharia | 2013-02-25 | 2 | -3/+17 |
| Fixed replication bug in BlockManager | Tathagata Das | 2013-02-25 | 2 | -3/+17 |
| Fixed something that was reported as a compile error in ScalaDoc. For some reason, ScalaDoc complained about no such constructor for StreamingContext; it doesn't seem like an actual Scala error, but it prevented `sbt publish` from working because docs weren't built | Matei Zaharia | 2013-02-25 | 2 | -3/+3 |
| Update Hadoop dependency to 1.0.4 | Matei Zaharia | 2013-02-25 | 2 | -3/+3 |
| Merge pull request #500 from pwendell/streaming-docs: minor changes based on feedback | Tathagata Das | 2013-02-25 | 1 | -2/+2 |
| meta-data | Patrick Wendell | 2013-02-25 | 1 | -1/+1 |
| One more change done with TD | Patrick Wendell | 2013-02-25 | 1 | -1/+1 |
| Minor changes based on feedback | Patrick Wendell | 2013-02-25 | 1 | -2/+2 |
| Some tweaks to docs | Matei Zaharia | 2013-02-25 | 2 | -3/+3 |
| Merge branch 'master' of github.com:mesos/spark | Matei Zaharia | 2013-02-25 | 1 | -4/+6 |
| Merge pull request #499 from pwendell/streaming-docs: some changes to streaming failure docs | Matei Zaharia | 2013-02-25 | 1 | -4/+6 |
| Some changes to streaming failure docs. TD gave me the go-ahead to just make these changes: define "stateful DStream"; some minor wording fixes | Patrick Wendell | 2013-02-25 | 1 | -4/+6 |
| Allow passing sparkHome and JARs to StreamingContext constructor. Also warns if spark.cleaner.ttl is not set in the version where you pass your own SparkContext | Matei Zaharia | 2013-02-25 | 8 | -22/+76 |
| Set spark.deploy.spreadOut to true by default in 0.7 (improves locality) | Matei Zaharia | 2013-02-25 | 1 | -1/+1 |
| Some tweaks to docs | Matei Zaharia | 2013-02-25 | 3 | -9/+9 |
| Add a config property for Akka lifecycle event logging | Matei Zaharia | 2013-02-25 | 1 | -2/+4 |
| Fix compile error | Matei Zaharia | 2013-02-25 | 2 | -2/+2 |
| Use public method sparkContext instead of protected sc in streaming examples | Matei Zaharia | 2013-02-25 | 3 | -4/+4 |
| Change doc color scheme slightly for Spark 0.7 (to differ from 0.6) | Matei Zaharia | 2013-02-25 | 2 | -16/+16 |
| Use a single setting for disabling API doc build | Matei Zaharia | 2013-02-25 | 2 | -4/+4 |
| Merge branch 'master' of github.com:mesos/spark | Matei Zaharia | 2013-02-25 | 1 | -1/+1 |
| Merge pull request #498 from pwendell/shutup-akka: disable remote lifecycle logging from Akka | Matei Zaharia | 2013-02-25 | 1 | -1/+1 |
| Disable remote lifecycle logging from Akka. This changes the default setting to `off` for remote lifecycle events. When this is on, it is very chatty at the INFO level. It also sometimes prints out several ERROR messages when sc.stop() is called | Patrick Wendell | 2013-02-25 | 1 | -1/+1 |
| Change tabs to spaces | Matei Zaharia | 2013-02-25 | 1 | -15/+15 |
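The Akka setting referenced in the "Disable remote lifecycle logging from Akka" commit above can be expressed in Typesafe Config (HOCON) syntax roughly as follows; this is a sketch of the relevant key as documented for Akka 2.x remoting, not a reproduction of Spark's actual config file:

```
akka {
  remote {
    # Remote lifecycle events (connect/disconnect, etc.) are logged at
    # INFO level when enabled, which is very chatty; turn them off.
    log-remote-lifecycle-events = off
  }
}
```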
| Get spark.default.parallelism on each call to defaultPartitioner, instead of only once, in case the user changes it across Spark uses | Matei Zaharia | 2013-02-25 | 1 | -4/+1 |
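The fix above, re-reading spark.default.parallelism on every defaultPartitioner call rather than caching it once at startup, can be illustrated with a minimal sketch (illustrative Python, not Spark's actual Scala code; the config store and function names here are hypothetical):

```python
# Illustrative sketch: a config value that the user may change at runtime.
config = {"spark.default.parallelism": 4}

def default_partition_count():
    # Re-read the setting on each call (mirrors the commit's fix),
    # so later user changes take effect.
    return config["spark.default.parallelism"]

# The old behavior captured the value once, which misses later updates:
cached = config["spark.default.parallelism"]

config["spark.default.parallelism"] = 16
print(default_partition_count())  # reflects the update: 16
print(cached)                     # stale value from startup: 4
```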
| Merge pull request #459 from stephenh/bettersplits: change defaultPartitioner to use upstream split size | Matei Zaharia | 2013-02-25 | 9 | -42/+94 |
| Use default parallelism if it's set | Stephen Haberman | 2013-02-24 | 2 | -6/+19 |
| Merge branch 'master' into bettersplits. Conflicts: core/src/main/scala/spark/RDD.scala, core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala, core/src/test/scala/spark/ShuffleSuite.scala | Stephen Haberman | 2013-02-24 | 117 | -879/+1755 |
| Update more javadocs | Stephen Haberman | 2013-02-16 | 2 | -15/+17 |
| Tweak test names | Stephen Haberman | 2013-02-16 | 1 | -2/+2 |
| Remove fileServerSuite.txt | Stephen Haberman | 2013-02-16 | 1 | -1/+0 |
| Update default.parallelism docs, have StandaloneSchedulerBackend use it. Only brand-new RDDs (e.g. parallelize and makeRDD) now use default parallelism; everything else uses their largest parent's partitioner or partition size | Stephen Haberman | 2013-02-16 | 8 | -28/+43 |
| Change defaultPartitioner to use upstream split size. Previously it used SparkContext.defaultParallelism, which occasionally ended up being a very bad guess. Looking at upstream RDDs seems to make better use of the context. Also sorted the upstream RDDs by partition size first, since if we have a hugely-partitioned RDD and a tiny-partitioned RDD, it is unlikely we want the resulting RDD to be tiny-partitioned | Stephen Haberman | 2013-02-10 | 3 | -6/+29 |
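The heuristic described in the "Change defaultPartitioner to use upstream split size" commit above, sorting upstream RDDs by partition count and preferring an existing partitioner from the largest one before falling back to hash partitioning, can be sketched generically. This is illustrative Python, not Spark's actual Scala implementation; the `FakeRDD` class and its fields are simplified stand-ins invented for this sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeRDD:
    # Simplified stand-in for an RDD: a partition count and an
    # optional partitioner description (hypothetical fields).
    num_partitions: int
    partitioner: Optional[str] = None

def default_partitioner(*rdds: FakeRDD,
                        default_parallelism: Optional[int] = None) -> str:
    # Sort upstream RDDs by partition count, largest first, so a
    # hugely-partitioned parent outweighs a tiny-partitioned one.
    by_size = sorted(rdds, key=lambda r: r.num_partitions, reverse=True)
    # Prefer an existing partitioner, scanning from the largest upstream RDD.
    for rdd in by_size:
        if rdd.partitioner is not None:
            return rdd.partitioner
    # Otherwise hash-partition: use default parallelism if set,
    # else the largest upstream partition count.
    n = default_parallelism if default_parallelism else by_size[0].num_partitions
    return f"HashPartitioner({n})"

big = FakeRDD(num_partitions=1000)
tiny = FakeRDD(num_partitions=2)
print(default_partitioner(big, tiny))  # HashPartitioner(1000), not 2
```

Sorting before scanning is the key point of the commit: without it, a join of a 1000-partition RDD with a 2-partition RDD could end up with only 2 partitions.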
| Merge pull request #497 from ScrapCodes/dep-resolution-fix: moving Akka dependency resolver to shared | Matei Zaharia | 2013-02-25 | 1 | -1/+3 |