Commit message | Author | Age | Files | Lines
---|---|---|---|---
Merge pull request #898 from ilikerps/660 | Matei Zaharia | 2013-09-08 | 1 | -3/+3
  SPARK-660: Add StorageLevel support in Python
Export StorageLevel and refactor | Aaron Davidson | 2013-09-07 | 1 | -3/+3
Remove reflection, hard-code StorageLevels | Aaron Davidson | 2013-09-07 | 1 | -11/+0
  The sc.StorageLevel -> StorageLevel pathway is a bit janky, but otherwise the shell would have to call a private method of SparkContext. Having StorageLevel available in sc also doesn't seem like the end of the world. There may be a better solution, though. As for creating the StorageLevel object itself, this seems to be the best way in Python 2 for creating singleton, enum-like objects: http://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python
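The "singleton, enum-like objects" idiom the message cites boils down to attaching pre-built instances to the class as attributes. A minimal sketch of that Python 2 pattern follows; the constructor fields and level names are illustrative, not the patch's exact code:

```python
# Minimal sketch of the Python 2 enum-like singleton pattern the commit
# message references. Field names here are assumptions for illustration.
class StorageLevel(object):
    """Flags controlling how an RDD is stored."""
    def __init__(self, use_disk, use_memory, deserialized, replication=1):
        self.use_disk = use_disk
        self.use_memory = use_memory
        self.deserialized = deserialized
        self.replication = replication

# Hard-coded singletons attached as class attributes, one per level:
StorageLevel.DISK_ONLY = StorageLevel(True, False, False)
StorageLevel.MEMORY_ONLY = StorageLevel(False, True, True)
StorageLevel.MEMORY_AND_DISK = StorageLevel(True, True, True)
StorageLevel.MEMORY_AND_DISK_2 = StorageLevel(True, True, True, 2)
```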
Memoize StorageLevels read from JVM | Aaron Davidson | 2013-09-06 | 1 | -1/+1
SPARK-660: Add StorageLevel support in Python | Aaron Davidson | 2013-09-05 | 1 | -0/+11
  It uses reflection... I am not proud of that fact, but it at least ensures compatibility (sans refactoring of the StorageLevel stuff).
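How "it uses reflection" might look: enumerate the StorageLevel accessors on the JVM side through Py4J instead of hard-coding them. This is a hypothetical sketch only; sc._jvm is the standard PySpark gateway handle, but the helper name and exact mechanics are assumptions, not the commit's code:

```python
# Hypothetical sketch: reflectively discover JVM StorageLevels via Py4J.
def load_storage_levels(sc):
    cls = sc._jvm.java.lang.Class.forName(
        "org.apache.spark.storage.StorageLevel")
    jsl = sc._jvm.org.apache.spark.storage.StorageLevel
    levels = {}
    for method in cls.getMethods():
        # Members of the Scala 'object StorageLevel' surface as static
        # no-argument methods (MEMORY_ONLY(), DISK_ONLY(), ...) that
        # return StorageLevel instances.
        if method.getReturnType() == cls and len(method.getParameterTypes()) == 0:
            name = method.getName()
            levels[name] = getattr(jsl, name)()  # call the static accessor
    return levels
```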
Fixed the bug that ResultTask was not properly deserializing outputId | Reynold Xin | 2013-09-07 | 1 | -2/+2
Hot fix to resolve the compilation error caused by SPARK-821 | Reynold Xin | 2013-09-06 | 1 | -1/+1
Merge pull request #895 from ilikerps/821 | Patrick Wendell | 2013-09-05 | 7 | -7/+102
  SPARK-821: Don't cache results when action run locally on driver
Reynold's second round of comments | Aaron Davidson | 2013-09-05 | 2 | -17/+19
Add unit test and address comments | Aaron Davidson | 2013-09-05 | 5 | -6/+98
SPARK-821: Don't cache results when action run locally on driver | Aaron Davidson | 2013-09-05 | 4 | -4/+5
  Caching the results of local actions (e.g., rdd.first()) causes the driver to store entire partitions in its own memory, which may be highly constrained. This patch simply makes the CacheManager avoid caching the result of all locally-run computations.
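A minimal PySpark illustration of the scenario the message describes (standard RDD API calls; the fix itself lives in the Scala CacheManager and is not shown here):

```python
# Before SPARK-821: a locally-run action on a cached RDD forced the driver
# to hold the computed partition in its own (often small) memory.
rdd = sc.parallelize(range(1000000)).cache()

rdd.first()  # runs locally on the driver; its result is no longer cached there
rdd.count()  # a distributed action; partitions are still cached on executors
```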
Merge pull request #891 from xiajunluan/SPARK-864 | Matei Zaharia | 2013-09-05 | 1 | -1/+8
  [SPARK-864] DAGScheduler exception if we delete Worker and StandaloneExecutorBackend, then add Worker
Fix bug SPARK-864 | Andrew xia | 2013-09-05 | 1 | -1/+8
Merge pull request #893 from ilikerps/master | Patrick Wendell | 2013-09-04 | 1 | -0/+92
  SPARK-884: Add unit test to validate Spark JSON output
Fix line over 100 chars | Aaron Davidson | 2013-09-04 | 1 | -2/+2
Address Patrick's comments | Aaron Davidson | 2013-09-04 | 1 | -8/+15
SPARK-884: Add unit test to validate Spark JSON output | Aaron Davidson | 2013-09-04 | 1 | -0/+85
  This unit test simply validates that the outputs of the JsonProtocol methods are syntactically valid JSON.
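The test itself is Scala, but the idea is language-agnostic: feed each JSON-producing method's output to a strict parser and fail on any parse error. A hedged Python sketch of that check (the function name and sample string are illustrative):

```python
import json

def assert_valid_json(text):
    """Fail if 'text' is not syntactically valid JSON."""
    try:
        json.loads(text)
    except ValueError as e:  # raised by the json module on malformed input
        raise AssertionError("not syntactically valid JSON: %s" % e)

assert_valid_json('{"id": "app-20130904", "status": "SUCCESS"}')
```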
Minor spacing fix | Patrick Wendell | 2013-09-03 | 1 | -2/+4
Merge pull request #878 from tgravescs/yarnUILink | Patrick Wendell | 2013-09-03 | 9 | -26/+36
  Link the Spark UI up to the Yarn UI
Update based on review comments. Change function to prependBaseUri and fix formatting. | Y.CORP.YAHOO.COM\tgraves | 2013-09-03 | 4 | -24/+23
Review comment changes and update to org.apache packaging | Y.CORP.YAHOO.COM\tgraves | 2013-09-03 | 6 | -32/+20
Merge remote-tracking branch 'mesos/master' into yarnUILink | Y.CORP.YAHOO.COM\tgraves | 2013-09-03 | 333 | -1491/+2664
  Conflicts:
    core/src/main/scala/org/apache/spark/ui/UIUtils.scala
    core/src/main/scala/org/apache/spark/ui/jobs/PoolTable.scala
    core/src/main/scala/org/apache/spark/ui/jobs/StageTable.scala
    docs/running-on-yarn.md
fix up minor things | Y.CORP.YAHOO.COM\tgraves | 2013-08-30 | 2 | -5/+6
Link the Spark UI to the Yarn UI | Y.CORP.YAHOO.COM\tgraves | 2013-08-30 | 6 | -27/+50
Merge pull request #889 from alig/master | Matei Zaharia | 2013-09-03 | 3 | -5/+22
  Return the port the WebUI is bound to (useful if port 0 was used)
Merge branch 'master' of https://github.com/alig/spark | Ali Ghodsi | 2013-09-03 | 2 | -5/+5
  Conflicts:
    core/src/main/scala/org/apache/spark/deploy/master/Master.scala
Sort order of imports to match project guidelines | Ali Ghodsi | 2013-09-02 | 1 | -5/+5
Reynold's comment fixed | Ali Ghodsi | 2013-09-02 | 1 | -1/+1
Using configured akka timeouts | Ali Ghodsi | 2013-09-03 | 1 | -3/+5
Brushing the code up slightly | Ali Ghodsi | 2013-09-02 | 2 | -4/+4
Enabling getting the actual WEBUI port | Ali Ghodsi | 2013-09-02 | 3 | -4/+19
Add missing license headers found with RAT | Matei Zaharia | 2013-09-02 | 16 | -3/+256
Fix test | Matei Zaharia | 2013-09-02 | 1 | -1/+1
Fix spark.io.compression.codec and change default codec to LZF | Matei Zaharia | 2013-09-02 | 1 | -8/+4
Allow PySpark to launch worker.py directly on Windows | Matei Zaharia | 2013-09-01 | 1 | -8/+99
Run script fixes for Windows after package & assembly change | Matei Zaharia | 2013-09-01 | 1 | -11/+17
Move some classes to more appropriate packages | Matei Zaharia | 2013-09-01 | 128 | -284/+303
  * RDD, *RDDFunctions -> org.apache.spark.rdd
  * Utils, ClosureCleaner, SizeEstimator -> org.apache.spark.util
  * JavaSerializer, KryoSerializer -> org.apache.spark.serializer
Fix some URLs | Matei Zaharia | 2013-09-01 | 1 | -1/+1
Remove shutdown hook to stop jetty; this is unnecessary for releasing ports and creates noisy log messages | Matei Zaharia | 2013-09-01 | 1 | -1/+0
Initial work to rename package to org.apache.spark | Matei Zaharia | 2013-09-01 | 326 | -936/+941
Merge pull request #883 from alig/master | Matei Zaharia | 2013-09-01 | 1 | -2/+1
  Don't require the spark home environment variable to be set for standalone mode (change needed by SIMR)
Don't require spark home to be set for standalone mode | Ali Ghodsi | 2013-08-31 | 1 | -2/+1
Small tweak | Matei Zaharia | 2013-08-31 | 1 | -1/+1
Print output from spark-daemon only when it fails to launch | Matei Zaharia | 2013-08-31 | 1 | -1/+0
Various web UI improvements | Matei Zaharia | 2013-08-31 | 16 | -164/+971
  - Use "fluid" layout that can expand to wide browser windows, instead of the old one's limit of 1200 px
  - Remove unnecessary <hr> elements
  - Switch back to Bootstrap's default theme and tweak progress bar colors
  - Make headers more consistent between deploy and app UIs
  - Replace some inline CSS with stylesheets
Also add getConf to NewHadoopRDD | Mikhail Bautin | 2013-08-30 | 1 | -0/+3
Make HadoopRDD's configuration accessible | Mikhail Bautin | 2013-08-30 | 1 | -1/+3
Merge pull request #857 from mateiz/assembly | Matei Zaharia | 2013-08-29 | 4 | -4/+5
  Change build and run instructions to use assemblies
Update Maven build to create assemblies expected by new scripts | Matei Zaharia | 2013-08-29 | 3 | -28/+0
  This includes the following changes:
  - The "assembly" package now builds in Maven by default, and creates an assembly containing both hadoop-client and Spark, unlike the old BigTop distribution assembly that skipped hadoop-client
  - There is now a bigtop-dist package to build the old BigTop assembly
  - The repl-bin package is no longer built by default since the scripts don't rely on it; instead it can be enabled with -Prepl-bin
  - Py4J is now included in the assembly/lib folder as a local Maven repo, so that the Maven package can link to it
  - run-example now adds the original Spark classpath as well because the Maven examples assembly lists spark-core and such as provided
  - The various Maven projects add a spark-yarn dependency correctly
Fix finding of assembly JAR, as well as some pointers to ./run | Matei Zaharia | 2013-08-29 | 3 | -2/+3