path: root/README.md
Commit message | Author | Age | Files | Lines
* [Docs] Fix Building Spark link text | Nicholas Chammas | 2015-02-02 | 1 | -1/+1
|
|   Author: Nicholas Chammas <nicholas.chammas@gmail.com>
|   Closes #4312 from nchammas/patch-2 and squashes the following commits:
|   9d943aa [Nicholas Chammas] [Docs] Fix Building Spark link text
|   (cherry picked from commit 3f941b68a2336aa7876aeda99865e7c19b53bc5c)
|   Signed-off-by: Andrew Or <andrew@databricks.com>
* Fix "Building Spark With Maven" link in README.md | Denny Lee | 2014-12-25 | 1 | -1/+1
|
|   Corrected link to the Building Spark with Maven page from its original
|   (http://spark.apache.org/docs/latest/building-with-maven.html) to the current page
|   (http://spark.apache.org/docs/latest/building-spark.html)
|   Author: Denny Lee <denny.g.lee@gmail.com>
|   Closes #3802 from dennyglee/patch-1 and squashes the following commits:
|   15f601a [Denny Lee] Update README.md
|   (cherry picked from commit 08b18c7eb790c65670778eab8a6e32486c5f76e9)
|   Signed-off-by: Josh Rosen <joshrosen@databricks.com>
* SPARK-971 [DOCS] Link to Confluence wiki from project website / documentation | Sean Owen | 2014-11-09 | 1 | -1/+2
|
|   This is a trivial change to add links to the wiki from `README.md` and the main docs page.
|   It is already linked to from spark.apache.org.
|   Author: Sean Owen <sowen@cloudera.com>
|   Closes #3169 from srowen/SPARK-971 and squashes the following commits:
|   dcb84d0 [Sean Owen] Add link to wiki from README, docs home page
|   (cherry picked from commit 8c99a47a4f0369ff3c1ecaeb860fa61ee789e987)
|   Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* fix broken links in README.md | Ryan Williams | 2014-10-27 | 1 | -1/+1
|
|   seems like `building-spark.html` was renamed to `building-with-maven.html`? Is Maven the
|   blessed build tool these days, or SBT? I couldn't find a building-with-sbt page so I went
|   with the Maven one here.
|   Author: Ryan Williams <ryan.blake.williams@gmail.com>
|   Closes #2859 from ryan-williams/broken-links-readme and squashes the following commits:
|   7692253 [Ryan Williams] fix broken links in README.md
* Update Building Spark link. | Reynold Xin | 2014-10-20 | 1 | -1/+1
|
* [Docs] minor punctuation fix | Nicholas Chammas | 2014-09-16 | 1 | -1/+1
|
|   Author: Nicholas Chammas <nicholas.chammas@gmail.com>
|   Closes #2414 from nchammas/patch-1 and squashes the following commits:
|   14664bf [Nicholas Chammas] [Docs] minor punctuation fix
* SPARK-3069 [DOCS] Build instructions in README are outdated | Sean Owen | 2014-09-16 | 1 | -62/+16
|
|   Here's my crack at Bertrand's suggestion. The Github `README.md` contains build info that's
|   outdated. It should just point to the current online docs, and reflect that Maven is the
|   primary build now.
|   (Incidentally, the stanza at the end about contributions of original work should go in
|   https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark too. It won't hurt
|   to be crystal clear about the agreement to license, given that ICLAs are not required of
|   anyone here.)
|   Author: Sean Owen <sowen@cloudera.com>
|   Closes #2014 from srowen/SPARK-3069 and squashes the following commits:
|   501507e [Sean Owen] Note that Zinc is for Maven builds too
|   db2bd97 [Sean Owen] sbt -> sbt/sbt and add note about zinc
|   be82027 [Sean Owen] Fix additional occurrences of building-with-maven -> building-spark
|   91c921f [Sean Owen] Move building-with-maven to building-spark and create a redirect. Update doc links to building-spark.html. Add jekyll-redirect-from plugin and make associated config changes (including fixing pygments deprecation). Add example of SBT to README.md
|   999544e [Sean Owen] Change "Building Spark with Maven" title to "Building Spark"; reinstate tl;dr info about dev/run-tests in README.md; add brief note about building with SBT
|   c18d140 [Sean Owen] Optionally, remove the copy of contributing text from main README.md
|   8e83934 [Sean Owen] Add CONTRIBUTING.md to trigger notice on new pull request page
|   b1c04a1 [Sean Owen] Refer to current online documentation for building, and remove slightly outdated copy in README.md
* [Docs] fix minor MLlib case typo | Nicholas Chammas | 2014-09-04 | 1 | -2/+2
|
|   Also make the list of features consistent in style.
|   Author: Nicholas Chammas <nicholas.chammas@gmail.com>
|   Closes #2278 from nchammas/patch-1 and squashes the following commits:
|   56df319 [Nicholas Chammas] [Docs] fix minor MLlib case typo
* [Docs] Run tests like in contributing guide | nchammas | 2014-08-26 | 1 | -1/+1
|
|   The Contributing to Spark guide [recommends](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-AutomatedTesting)
|   running tests by calling `./dev/run-tests`. The README should, too. `./sbt/sbt test` does
|   not cover Python tests or style tests.
|   Author: nchammas <nicholas.chammas@gmail.com>
|   Closes #2149 from nchammas/patch-2 and squashes the following commits:
|   2b3b132 [nchammas] [Docs] Run tests like in contributing guide
* [SPARK-2963] REGRESSION - The description about how to build for using CLI and Thrift JDBC server is absent in proper document | Kousuke Saruta | 2014-08-22 | 1 | -1/+4
|
|   The most important things I mentioned in #1885 is as follows.
|   * People who build Spark is not always programmer.
|   * If a person who build Spark is not a programmer, he/she won't read programmer's guide before building.
|   So, how to build for using CLI and JDBC server is not only in programmer's guide.
|   Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
|   Closes #2080 from sarutak/SPARK-2963 and squashes the following commits:
|   ee07c76 [Kousuke Saruta] Modified regression of the description about building for using Thrift JDBC server and CLI
|   ed53329 [Kousuke Saruta] Modified description and notaton of proper noun
|   07c59fc [Kousuke Saruta] Added a description about how to build to use HiveServer and CLI for SparkSQL to building-with-maven.md
|   6e6645a [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2963
|   c88fa93 [Kousuke Saruta] Added a description about building to use HiveServer and CLI for SparkSQL
* Link to Contributing to Spark wiki page on README.md. | Reynold Xin | 2014-08-22 | 1 | -0/+2
|
* SPARK-3092 [SQL]: Always include the thriftserver when -Phive is enabled. | Patrick Wendell | 2014-08-20 | 1 | -5/+1
|
|   Currently we have a separate profile called hive-thriftserver. I originally suggested this
|   in case users did not want to bundle the thriftserver, but it's ultimately lead to a lot of
|   confusion. Since the thriftserver is only a few classes, I don't see a really good reason
|   to isolate it from the rest of Hive. So let's go ahead and just include it in the same
|   profile to simplify things. This has been suggested in the past by liancheng.
|   Author: Patrick Wendell <pwendell@gmail.com>
|   Closes #2006 from pwendell/hiveserver and squashes the following commits:
|   742ea40 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into hiveserver
|   034ad47 [Patrick Wendell] SPARK-3092: Always include the thriftserver when -Phive is enabled.
* [SPARK-2963] [SQL] There no documentation about building to use HiveServer and CLI for SparkSQL | Kousuke Saruta | 2014-08-13 | 1 | -0/+9
|
|   Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
|   Closes #1885 from sarutak/SPARK-2963 and squashes the following commits:
|   ed53329 [Kousuke Saruta] Modified description and notaton of proper noun
|   07c59fc [Kousuke Saruta] Added a description about how to build to use HiveServer and CLI for SparkSQL to building-with-maven.md
|   6e6645a [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2963
|   c88fa93 [Kousuke Saruta] Added a description about building to use HiveServer and CLI for SparkSQL
* README update: added "for Big Data". | Reynold Xin | 2014-07-15 | 1 | -1/+1
|
* Update README.md to include a slightly more informative project description. | Reynold Xin | 2014-07-15 | 1 | -1/+8
|
|   (cherry picked from commit 401083be9f010f95110a819a49837ecae7d9c4ec)
|   Signed-off-by: Reynold Xin <rxin@apache.org>
* [SPARK-2457] Inconsistent description in README about build option | Kousuke Saruta | 2014-07-11 | 1 | -1/+1
|
|   Now, we should use -Pyarn instead of SPARK_YARN when building but README says as follows.
|
|     For Apache Hadoop 2.2.X, 2.1.X, 2.0.X, 0.23.x, Cloudera CDH MRv2, and other Hadoop
|     versions with YARN, also set `SPARK_YARN=true`:
|
|       # Apache Hadoop 2.0.5-alpha
|       $ sbt/sbt -Dhadoop.version=2.0.5-alpha -Pyarn assembly
|
|       # Cloudera CDH 4.2.0 with MapReduce v2
|       $ sbt/sbt -Dhadoop.version=2.0.0-cdh4.2.0 -Pyarn assembly
|
|       # Apache Hadoop 2.2.X and newer
|       $ sbt/sbt -Dhadoop.version=2.2.0 -Pyarn assembly
|
|   Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
|   Closes #1382 from sarutak/SPARK-2457 and squashes the following commits:
|   e7b2d64 [Kousuke Saruta] Replaced "SPARK_YARN=true" with "-Pyarn" in README
* HOTFIX: Minor doc update for sbt change | Patrick Wendell | 2014-07-10 | 1 | -7/+6
|
* [SPARK-1876] Windows fixes to deal with latest distribution layout changes | Matei Zaharia | 2014-05-19 | 1 | -3/+4
|
|   - Look for JARs in the right place
|   - Launch examples the same way as on Unix
|   - Load datanucleus JARs if they exist
|   - Don't attempt to parse local paths as URIs in SparkSubmit, since paths with C:\ are not valid URIs
|   - Also fixed POM exclusion rules for datanucleus (it wasn't properly excluding it, whereas SBT was)
|   Author: Matei Zaharia <matei@databricks.com>
|   Closes #819 from mateiz/win-fixes and squashes the following commits:
|   d558f96 [Matei Zaharia] Fix comment
|   228577b [Matei Zaharia] Review comments
|   d3b71c7 [Matei Zaharia] Properly exclude datanucleus files in Maven assembly
|   144af84 [Matei Zaharia] Update Windows scripts to match latest binary package layout
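A side note on the URI point in the commit above: a Windows path like `C:\...` trips up generic URI parsers because the drive letter followed by a colon is indistinguishable from a URI scheme. Spark's actual fix lived in its Scala launcher code; as a standalone illustration of the failure mode only (not Spark's code), Python's `urlparse` misreads such a path the same way:

```python
from urllib.parse import urlparse

# Feed a Windows-style path to a generic URI parser. The "C:" drive prefix
# looks exactly like a URI scheme, so the parser misclassifies it and the
# remainder is no longer a usable local file path.
result = urlparse(r"C:\Users\me\spark\examples.jar")
print(result.scheme)  # "c" -- the drive letter, misread as a scheme
print(result.path)    # "\Users\me\spark\examples.jar"
```

This is why SparkSubmit had to stop treating arbitrary local paths as URIs and handle them as plain paths instead.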
* SPARK-1565 (Addendum): Replace `run-example` with `spark-submit`. | Patrick Wendell | 2014-05-08 | 1 | -7/+12
|
|   Gives a nicely formatted message to the user when `run-example` is run to tell them to use `spark-submit`.
|   Author: Patrick Wendell <pwendell@gmail.com>
|   Closes #704 from pwendell/examples and squashes the following commits:
|   1996ee8 [Patrick Wendell] Feedback form Andrew
|   3eb7803 [Patrick Wendell] Suggestions from TD
|   2474668 [Patrick Wendell] SPARK-1565 (Addendum): Replace `run-example` with `spark-submit`.
* README update | Reynold Xin | 2014-04-18 | 1 | -11/+24
|
|   Author: Reynold Xin <rxin@apache.org>
|   Closes #443 from rxin/readme and squashes the following commits:
|   16853de [Reynold Xin] Updated SBT and Scala instructions.
|   3ac3ceb [Reynold Xin] README update
* Removed reference to incubation in README.md. | Reynold Xin | 2014-02-26 | 1 | -14/+3
|
|   Author: Reynold Xin <rxin@apache.org>
|   Closes #1 from rxin/readme and squashes the following commits:
|   b3a77cd [Reynold Xin] Removed reference to incubation in README.md.
* Update README.md | Prashant Sharma | 2014-01-08 | 1 | -1/+1
|
|   The link does not work otherwise.
* Code review feedback | Holden Karau | 2014-01-05 | 1 | -6/+8
|
* And update docs to match | Holden Karau | 2014-01-04 | 1 | -2/+2
|
* Switch from sbt to ./sbt in the README file | Holden Karau | 2014-01-04 | 1 | -2/+2
|
* Merge remote-tracking branch 'apache-github/master' into remove-binaries | Patrick Wendell | 2014-01-03 | 1 | -4/+4
|\
| |   Conflicts:
| |     core/src/test/scala/org/apache/spark/DriverSuite.scala
| |     docs/python-programming-guide.md
| * pyspark -> bin/pyspark | Prashant Sharma | 2014-01-02 | 1 | -1/+1
| |
| * run-example -> bin/run-example | Prashant Sharma | 2014-01-02 | 1 | -2/+2
| |
| * spark-shell -> bin/spark-shell | Prashant Sharma | 2014-01-02 | 1 | -1/+1
| |
* | Changes on top of Prashant's patch. | Patrick Wendell | 2014-01-03 | 1 | -16/+3
| |
| |   Closes #316
* | fixed review comments | Prashant Sharma | 2014-01-03 | 1 | -8/+11
| |
* | Removed sbt folder and changed docs accordingly | Prashant Sharma | 2014-01-02 | 1 | -7/+23
|/
* Attempt with extra repositories | Patrick Wendell | 2013-12-16 | 1 | -4/+2
|
* Merge branch 'master' into akka-bug-fix | Prashant Sharma | 2013-12-11 | 1 | -3/+5
|\
| |   Conflicts:
| |     core/pom.xml
| |     core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
| |     pom.xml
| |     project/SparkBuild.scala
| |     streaming/pom.xml
| |     yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala
| * README incorrectly suggests build sources spark-env.sh | Patrick Wendell | 2013-12-10 | 1 | -3/+0
| |
| |   This is misleading because the build doesn't source that file. IMO it's better to force
| |   people to specify build environment variables on the command line always, like we do in
| |   every example.
| * Minor doc fixes and updating README | Patrick Wendell | 2013-12-06 | 1 | -1/+6
| |
* | Merge branch 'master' into scala-2.10 | Raymond Liu | 2013-11-13 | 1 | -1/+1
|\|
| * Fixed a typo in Hadoop version in README. | Reynold Xin | 2013-11-02 | 1 | -1/+1
| |
* | Merge branch 'master' into scala-2.10 | Prashant Sharma | 2013-10-01 | 1 | -0/+1
|\|
| |   Conflicts:
| |     core/src/main/scala/org/apache/spark/ui/jobs/JobProgressUI.scala
| |     docs/_config.yml
| |     project/SparkBuild.scala
| |     repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala
| * Merge remote-tracking branch 'old/master' | Matei Zaharia | 2013-09-02 | 1 | -0/+11
| |\
| * \ Merge remote-tracking branch 'old/master' | Matei Zaharia | 2013-09-01 | 1 | -25/+52
| |\ \
| * \ \ Merge remote-tracking branch 'old/master' | Matei Zaharia | 2013-07-16 | 1 | -1/+1
| |\ \ \
| * | | | Test commit karma for Spark git. | Henry Saputra | 2013-07-15 | 1 | -0/+1
| | | | |
* | | | | Merged with master | Prashant Sharma | 2013-09-06 | 1 | -24/+62
|\ \ \ \ \
| | |_|_|/
| |/| | |
| * | | Add Apache incubator notice to README | Matei Zaharia | 2013-09-02 | 1 | -0/+11
| | |_|/
| |/| |
| * | | Initial work to rename package to org.apache.spark | Matei Zaharia | 2013-09-01 | 1 | -1/+1
| | | |
| * | | Small fixes to README | Matei Zaharia | 2013-08-31 | 1 | -26/+16
| | | |
| * | | Update some build instructions because only sbt assembly and mvn package are now needed | Matei Zaharia | 2013-08-29 | 1 | -5/+5
| | | |
| * | | Change build and run instructions to use assemblies | Matei Zaharia | 2013-08-29 | 1 | -3/+3
| | | |
| | | |   This commit makes Spark invocation saner by using an assembly JAR to find all of
| | | |   Spark's dependencies instead of adding all the JARs in lib_managed. It also packages
| | | |   the examples into an assembly and uses that as SPARK_EXAMPLES_JAR. Finally, it
| | | |   replaces the old "run" script with two better-named scripts: "run-examples" for
| | | |   examples, and "spark-class" for Spark internal classes (e.g. REPL, master, etc).
| | | |   This is also designed to minimize the confusion people have in trying to use "run"
| | | |   to run their own classes; it's not meant to do that, but now at least if they look
| | | |   at it, they can modify run-examples to do a decent job for them. As part of this,
| | | |   Bagel's examples are also now properly moved to the examples package instead of bagel.
| * | | Use Hadoop 1.2.1 in application example | Jey Kottalam | 2013-08-21 | 1 | -5/+4
| | | |