| author | Matei Zaharia <matei@eecs.berkeley.edu> | 2013-09-06 00:29:37 -0400 |
|---|---|---|
| committer | Matei Zaharia <matei@eecs.berkeley.edu> | 2013-09-08 00:29:11 -0700 |
| commit | 98fb69822cf780160bca51abeaab7c82e49fab54 (patch) | |
| tree | 524a1e75519f6a5cc65d004501d6237228db97f2 /docs/contributing-to-spark.md | |
| parent | 38488aca8a67d5d2749b82e3fd5f3dc50873d09a (diff) | |
Work in progress:
- Add job scheduling docs
- Rename some fair scheduler properties
- Organize intro page better
- Link to Apache wiki for "contributing to Spark"
Diffstat (limited to 'docs/contributing-to-spark.md')

| -rw-r--r-- | docs/contributing-to-spark.md | 24 |
|---|---|---|

1 file changed, 3 insertions(+), 21 deletions(-)
```diff
diff --git a/docs/contributing-to-spark.md b/docs/contributing-to-spark.md
index 50feeb2d6c..ef1b3ad6da 100644
--- a/docs/contributing-to-spark.md
+++ b/docs/contributing-to-spark.md
@@ -3,24 +3,6 @@ layout: global
 title: Contributing to Spark
 ---
-The Spark team welcomes contributions in the form of GitHub pull requests. Here are a few tips to get your contribution in:
-
-- Break your work into small, single-purpose patches if possible. It's much harder to merge in a large change with a lot of disjoint features.
-- Submit the patch as a GitHub pull request. For a tutorial, see the GitHub guides on [forking a repo](https://help.github.com/articles/fork-a-repo) and [sending a pull request](https://help.github.com/articles/using-pull-requests).
-- Follow the style of the existing codebase. Specifically, we use [standard Scala style guide](http://docs.scala-lang.org/style/), but with the following changes:
-  * Maximum line length of 100 characters.
-  * Always import packages using absolute paths (e.g. `scala.collection.Map` instead of `collection.Map`).
-  * No "infix" syntax for methods other than operators. For example, don't write `table containsKey myKey`; replace it with `table.containsKey(myKey)`.
-- Make sure that your code passes the unit tests. You can run the tests with `sbt/sbt test` in the root directory of Spark.
-  But first, make sure that you have [configured a spark-env.sh](configuration.html) with at least
-  `SCALA_HOME`, as some of the tests try to spawn subprocesses using this.
-- Add new unit tests for your code. We use [ScalaTest](http://www.scalatest.org/) for testing. Just add a new Suite in `core/src/test`, or methods to an existing Suite.
-- If you'd like to report a bug but don't have time to fix it, you can still post it to our [issue tracker]({{site.SPARK_ISSUE_TRACKER_URL}}), or email the [mailing list](http://www.spark-project.org/mailing-lists.html).
-
-# Licensing of Contributions
-
-Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please
-state that the contribution is your original work and that you license the work to the project under the project's open source
-license. *Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other
-means you agree to license the material under the project's open source license and warrant that you have the legal authority
-to do so.*
+The Spark team welcomes all forms of contributions, including bug reports, documentation or patches.
+For the newest information on how to contribute to the project, please read the
+[wiki page on contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark).
```
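The style rules quoted in the removed text (absolute import paths, dot notation with parentheses instead of infix syntax for non-operator methods) can be sketched with a small Scala example. This is an illustrative sketch only; the `StyleExample` object, its `hasKey` helper, and the `table` map are hypothetical names, not part of the Spark codebase.

```scala
// Absolute import path, per the removed style note:
// `scala.collection.mutable.HashMap`, not `collection.mutable.HashMap`.
import scala.collection.mutable.HashMap

object StyleExample {
  // Dot notation with explicit parentheses; the removed text discourages
  // the infix form `table contains key` for non-operator methods.
  def hasKey(table: HashMap[String, Int], key: String): Boolean =
    table.contains(key)

  def main(args: Array[String]): Unit = {
    val table = HashMap("myKey" -> 42)
    println(hasKey(table, "myKey"))
  }
}
```

The same call written infix style (`table contains "myKey"`) compiles, but the removed guidance reserved infix syntax for symbolic operators only.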