path: root/docs/building-spark.md
author    Sean Owen <sowen@cloudera.com>    2015-01-09 09:35:46 -0800
committer Patrick Wendell <pwendell@gmail.com>    2015-01-09 09:35:46 -0800
commit    547df97715580f99ae573a49a86da12bf20cbc3d (patch)
tree      6f2ceb3ad12f5545b0c2beaaba9cbaa1e8b50817 /docs/building-spark.md
parent    b4034c3f889bf24f60eb806802866b48e4cbe55c (diff)
SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project
This PR simply points to the IntelliJ wiki page instead of also including IntelliJ notes in the docs. The intent, however, is to also update the wiki page with updated tips. This is the text I propose for the IntelliJ section on the wiki. I realize it omits some of the existing instructions on the wiki, about enabling Hive, but I think those are actually optional.

------

IntelliJ supports both Maven- and SBT-based projects. It is recommended, however, to import Spark as a Maven project. Choose "Import Project..." from the File menu, and select the `pom.xml` file in the Spark root directory. It is fine to leave all settings at their default values in the Maven import wizard, with two caveats. First, it is usually useful to enable "Import Maven projects automatically", since changes to the project structure will then automatically update the IntelliJ project. Second, note the step that prompts you to choose active Maven build profiles. As documented above, some build configurations require specific profiles to be enabled. The same profiles that are enabled with `-P[profile name]` above may be enabled on this screen. For example, if developing for Hadoop 2.4 with YARN support, enable the `yarn` and `hadoop-2.4` profiles. These selections can be changed later by accessing the "Maven Projects" tool window from the View menu and expanding the Profiles section.

"Rebuild Project" can fail the first time the project is compiled, because generated source files are not automatically generated. Try clicking the "Generate Sources and Update Folders For All Projects" button in the "Maven Projects" tool window to manually generate these sources.

Compilation may fail with an error like "scalac: bad option: -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar". If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and clear the "Additional compiler options" field. It will work then, although the option will return when the project reimports.

Author: Sean Owen <sowen@cloudera.com>

Closes #3952 from srowen/SPARK-5136 and squashes the following commits:

f3baa66 [Sean Owen] Point to new IJ / Eclipse wiki link
016b7df [Sean Owen] Point to IntelliJ wiki page instead of also including IntelliJ notes in the docs
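For reference, the profiles selected in the IntelliJ import wizard are the same ones passed with `-P` on the Maven command line. A minimal sketch of the equivalent command-line build for the Hadoop 2.4 / YARN example above, where the `hadoop.version` value shown is illustrative:

```
# Build Spark with the yarn and hadoop-2.4 profiles enabled, matching
# the profile selection made in IntelliJ's Maven import wizard.
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
```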
Diffstat (limited to 'docs/building-spark.md')
-rw-r--r-- docs/building-spark.md | 5
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/docs/building-spark.md b/docs/building-spark.md
index c1bcd91b5b..fb93017861 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -151,9 +151,10 @@ Thus, the full flow for running continuous-compilation of the `core` submodule m
$ mvn scala:cc
```
-# Using With IntelliJ IDEA
+# Building Spark with IntelliJ IDEA or Eclipse
-This setup works fine in IntelliJ IDEA 11.1.4. After opening the project via the pom.xml file in the project root folder, you only need to activate either the hadoop1 or hadoop2 profile in the "Maven Properties" popout. We have not tried Eclipse/Scala IDE with this.
+For help in setting up IntelliJ IDEA or Eclipse for Spark development, and troubleshooting, refer to the
+[wiki page for IDE setup](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-IDESetup).
# Building Spark Debian Packages