Diffstat (limited to 'site/docs/1.0.0/hadoop-third-party-distributions.html')
-rw-r--r-- | site/docs/1.0.0/hadoop-third-party-distributions.html | 256
1 file changed, 256 insertions, 0 deletions
diff --git a/site/docs/1.0.0/hadoop-third-party-distributions.html b/site/docs/1.0.0/hadoop-third-party-distributions.html
new file mode 100644
index 000000000..e62b74a2d
--- /dev/null
+++ b/site/docs/1.0.0/hadoop-third-party-distributions.html
@@ -0,0 +1,256 @@
<h1 class="title">Third-Party Hadoop Distributions</h1>

<p>Spark can run against all versions of Cloudera’s Distribution Including Apache Hadoop (CDH) and
the Hortonworks Data Platform (HDP). There are a few things to keep in mind when using Spark
with these distributions:</p>

<h1 id="compile-time-hadoop-version">Compile-time Hadoop Version</h1>

<p>When compiling Spark, you’ll need to specify the Hadoop version by defining the <code>hadoop.version</code>
property. For certain versions, you will also need to specify additional profiles. For more detail,
see the guide on <a href="building-with-maven.html#specifying-the-hadoop-version">building with Maven</a>:</p>

<pre><code>mvn -Dhadoop.version=1.0.4 -DskipTests clean package
mvn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package
</code></pre>

<p>The table below lists the corresponding <code>hadoop.version</code> code for each CDH/HDP release. Note that
some Hadoop releases are binary compatible across client versions. This means the pre-built Spark
distribution may “just work” without you needing to compile it. That said, we recommend compiling with
the <em>exact</em> Hadoop version you are running to avoid any compatibility errors.</p>

<table>
  <tr valign="top">
    <td>
      <h3>CDH Releases</h3>
      <table class="table" style="width:350px; margin-right: 20px;">
        <tr><th>Release</th><th>Version code</th></tr>
        <tr><td>CDH 4.X.X (YARN mode)</td><td>2.0.0-cdh4.X.X</td></tr>
        <tr><td>CDH 4.X.X</td><td>2.0.0-mr1-cdh4.X.X</td></tr>
        <tr><td>CDH 3u6</td><td>0.20.2-cdh3u6</td></tr>
        <tr><td>CDH 3u5</td><td>0.20.2-cdh3u5</td></tr>
        <tr><td>CDH 3u4</td><td>0.20.2-cdh3u4</td></tr>
      </table>
    </td>
    <td>
      <h3>HDP Releases</h3>
      <table class="table" style="width:350px;">
        <tr><th>Release</th><th>Version code</th></tr>
        <tr><td>HDP 1.3</td><td>1.2.0</td></tr>
        <tr><td>HDP 1.2</td><td>1.1.2</td></tr>
        <tr><td>HDP 1.1</td><td>1.0.3</td></tr>
        <tr><td>HDP 1.0</td><td>1.0.3</td></tr>
        <tr><td>HDP 2.0</td><td>2.2.0</td></tr>
      </table>
    </td>
  </tr>
</table>
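<p>As a concrete illustration, building against a hypothetical CDH 4.2.0 cluster running MapReduce v1 would fill in the
<code>2.0.0-mr1-cdh4.X.X</code> pattern from the table above (the 4.2.0 release number here is only an example, not a
recommendation):</p>

<pre><code>mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 -DskipTests clean package
</code></pre>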
<p>In SBT, the equivalent can be achieved by setting the <code>SPARK_HADOOP_VERSION</code> environment variable:</p>

<pre><code>SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly
</code></pre>

<h1 id="linking-applications-to-the-hadoop-version">Linking Applications to the Hadoop Version</h1>

<p>In addition to compiling Spark itself against the right version, you need to add a Maven dependency on that
version of <code>hadoop-client</code> to any Spark applications you run, so they can also talk to the HDFS version
on the cluster. If you are using CDH, you also need to add the Cloudera Maven repository.
This looks as follows in SBT:</p>

<pre><code>libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "<version>"

// If using CDH, also add the Cloudera repository
resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
</code></pre>

<p>Or in Maven:</p>

<pre><code><project>
  <dependencies>
    ...
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>[version]</version>
    </dependency>
  </dependencies>

  <!-- If using CDH, also add the Cloudera repository -->
  <repositories>
    ...
    <repository>
      <id>Cloudera repository</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
  </repositories>
</project>
</code></pre>
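<p>For instance, once the <code>hadoop-client</code> dependency matches your cluster, an application can read from HDFS
in the usual way. The following is a minimal sketch only; the <code>HdfsLineCount</code> name, the namenode address, and
the input path are placeholders rather than values from this guide:</p>

<pre><code>import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical example: counts the lines of a file stored in HDFS.
// Replace the namenode host, port, and path with your cluster's values.
object HdfsLineCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("HdfsLineCount")
    val sc = new SparkContext(conf)
    val lines = sc.textFile("hdfs://namenode:8020/user/example/input.txt")
    println("Line count: " + lines.count())
    sc.stop()
  }
}
</code></pre>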
<h1 id="where-to-run-spark">Where to Run Spark</h1>

<p>As described in the <a href="hardware-provisioning.html#storage-systems">Hardware Provisioning</a> guide,
Spark can run in a variety of deployment modes:</p>

<ul>
  <li>Using a dedicated set of Spark nodes in your cluster. These nodes should be co-located with your
Hadoop installation.</li>
  <li>Running on the same nodes as an existing Hadoop installation, with a fixed amount of memory and
number of cores dedicated to Spark on each node.</li>
  <li>Running Spark alongside Hadoop using a cluster resource manager, such as YARN or Mesos.</li>
</ul>

<p>These options are identical for users of both CDH and HDP.</p>

<h1 id="inheriting-cluster-configuration">Inheriting Cluster Configuration</h1>

<p>If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that
should be included on Spark’s classpath:</p>

<ul>
  <li><code>hdfs-site.xml</code>, which provides default behaviors for the HDFS client.</li>
  <li><code>core-site.xml</code>, which sets the default filesystem name.</li>
</ul>

<p>The location of these configuration files varies across CDH and HDP versions, but
a common location is inside of <code>/etc/hadoop/conf</code>. Some tools, such as Cloudera Manager, create
configurations on the fly, but offer a mechanism to download copies of them.</p>

<p>To make these files visible to Spark, set <code>HADOOP_CONF_DIR</code> in <code>$SPARK_HOME/spark-env.sh</code>
to a location containing the configuration files.</p>
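<p>A minimal <code>spark-env.sh</code> entry might look like the following, assuming the common
<code>/etc/hadoop/conf</code> location mentioned above (adjust the path to wherever your distribution actually
places these files):</p>

<pre><code># Point Spark at the directory containing hdfs-site.xml and core-site.xml
# (/etc/hadoop/conf is an assumed default; adjust for your cluster)
export HADOOP_CONF_DIR=/etc/hadoop/conf
</code></pre>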