<!DOCTYPE html>
<!--[if lt IE 7]>      <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]>         <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]>         <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
    <head>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
        <title>Hardware Provisioning - Spark 0.9.2 Documentation</title>
        <meta name="description" content="">

        <link rel="stylesheet" href="css/bootstrap.min.css">
        <style>
            body {
                padding-top: 60px;
                padding-bottom: 40px;
            }
        </style>
        <meta name="viewport" content="width=device-width">
        <link rel="stylesheet" href="css/bootstrap-responsive.min.css">
        <link rel="stylesheet" href="css/main.css">

        <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>

        <link rel="stylesheet" href="css/pygments-default.css">

        
        <!-- Google analytics script -->
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-32518208-1']);
          _gaq.push(['_trackPageview']);

          (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
          })();
        </script>
        

    </head>
    <body>
        <!--[if lt IE 7]>
            <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
        <![endif]-->

        <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->

        <div class="navbar navbar-fixed-top" id="topbar">
            <div class="navbar-inner">
                <div class="container">
                    <div class="brand"><a href="index.html">
                      <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">0.9.2</span>
                    </div>
                    <ul class="nav">
                        <!--TODO(andyk): Add class="active" attribute to li some how.-->
                        <li><a href="index.html">Overview</a></li>

                        <li class="dropdown">
                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
                            <ul class="dropdown-menu">
                                <li><a href="quick-start.html">Quick Start</a></li>
                                <li><a href="scala-programming-guide.html">Spark in Scala</a></li>
                                <li><a href="java-programming-guide.html">Spark in Java</a></li>
                                <li><a href="python-programming-guide.html">Spark in Python</a></li>
                                <li class="divider"></li>
                                <li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
                                <li><a href="mllib-guide.html">MLlib (Machine Learning)</a></li>
                                <li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
                                <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
                            </ul>
                        </li>

                        <li class="dropdown">
                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
                            <ul class="dropdown-menu">
                                <li><a href="api/core/index.html#org.apache.spark.package">Spark Core for Java/Scala</a></li>
                                <li><a href="api/pyspark/index.html">Spark Core for Python</a></li>
                                <li class="divider"></li>
                                <li><a href="api/streaming/index.html#org.apache.spark.streaming.package">Spark Streaming</a></li>
                                <li><a href="api/mllib/index.html#org.apache.spark.mllib.package">MLlib (Machine Learning)</a></li>
                                <li><a href="api/bagel/index.html#org.apache.spark.bagel.package">Bagel (Pregel on Spark)</a></li>
                                <li><a href="api/graphx/index.html#org.apache.spark.graphx.package">GraphX (Graph Processing)</a></li>
                                <li class="divider"></li>
                                <li class="dropdown-submenu">
                                    <a tabindex="-1" href="#">External Data Sources</a>
                                    <ul class="dropdown-menu">
                                        <li><a href="api/external/kafka/index.html#org.apache.spark.streaming.kafka.KafkaUtils$">Kafka</a></li>
                                        <li><a href="api/external/flume/index.html#org.apache.spark.streaming.flume.FlumeUtils$">Flume</a></li>
                                        <li><a href="api/external/twitter/index.html#org.apache.spark.streaming.twitter.TwitterUtils$">Twitter</a></li>
                                        <li><a href="api/external/zeromq/index.html#org.apache.spark.streaming.zeromq.ZeroMQUtils$">ZeroMQ</a></li>
                                        <li><a href="api/external/mqtt/index.html#org.apache.spark.streaming.mqtt.MQTTUtils$">MQTT</a></li>
                                    </ul>
                                </li>
                            </ul>
                        </li>

                        <li class="dropdown">
                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
                            <ul class="dropdown-menu">
                                <li><a href="cluster-overview.html">Overview</a></li>
                                <li><a href="ec2-scripts.html">Amazon EC2</a></li>
                                <li><a href="spark-standalone.html">Standalone Mode</a></li>
                                <li><a href="running-on-mesos.html">Mesos</a></li>
                                <li><a href="running-on-yarn.html">YARN</a></li>
                            </ul>
                        </li>

                        <li class="dropdown">
                            <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
                            <ul class="dropdown-menu">
                                <li><a href="configuration.html">Configuration</a></li>
                                <li><a href="monitoring.html">Monitoring</a></li>
                                <li><a href="tuning.html">Tuning Guide</a></li>
                                <li><a href="hadoop-third-party-distributions.html">Running with CDH/HDP</a></li>
                                <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
                                <li><a href="job-scheduling.html">Job Scheduling</a></li>
                                <li class="divider"></li>
                                <li><a href="building-with-maven.html">Building Spark with Maven</a></li>
                                <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
                            </ul>
                        </li>
                    </ul>
                    <!--<p class="navbar-text pull-right"><span class="version-text">v0.9.2</span></p>-->
                </div>
            </div>
        </div>

        <div class="container" id="content">
          <h1 class="title">Hardware Provisioning</h1>

          <p>A common question Spark developers receive is how to provision hardware for Spark. While the right
hardware will depend on the situation, we make the following recommendations.</p>

<h1 id="storage-systems">Storage Systems</h1>

<p>Because most Spark jobs will likely have to read input data from an external storage system (e.g.
the Hadoop File System, or HBase), it is important to place Spark <strong>as close to this system as
possible</strong>. We recommend the following:</p>

<ul>
  <li>
    <p>If at all possible, run Spark on the same nodes as HDFS. The simplest way is to set up a Spark
<a href="spark-standalone.html">standalone mode cluster</a> on the same nodes, and configure Spark and
Hadoop&#8217;s memory and CPU usage to avoid interference (for Hadoop, the relevant options are
<code>mapred.child.java.opts</code> for the per-task memory and <code>mapred.tasktracker.map.tasks.maximum</code>
and <code>mapred.tasktracker.reduce.tasks.maximum</code> for the number of tasks). Alternatively, you can run
Hadoop and Spark on a common cluster manager like <a href="running-on-mesos.html">Mesos</a> or
<a href="running-on-yarn.html">Hadoop YARN</a>.</p>
  </li>
  <li>
    <p>If this is not possible, run Spark on different nodes in the same local-area network as HDFS.</p>
  </li>
  <li>
    <p>For low-latency data stores like HBase, it may be preferable to run computing jobs on different
nodes than the storage system to avoid interference.</p>
  </li>
</ul>
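
<p>As a rough illustration of why co-location helps, Spark already knows which hosts store each
input block. The sketch below (the HDFS URL and file path are placeholders) prints the preferred
locations of a file&#8217;s partitions; when Spark workers run on the same machines as the DataNodes,
tasks can read those blocks locally.</p>

<pre><code>// sc is an existing org.apache.spark.SparkContext.
// Placeholder HDFS URL; point this at a real file on your cluster.
val logs = sc.textFile("hdfs://namenode:8020/data/access-logs")

// Each HDFS block becomes one partition whose preferred locations are the
// DataNodes holding that block; co-located workers can read it locally.
logs.partitions.take(5).foreach { p =&gt;
  println("partition " + p.index + " prefers " + logs.preferredLocations(p).mkString(", "))
}
</code></pre>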

<h1 id="local-disks">Local Disks</h1>

<p>While Spark can perform a lot of its computation in memory, it still uses local disks to store
data that doesn&#8217;t fit in RAM, as well as to preserve intermediate output between stages. We
recommend having <strong>4-8 disks</strong> per node, configured <em>without</em> RAID (just as separate mount points).
In Linux, mount the disks with the <a href="http://www.centos.org/docs/5/html/Global_File_System/s2-manage-mountnoatime.html"><code>noatime</code> option</a>
to reduce unnecessary writes. In Spark, <a href="configuration.html">configure</a> the <code>spark.local.dir</code>
variable to be a comma-separated list of the local disks. If you are running HDFS, it&#8217;s fine to
use the same disks as HDFS.</p>
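
<p>As a minimal sketch (the mount points, master URL, and application name below are hypothetical),
the <code>spark.local.dir</code> setting can be passed through <code>SparkConf</code> so that shuffle
and spill files are spread across all of the local disks:</p>

<pre><code>import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical mount points; substitute the disks actually mounted on your workers.
val localDirs = Seq("/mnt/disk1/spark", "/mnt/disk2/spark",
                    "/mnt/disk3/spark", "/mnt/disk4/spark")

val conf = new SparkConf()
  .setMaster("spark://master:7077")                 // placeholder standalone master URL
  .setAppName("hardware-sizing-test")               // placeholder application name
  .set("spark.local.dir", localDirs.mkString(","))  // one directory per local disk

val sc = new SparkContext(conf)
</code></pre>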

<h1 id="memory">Memory</h1>

<p>In general, Spark can run well with anywhere from <strong>8 GB to hundreds of gigabytes</strong> of memory per
machine. In all cases, we recommend allocating at most 75% of the memory to Spark; leave the
rest for the operating system and buffer cache.</p>

<p>How much memory you need depends on your application. To determine how much your
application uses for a certain dataset size, load part of your dataset into a Spark RDD and use the
Storage tab of Spark&#8217;s monitoring UI (<code>http://&lt;driver-node&gt;:4040</code>) to see its size in memory.
Note that memory usage is greatly affected by storage level and serialization format &#8211; see
the <a href="tuning.html">tuning guide</a> for tips on how to reduce it.</p>
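
<p>For example, one rough way to measure this (the input path below is a placeholder) is to cache a
slice of the dataset, force it to be computed, and read its in-memory size off the Storage tab:</p>

<pre><code>// sc is an existing org.apache.spark.SparkContext.
// Placeholder input path; point this at a representative slice of your data.
val sample = sc.textFile("hdfs://namenode:8020/data/events/part-00000")

// cache() keeps the computed partitions in memory; count() forces them to be
// computed, after which their size appears on the Storage tab of the UI.
sample.cache()
sample.count()

// Scale the reported size up to the full dataset to estimate total memory, and
// try a serialized storage level (e.g. MEMORY_ONLY_SER) if it is too large.
</code></pre>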

<p>Finally, note that the Java VM does not always behave well with more than 200 GB of RAM. If you
purchase machines with more RAM than this, you can run <em>multiple worker JVMs per node</em>. In
Spark&#8217;s <a href="spark-standalone.html">standalone mode</a>, you can set the number of workers per node
with the <code>SPARK_WORKER_INSTANCES</code> variable in <code>conf/spark-env.sh</code>, and the number of cores
per worker with <code>SPARK_WORKER_CORES</code>.</p>

<h1 id="network">Network</h1>

<p>In our experience, when the data is in memory, many Spark applications are network-bound.
Using a <strong>10 Gigabit</strong> or higher network is the best way to make these applications faster.
This is especially true for &#8220;distributed reduce&#8221; applications such as group-bys, reduce-bys, and
SQL joins. In any given application, you can see how much data Spark shuffles across the network
from the application&#8217;s monitoring UI (<code>http://&lt;driver-node&gt;:4040</code>).</p>

<h1 id="cpu-cores">CPU Cores</h1>

<p>Spark scales well to tens of CPU cores per machine because it performs minimal sharing between
threads. You should likely provision at least <strong>8-16 cores</strong> per machine. Depending on the CPU
cost of your workload, you may also need more: once data is in memory, most applications are
either CPU- or network-bound.</p>


        </div> <!-- /container -->

        <script src="js/vendor/jquery-1.8.0.min.js"></script>
        <script src="js/vendor/bootstrap.min.js"></script>
        <script src="js/main.js"></script>

        <!-- A script to fix internal hash links because we have an overlapping top bar.
             Based on https://github.com/twitter/bootstrap/issues/193#issuecomment-2281510 -->
        <script>
          $(function() {
            function maybeScrollToHash() {
              if (window.location.hash && $(window.location.hash).length) {
                var newTop = $(window.location.hash).offset().top - $('#topbar').height() - 5;
                $(window).scrollTop(newTop);
              }
            }
            $(window).bind('hashchange', function() {
              maybeScrollToHash();
            });
            // Scroll now too in case we had opened the page on a hash, but wait 1 ms because some browsers
            // will try to do *their* initial scroll after running the onReady handler.
            setTimeout(function() { maybeScrollToHash(); }, 1)
          })
        </script>

    </body>
</html>