<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>
Examples | Apache Spark
</title>
<!-- Bootstrap core CSS -->
<link href="/css/cerulean.min.css" rel="stylesheet">
<link href="/css/custom.css" rel="stylesheet">
<script type="text/javascript">
// Google Analytics initialization
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-32518208-2']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
// Adds a slight delay to outbound links to allow async reporting
function trackOutboundLink(link, category, action) {
try {
_gaq.push(['_trackEvent', category , action]);
} catch(err){}
setTimeout(function() {
document.location.href = link.href;
}, 100);
}
</script>
<!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js"></script>
<![endif]-->
</head>
<body>
<script src="https://code.jquery.com/jquery.js"></script>
<script src="//netdna.bootstrapcdn.com/bootstrap/3.0.3/js/bootstrap.min.js"></script>
<script src="/js/lang-tabs.js"></script>
<script src="/js/downloads.js"></script>
<div class="container" style="max-width: 1200px;">
<div class="masthead">
<p class="lead">
<a href="/">
<img src="/images/spark-logo.png"
style="height:100px; width:auto; vertical-align: bottom; margin-top: 20px;"></a><span class="tagline">
Lightning-fast cluster computing
</span>
</p>
</div>
<nav class="navbar navbar-default" role="navigation">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse"
data-target="#navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="navbar-collapse-1">
<ul class="nav navbar-nav">
<li><a href="/downloads.html">Download</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">
Libraries <b class="caret"></b>
</a>
<ul class="dropdown-menu">
<li><a href="/sql/">Spark SQL</a></li>
<li><a href="/streaming/">Spark Streaming</a></li>
<li><a href="/mllib/">MLlib (machine learning)</a></li>
<li><a href="/graphx/">GraphX (graph)</a></li>
<li class="divider"></li>
<li><a href="http://spark-packages.org">Third-Party Packages</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">
Documentation <b class="caret"></b>
</a>
<ul class="dropdown-menu">
<li><a href="/documentation.html">Overview</a></li>
<li><a href="/docs/latest/">Latest Release (Spark 1.4.0)</a></li>
</ul>
</li>
<li><a href="/examples.html">Examples</a></li>
<li class="dropdown">
<a href="/community.html" class="dropdown-toggle" data-toggle="dropdown">
Community <b class="caret"></b>
</a>
<ul class="dropdown-menu">
<li><a href="/community.html">Mailing Lists</a></li>
<li><a href="/community.html#events">Events and Meetups</a></li>
<li><a href="/community.html#history">Project History</a></li>
<li><a href="https://cwiki.apache.org/confluence/display/SPARK/Powered+By+Spark">Powered By</a></li>
<li><a href="https://cwiki.apache.org/confluence/display/SPARK/Committers">Project Committers</a></li>
<li><a href="https://issues.apache.org/jira/browse/SPARK">Issue Tracker</a></li>
</ul>
</li>
<li><a href="/faq.html">FAQ</a></li>
</ul>
</div>
<!-- /.navbar-collapse -->
</nav>
<div class="row">
<div class="col-md-3 col-md-push-9">
<div class="news" style="margin-bottom: 20px;">
<h5>Latest News</h5>
<ul class="list-unstyled">
<li><a href="/news/spark-1-4-0-released.html">Spark 1.4.0 released</a>
<span class="small">(Jun 11, 2015)</span></li>
<li><a href="/news/one-month-to-spark-summit-2015.html">One month to Spark Summit 2015 in San Francisco</a>
<span class="small">(May 15, 2015)</span></li>
<li><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a>
<span class="small">(May 15, 2015)</span></li>
<li><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a>
<span class="small">(Apr 20, 2015)</span></li>
</ul>
<p class="small" style="text-align: right;"><a href="/news/index.html">Archive</a></p>
</div>
<div class="hidden-xs hidden-sm">
<a href="/downloads.html" class="btn btn-success btn-lg btn-block" style="margin-bottom: 30px;">
Download Spark
</a>
<p style="font-size: 16px; font-weight: 500; color: #555;">
Built-in Libraries:
</p>
<ul class="list-none">
<li><a href="/sql/">Spark SQL</a></li>
<li><a href="/streaming/">Spark Streaming</a></li>
<li><a href="/mllib/">MLlib (machine learning)</a></li>
<li><a href="/graphx/">GraphX (graph)</a></li>
</ul>
<a href="http://spark-packages.org">Third-Party Packages</a>
</div>
</div>
<div class="col-md-9 col-md-pull-3">
<h2>Spark Examples</h2>
<p>These examples give a quick overview of the Spark API.
Spark is built on the concept of <em>distributed datasets</em>, which contain arbitrary Java or
Python objects. You create a dataset from external data, then apply parallel operations
to it. There are two types of operations: <em>transformations</em>, which define a new dataset based on
previous ones, and <em>actions</em>, which kick off a job to execute on a cluster.</p>
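<p>The transformation/action split can be mimicked in plain Python with lazy generators. This is a conceptual sketch only, not the Spark API: a transformation builds a recipe without touching the data, and an action is whatever finally consumes it.</p>

```python
# Conceptual sketch (plain Python, not Spark): transformations are lazy,
# actions force evaluation.

log_lines = ["INFO starting up", "ERROR disk full", "ERROR MySQL timeout"]

# "Transformation": nothing is scanned yet; this only describes the work.
errors = (line for line in log_lines if "ERROR" in line)

# "Action": consuming the generator finally runs the pipeline.
result = list(errors)
print(result)  # ['ERROR disk full', 'ERROR MySQL timeout']
```

<p>Spark's transformations are lazy in the same way, which lets the engine plan a whole chain of them before running a job.</p>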
<h3>Text Search</h3>
<p>In this example, we search through the error messages in a log file:</p>
<ul class="nav nav-tabs">
<li class="lang-tab lang-tab-python active"><a href="#">Python</a></li>
<li class="lang-tab lang-tab-scala"><a href="#">Scala</a></li>
<li class="lang-tab lang-tab-java"><a href="#">Java</a></li>
</ul>
<div class="tab-content">
<div class="tab-pane tab-pane-python active">
<div class="code code-tab">
text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
errors = text_file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br />
<span class="comment"># Count all the errors</span><br />
errors.<span class="sparkop">count</span>()<br />
<span class="comment"># Count errors mentioning MySQL</span><br />
errors.<span class="sparkop">filter</span>(<span class="closure">lambda line: "MySQL" in line</span>).<span class="sparkop">count</span>()<br />
<span class="comment"># Fetch the MySQL errors as an array of strings</span><br />
errors.<span class="sparkop">filter</span>(<span class="closure">lambda line: "MySQL" in line</span>).<span class="sparkop">collect</span>()<br />
</div>
</div>
<div class="tab-pane tab-pane-scala">
<div class="code code-tab">
<span class="keyword">val</span> textFile = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
<span class="keyword">val</span> errors = textFile.<span class="sparkop">filter</span>(<span class="closure">line => line.contains("ERROR")</span>)<br />
<span class="comment">// Count all the errors</span><br />
errors.<span class="sparkop">count</span>()<br />
<span class="comment">// Count errors mentioning MySQL</span><br />
errors.<span class="sparkop">filter</span>(<span class="closure">line => line.contains("MySQL")</span>).<span class="sparkop">count</span>()<br />
<span class="comment">// Fetch the MySQL errors as an array of strings</span><br />
errors.<span class="sparkop">filter</span>(<span class="closure">line => line.contains("MySQL")</span>).<span class="sparkop">collect</span>()<br />
</div>
</div>
<div class="tab-pane tab-pane-java">
<div class="code code-tab">
JavaRDD&lt;String&gt; textFile = spark.textFile(<span class="string">"hdfs://..."</span>);<br />
JavaRDD&lt;String&gt; errors = textFile.<span class="sparkop">filter</span>(<span class="closure">new Function&lt;String, Boolean&gt;() {<br />
public Boolean call(String s) { return s.contains("ERROR"); }<br />
}</span>);<br />
<span class="comment">// Count all the errors</span><br />
errors.<span class="sparkop">count</span>();<br />
<span class="comment">// Count errors mentioning MySQL</span><br />
errors.<span class="sparkop">filter</span>(<span class="closure">new Function&lt;String, Boolean&gt;() {<br />
public Boolean call(String s) { return s.contains("MySQL"); }<br />
}</span>).<span class="sparkop">count</span>();<br />
<span class="comment">// Fetch the MySQL errors as an array of strings</span><br />
errors.<span class="sparkop">filter</span>(<span class="closure">new Function&lt;String, Boolean&gt;() {<br />
public Boolean call(String s) { return s.contains("MySQL"); }<br />
}</span>).<span class="sparkop">collect</span>();<br />
</div>
</div>
</div>
<p>The red code fragments are function literals (closures) that get passed automatically to the cluster. The blue ones are Spark operations.</p>
<h3>In-Memory Text Search</h3>
<p>Spark can <em>cache</em> datasets in memory to speed up reuse. Continuing the example above, we can keep just the error messages in memory with:</p>
<ul class="nav nav-tabs">
<li class="lang-tab lang-tab-python active"><a href="#">Python</a></li>
<li class="lang-tab lang-tab-scala"><a href="#">Scala</a></li>
<li class="lang-tab lang-tab-java"><a href="#">Java</a></li>
</ul>
<div class="tab-content">
<div class="tab-pane tab-pane-python active">
<div class="code code-tab">
errors.<span class="sparkop">cache</span>()
</div>
</div>
<div class="tab-pane tab-pane-scala">
<div class="code code-tab">
errors.<span class="sparkop">cache</span>()
</div>
</div>
<div class="tab-pane tab-pane-java">
<div class="code code-tab">
errors.<span class="sparkop">cache</span>();
</div>
</div>
</div>
<p>After the first action that uses <code>errors</code>, subsequent actions will be much faster because they read the cached data from memory.</p>
<h3>Word Count</h3>
<p>In this example, we use a few more transformations to build a dataset of (String, Int) pairs called <code>counts</code> and then save it to a file.</p>
<ul class="nav nav-tabs">
<li class="lang-tab lang-tab-python active"><a href="#">Python</a></li>
<li class="lang-tab lang-tab-scala"><a href="#">Scala</a></li>
<li class="lang-tab lang-tab-java"><a href="#">Java</a></li>
</ul>
<div class="tab-content">
<div class="tab-pane tab-pane-python active">
<div class="code code-tab">
text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
counts = text_file.<span class="sparkop">flatMap</span>(<span class="closure">lambda line: line.split(" ")</span>) \<br />
.<span class="sparkop">map</span>(<span class="closure">lambda word: (word, 1)</span>) \<br />
.<span class="sparkop">reduceByKey</span>(<span class="closure">lambda a, b: a + b</span>)<br />
counts.<span class="sparkop">saveAsTextFile</span>(<span class="string">"hdfs://..."</span>)
</div>
</div>
<div class="tab-pane tab-pane-scala">
<div class="code code-tab">
<span class="keyword">val</span> textFile = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
<span class="keyword">val</span> counts = textFile.<span class="sparkop">flatMap</span>(<span class="closure">line => line.split(" ")</span>)<br />
.<span class="sparkop">map</span>(<span class="closure">word => (word, 1)</span>)<br />
.<span class="sparkop">reduceByKey</span>(<span class="closure">_ + _</span>)<br />
counts.<span class="sparkop">saveAsTextFile</span>(<span class="string">"hdfs://..."</span>)
</div>
</div>
<div class="tab-pane tab-pane-java">
<div class="code code-tab">
JavaRDD&lt;String&gt; textFile = spark.textFile(<span class="string">"hdfs://..."</span>);<br />
JavaRDD&lt;String&gt; words = textFile.<span class="sparkop">flatMap</span>(<span class="closure">new FlatMapFunction&lt;String, String&gt;() {<br />
public Iterable&lt;String&gt; call(String s) { return Arrays.asList(s.split(" ")); }<br />
}</span>);<br />
JavaPairRDD&lt;String, Integer&gt; pairs = words.<span class="sparkop">mapToPair</span>(<span class="closure">new PairFunction&lt;String, String, Integer&gt;() {<br />
public Tuple2&lt;String, Integer&gt; call(String s) { return new Tuple2&lt;String, Integer&gt;(s, 1); }<br />
}</span>);<br />
JavaPairRDD&lt;String, Integer&gt; counts = pairs.<span class="sparkop">reduceByKey</span>(<span class="closure">new Function2&lt;Integer, Integer, Integer&gt;() {<br />
public Integer call(Integer a, Integer b) { return a + b; }<br />
}</span>);<br />
counts.<span class="sparkop">saveAsTextFile</span>(<span class="string">"hdfs://..."</span>);
</div>
</div>
</div>
<h3>Estimating Pi</h3>
<p>Spark can also be used for compute-intensive tasks. This code estimates <span style="font-family: serif; font-size: 120%;">π</span> by "throwing darts" at a circle. We pick random points in the unit square ((0, 0) to (1, 1)) and count how many fall inside the unit circle. That fraction should approach <span style="font-family: serif; font-size: 120%;">π / 4</span>, so we multiply it by 4 to get our estimate.</p>
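<p>As a quick sanity check of the math, the same dart-throwing idea can be run locally in plain Python before distributing it; this is an illustrative sketch only, independent of the Spark API:</p>

```python
import random

def estimate_pi(num_samples, seed=42):
    """Monte Carlo estimate: the fraction of random points in the unit
    square that land inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = sum(
        1 for _ in range(num_samples)
        if rng.random() ** 2 + rng.random() ** 2 < 1
    )
    return 4.0 * inside / num_samples

print(estimate_pi(100000))  # roughly 3.14
```

<p>The distributed versions below apply exactly this per-sample test in parallel, then combine the counts on the driver.</p>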
<ul class="nav nav-tabs">
<li class="lang-tab lang-tab-python active"><a href="#">Python</a></li>
<li class="lang-tab lang-tab-scala"><a href="#">Scala</a></li>
<li class="lang-tab lang-tab-java"><a href="#">Java</a></li>
</ul>
<div class="tab-content">
<div class="tab-pane tab-pane-python active">
<div class="code code-tab">
<span class="keyword">def</span> sample(p):<br />
x, y = random(), random()<br />
<span class="keyword">return</span> 1 <span class="keyword">if</span> x*x + y*y &lt; 1 <span class="keyword">else</span> 0<br /><br />
count = spark.parallelize(xrange(0, NUM_SAMPLES)).<span class="sparkop">map</span>(<span class="closure">sample</span>) \<br />
.<span class="sparkop">reduce</span>(<span class="closure">lambda a, b: a + b</span>)<br />
print <span class="string">"Pi is roughly %f"</span> % (4.0 * count / NUM_SAMPLES)<br />
</div>
</div>
<div class="tab-pane tab-pane-scala">
<div class="code code-tab">
<span class="keyword">val</span> count = spark.parallelize(1 to NUM_SAMPLES).<span class="sparkop">map</span>{<span class="closure">i =><br />
val x = Math.random()<br />
val y = Math.random()<br />
if (x*x + y*y &lt; 1) 1 else 0<br />
</span>}.<span class="sparkop">reduce</span>(<span class="closure">_ + _</span>)<br />
println(<span class="string">"Pi is roughly "</span> + 4.0 * count / NUM_SAMPLES)<br />
</div>
</div>
<div class="tab-pane tab-pane-java">
<div class="code code-tab">
<span class="keyword">long</span> count = spark.parallelize(makeRange(1, NUM_SAMPLES)).<span class="sparkop">filter</span>(<span class="closure">new Function&lt;Integer, Boolean&gt;() {<br />
public Boolean call(Integer i) {<br />
double x = Math.random();<br />
double y = Math.random();<br />
return x*x + y*y &lt; 1;<br />
}<br />
}</span>).<span class="sparkop">count</span>();<br />
System.out.println(<span class="string">"Pi is roughly "</span> + 4.0 * count / NUM_SAMPLES);<br />
</div>
</div>
</div>
<h3>Logistic Regression</h3>
<p>Logistic regression is an iterative machine learning algorithm that seeks the hyperplane best separating two sets of points in a multi-dimensional feature space; it can be used, for example, to classify messages as spam or not spam. Because the algorithm repeatedly applies the same map and reduce steps to the same dataset, it benefits greatly from caching the input in RAM across iterations.</p>
<ul class="nav nav-tabs">
<li class="lang-tab lang-tab-python active"><a href="#">Python</a></li>
<li class="lang-tab lang-tab-scala"><a href="#">Scala</a></li>
<li class="lang-tab lang-tab-java"><a href="#">Java</a></li>
</ul>
<div class="tab-content">
<div class="tab-pane tab-pane-python active">
<div class="code code-tab">
points = spark.textFile(...).<span class="sparkop">map</span>(parsePoint).<span class="sparkop">cache</span>()<br />
w = numpy.random.ranf(size = D) <span class="comment"># current separating plane</span><br />
<span class="keyword">for</span> i <span class="keyword">in</span> range(ITERATIONS):<br />
gradient = points.<span class="sparkop">map</span>(<span class="closure"><br />
lambda p: (1 / (1 + exp(-p.y*(w.dot(p.x)))) - 1) * p.y * p.x<br />
</span>).<span class="sparkop">reduce</span>(<span class="closure">lambda a, b: a + b</span>)<br />
w -= gradient<br />
print <span class="string">"Final separating plane: %s"</span> % w<br />
</div>
</div>
<div class="tab-pane tab-pane-scala">
<div class="code code-tab">
<span class="keyword">val</span> points = spark.textFile(...).<span class="sparkop">map</span>(parsePoint).<span class="sparkop">cache</span>()<br />
<span class="keyword">var</span> w = Vector.random(D) <span class="comment">// current separating plane</span><br />
<span class="keyword">for</span> (i &lt;- 1 to ITERATIONS) {<br />
<span class="keyword">val</span> gradient = points.<span class="sparkop">map</span>(<span class="closure">p =><br />
(1 / (1 + exp(-p.y*(w dot p.x))) - 1) * p.y * p.x<br />
</span>).<span class="sparkop">reduce</span>(<span class="closure">_ + _</span>)<br />
w -= gradient<br />
}<br />
println(<span class="string">"Final separating plane: "</span> + w)<br />
</div>
</div>
<div class="tab-pane tab-pane-java">
<div class="code code-tab">
<span class="keyword">class</span> ComputeGradient <span class="keyword">extends</span> Function&lt;DataPoint, Vector&gt; {<br />
<span class="keyword">private</span> Vector w;<br />
ComputeGradient(Vector w) { <span class="keyword">this</span>.w = w; }<br />
<span class="keyword">public</span> Vector call(DataPoint p) {<br />
<span class="keyword">return</span> p.x.times(p.y * (1 / (1 + Math.exp(-p.y * w.dot(p.x))) - 1));<br />
}<br />
}<br />
<br />
JavaRDD&lt;DataPoint&gt; points = spark.textFile(...).<span class="sparkop">map</span>(<span class="closure">new ParsePoint()</span>).<span class="sparkop">cache</span>();<br />
Vector w = Vector.random(D); <span class="comment">// current separating plane</span><br />
<span class="keyword">for</span> (<span class="keyword">int</span> i = 0; i &lt; ITERATIONS; i++) {<br />
Vector gradient = points.<span class="sparkop">map</span>(<span class="closure">new ComputeGradient(w)</span>).<span class="sparkop">reduce</span>(<span class="closure">new AddVectors()</span>);<br />
w = w.subtract(gradient);<br />
}<br />
System.out.println(<span class="string">"Final separating plane: "</span> + w);<br />
</div>
</div>
</div>
<p>Note that the current separating plane, <code>w</code>, gets shipped automatically to the cluster with every <code>map</code> call.</p>
<p>The graph below compares the running time per iteration of this Spark program against a Hadoop implementation on 100 GB of data on a 100-node cluster, showing the benefit of in-memory caching:</p>
<p style="margin-top: 20px; margin-bottom: 30px;">
<img src="/images/logistic-regression.png" alt="Logistic regression performance in Spark vs Hadoop" />
</p>
<p><a name="additional"></a></p>
<h2>Additional Examples</h2>
<p>Many additional examples are distributed with Spark:</p>
<ul>
<li>Basic Spark: <a href="https://github.com/apache/spark/tree/master/examples/src/main/scala/org/apache/spark/examples">Scala examples</a>, <a href="https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples">Java examples</a>, <a href="https://github.com/apache/spark/tree/master/examples/src/main/python">Python examples</a></li>
<li>Spark Streaming: <a href="https://github.com/apache/spark/tree/master/examples/src/main/scala/org/apache/spark/examples/streaming">Scala examples</a>, <a href="https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples/streaming">Java examples</a></li>
</ul>
</div>
</div>
<footer class="small">
<hr>
Apache Spark, Spark, Apache, and the Spark logo are trademarks of
<a href="http://www.apache.org">The Apache Software Foundation</a>.
</footer>
</div>
</body>
</html>