path: root/site/docs/1.5.0/mllib-clustering.html
author    Reynold Xin <rxin@apache.org>    2015-09-17 22:07:42 +0000
committer    Reynold Xin <rxin@apache.org>    2015-09-17 22:07:42 +0000
commit    ee9ffe89d608e7640a2487406b618d27e58026d6 (patch)
tree    50ec819abb41a9a769d7f64eed1f0ab2084aa6ff /site/docs/1.5.0/mllib-clustering.html
parent    c7104724b279f09486ea62f4a24252e8d06f5c96 (diff)
delete 1.5.0
Diffstat (limited to 'site/docs/1.5.0/mllib-clustering.html')
-rw-r--r--    site/docs/1.5.0/mllib-clustering.html    976
1 file changed, 0 insertions, 976 deletions
diff --git a/site/docs/1.5.0/mllib-clustering.html b/site/docs/1.5.0/mllib-clustering.html
deleted file mode 100644
index 3fcbff84d..000000000
--- a/site/docs/1.5.0/mllib-clustering.html
+++ /dev/null
@@ -1,976 +0,0 @@
-<!DOCTYPE html>
-<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
-<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
-<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
-<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
- <head>
- <meta charset="utf-8">
- <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
- <title>Clustering - MLlib - Spark 1.5.0 Documentation</title>
-
-
-
-
- <link rel="stylesheet" href="css/bootstrap.min.css">
- <style>
- body {
- padding-top: 60px;
- padding-bottom: 40px;
- }
- </style>
- <meta name="viewport" content="width=device-width">
- <link rel="stylesheet" href="css/bootstrap-responsive.min.css">
- <link rel="stylesheet" href="css/main.css">
-
- <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
-
- <link rel="stylesheet" href="css/pygments-default.css">
-
-
- <!-- Google analytics script -->
- <script type="text/javascript">
- var _gaq = _gaq || [];
- _gaq.push(['_setAccount', 'UA-32518208-2']);
- _gaq.push(['_trackPageview']);
-
- (function() {
- var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
- ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
- var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
- })();
- </script>
-
-
- </head>
- <body>
- <!--[if lt IE 7]>
- <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
- <![endif]-->
-
- <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
-
- <div class="navbar navbar-fixed-top" id="topbar">
- <div class="navbar-inner">
- <div class="container">
- <div class="brand"><a href="index.html">
- <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">1.5.0</span>
- </div>
- <ul class="nav">
- <!--TODO(andyk): Add class="active" attribute to li some how.-->
- <li><a href="index.html">Overview</a></li>
-
- <li class="dropdown">
- <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
- <ul class="dropdown-menu">
- <li><a href="quick-start.html">Quick Start</a></li>
- <li><a href="programming-guide.html">Spark Programming Guide</a></li>
- <li class="divider"></li>
- <li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
- <li><a href="sql-programming-guide.html">DataFrames and SQL</a></li>
- <li><a href="mllib-guide.html">MLlib (Machine Learning)</a></li>
- <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
- <li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
- <li><a href="sparkr.html">SparkR (R on Spark)</a></li>
- </ul>
- </li>
-
- <li class="dropdown">
- <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
- <ul class="dropdown-menu">
- <li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li>
- <li><a href="api/java/index.html">Java</a></li>
- <li><a href="api/python/index.html">Python</a></li>
- <li><a href="api/R/index.html">R</a></li>
- </ul>
- </li>
-
- <li class="dropdown">
- <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
- <ul class="dropdown-menu">
- <li><a href="cluster-overview.html">Overview</a></li>
- <li><a href="submitting-applications.html">Submitting Applications</a></li>
- <li class="divider"></li>
- <li><a href="spark-standalone.html">Spark Standalone</a></li>
- <li><a href="running-on-mesos.html">Mesos</a></li>
- <li><a href="running-on-yarn.html">YARN</a></li>
- <li class="divider"></li>
- <li><a href="ec2-scripts.html">Amazon EC2</a></li>
- </ul>
- </li>
-
- <li class="dropdown">
- <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
- <ul class="dropdown-menu">
- <li><a href="configuration.html">Configuration</a></li>
- <li><a href="monitoring.html">Monitoring</a></li>
- <li><a href="tuning.html">Tuning Guide</a></li>
- <li><a href="job-scheduling.html">Job Scheduling</a></li>
- <li><a href="security.html">Security</a></li>
- <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
- <li><a href="hadoop-third-party-distributions.html">3<sup>rd</sup>-Party Hadoop Distros</a></li>
- <li class="divider"></li>
- <li><a href="building-spark.html">Building Spark</a></li>
- <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
- <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Supplemental+Spark+Projects">Supplemental Projects</a></li>
- </ul>
- </li>
- </ul>
- <!--<p class="navbar-text pull-right"><span class="version-text">v1.5.0</span></p>-->
- </div>
- </div>
- </div>
-
- <div class="container" id="content">
-
- <h1 class="title"><a href="mllib-guide.html">MLlib</a> - Clustering</h1>
-
-
- <p>Clustering is an unsupervised learning problem whereby we aim to group subsets
-of entities with one another based on some notion of similarity. Clustering is
-often used for exploratory analysis and/or as a component of a hierarchical
-supervised learning pipeline (in which distinct classifiers or regression
-models are trained for each cluster).</p>
-
-<p>MLlib supports the following models:</p>
-
-<ul id="markdown-toc">
- <li><a href="#k-means">K-means</a></li>
- <li><a href="#gaussian-mixture">Gaussian mixture</a></li>
- <li><a href="#power-iteration-clustering-pic">Power iteration clustering (PIC)</a></li>
- <li><a href="#latent-dirichlet-allocation-lda">Latent Dirichlet allocation (LDA)</a></li>
- <li><a href="#streaming-k-means">Streaming k-means</a></li>
-</ul>
-
-<h2 id="k-means">K-means</h2>
-
-<p><a href="http://en.wikipedia.org/wiki/K-means_clustering">k-means</a> is one of the
-most commonly used clustering algorithms; it groups the data points into a
-predefined number of clusters. The MLlib implementation includes a parallelized
-variant of the <a href="http://en.wikipedia.org/wiki/K-means%2B%2B">k-means++</a> method
-called <a href="http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf">k-means||</a>.
-The implementation in MLlib has the following parameters:</p>
-
-<ul>
- <li><em>k</em> is the number of desired clusters.</li>
- <li><em>maxIterations</em> is the maximum number of iterations to run.</li>
- <li><em>initializationMode</em> specifies either random initialization or
-initialization via k-means||.</li>
- <li><em>runs</em> is the number of times to run the k-means algorithm (k-means is not
-guaranteed to find a globally optimal solution, and when run multiple times on
-a given dataset, the algorithm returns the best clustering result).</li>
- <li><em>initializationSteps</em> determines the number of steps in the k-means|| algorithm.</li>
- <li><em>epsilon</em> determines the distance threshold within which we consider k-means to have converged.</li>
- <li><em>initialModel</em> is an optional set of cluster centers used for initialization. If this parameter is supplied, only one run is performed.</li>
-</ul>
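<p>To make the roles of <em>k</em>, <em>maxIterations</em>, <em>initializationMode</em>, and <em>epsilon</em> concrete, here is a minimal pure-Python sketch of Lloyd's algorithm on toy 1-D data. This is an illustration only, not MLlib's implementation (MLlib parallelizes the assignment step and defaults to k-means|| initialization); the dataset and parameter values are made up.</p>

```python
import random

def kmeans(points, k, max_iterations=20, epsilon=1e-4, seed=0):
    """Toy Lloyd's algorithm on 1-D points (illustration only)."""
    rng = random.Random(seed)
    # "random" initializationMode: pick k of the input points as starting centers.
    centers = rng.sample(points, k)
    for _ in range(max_iterations):  # maxIterations caps the loop
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        # epsilon: declare convergence once no center moves beyond the threshold.
        converged = all(abs(a - b) <= epsilon for a, b in zip(centers, new_centers))
        centers = new_centers
        if converged:
            break
    return sorted(centers)

points = [0.0, 0.1, 0.2, 9.0, 9.1, 9.2]
print(kmeans(points, k=2))  # two centers, one near 0.1 and one near 9.1
```

<p>Because random initialization can land both starting centers in the same group, k-means may return a locally optimal clustering; this is why MLlib exposes <em>runs</em> (keep the best of several runs) and defaults to k-means|| initialization, which spreads the initial centers out.</p>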
-
-<p><strong>Examples</strong></p>
-
-<div class="codetabs">
-<div data-lang="scala">
- <p>The following code snippets can be executed in <code>spark-shell</code>.</p>
-
- <p>In the following example, after loading and parsing data, we use the
-<a href="api/scala/index.html#org.apache.spark.mllib.clustering.KMeans"><code>KMeans</code></a> object to cluster the data
-into two clusters. The number of desired clusters is passed to the algorithm. We then compute the
-Within Set Sum of Squared Errors (WSSSE). You can reduce this error measure by increasing <em>k</em>. In fact, the
-optimal <em>k</em> is usually one where there is an &#8220;elbow&#8221; in the WSSSE graph.</p>
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">import</span> <span class="nn">org.apache.spark.mllib.clustering.</span><span class="o">{</span><span class="nc">KMeans</span><span class="o">,</span> <span class="nc">KMeansModel</span><span class="o">}</span>
-<span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
-
-<span class="c1">// Load and parse the data</span>
-<span class="k">val</span> <span class="n">data</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;data/mllib/kmeans_data.txt&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">parsedData</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span><span class="o">(</span><span class="n">s</span> <span class="k">=&gt;</span> <span class="nc">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="o">(</span><span class="n">s</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="sc">&#39; &#39;</span><span class="o">).</span><span class="n">map</span><span class="o">(</span><span class="k">_</span><span class="o">.</span><span class="n">toDouble</span><span class="o">))).</span><span class="n">cache</span><span class="o">()</span>
-
-<span class="c1">// Cluster the data into two classes using KMeans</span>
-<span class="k">val</span> <span class="n">numClusters</span> <span class="k">=</span> <span class="mi">2</span>
-<span class="k">val</span> <span class="n">numIterations</span> <span class="k">=</span> <span class="mi">20</span>
-<span class="k">val</span> <span class="n">clusters</span> <span class="k">=</span> <span class="nc">KMeans</span><span class="o">.</span><span class="n">train</span><span class="o">(</span><span class="n">parsedData</span><span class="o">,</span> <span class="n">numClusters</span><span class="o">,</span> <span class="n">numIterations</span><span class="o">)</span>
-
-<span class="c1">// Evaluate clustering by computing Within Set Sum of Squared Errors</span>
-<span class="k">val</span> <span class="nc">WSSSE</span> <span class="k">=</span> <span class="n">clusters</span><span class="o">.</span><span class="n">computeCost</span><span class="o">(</span><span class="n">parsedData</span><span class="o">)</span>
-<span class="n">println</span><span class="o">(</span><span class="s">&quot;Within Set Sum of Squared Errors = &quot;</span> <span class="o">+</span> <span class="nc">WSSSE</span><span class="o">)</span>
-
-<span class="c1">// Save and load model</span>
-<span class="n">clusters</span><span class="o">.</span><span class="n">save</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">sameModel</span> <span class="k">=</span> <span class="nc">KMeansModel</span><span class="o">.</span><span class="n">load</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">)</span></code></pre></div>
-
- </div>
-
-<div data-lang="java">
- <p>All of MLlib&#8217;s methods use Java-friendly types, so you can import and call them from Java the same
-way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
-Spark Java API uses a separate <code>JavaRDD</code> class. You can convert a Java RDD to a Scala one by
-calling <code>.rdd()</code> on your <code>JavaRDD</code> object. A self-contained application example
-that is equivalent to the provided Scala example is given below:</p>
-
- <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.spark.api.java.*</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.function.Function</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.KMeans</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.KMeansModel</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vector</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.SparkConf</span><span class="o">;</span>
-
-<span class="kd">public</span> <span class="kd">class</span> <span class="nc">KMeansExample</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">SparkConf</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SparkConf</span><span class="o">().</span><span class="na">setAppName</span><span class="o">(</span><span class="s">&quot;K-means Example&quot;</span><span class="o">);</span>
- <span class="n">JavaSparkContext</span> <span class="n">sc</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">JavaSparkContext</span><span class="o">(</span><span class="n">conf</span><span class="o">);</span>
-
- <span class="c1">// Load and parse data</span>
- <span class="n">String</span> <span class="n">path</span> <span class="o">=</span> <span class="s">&quot;data/mllib/kmeans_data.txt&quot;</span><span class="o">;</span>
- <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="n">path</span><span class="o">);</span>
- <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Vector</span><span class="o">&gt;</span> <span class="n">parsedData</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
- <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;()</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="n">Vector</span> <span class="nf">call</span><span class="o">(</span><span class="n">String</span> <span class="n">s</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">String</span><span class="o">[]</span> <span class="n">sarray</span> <span class="o">=</span> <span class="n">s</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">);</span>
- <span class="kt">double</span><span class="o">[]</span> <span class="n">values</span> <span class="o">=</span> <span class="k">new</span> <span class="kt">double</span><span class="o">[</span><span class="n">sarray</span><span class="o">.</span><span class="na">length</span><span class="o">];</span>
- <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">sarray</span><span class="o">.</span><span class="na">length</span><span class="o">;</span> <span class="n">i</span><span class="o">++)</span>
- <span class="n">values</span><span class="o">[</span><span class="n">i</span><span class="o">]</span> <span class="o">=</span> <span class="n">Double</span><span class="o">.</span><span class="na">parseDouble</span><span class="o">(</span><span class="n">sarray</span><span class="o">[</span><span class="n">i</span><span class="o">]);</span>
- <span class="k">return</span> <span class="n">Vectors</span><span class="o">.</span><span class="na">dense</span><span class="o">(</span><span class="n">values</span><span class="o">);</span>
- <span class="o">}</span>
- <span class="o">}</span>
- <span class="o">);</span>
- <span class="n">parsedData</span><span class="o">.</span><span class="na">cache</span><span class="o">();</span>
-
- <span class="c1">// Cluster the data into two classes using KMeans</span>
- <span class="kt">int</span> <span class="n">numClusters</span> <span class="o">=</span> <span class="mi">2</span><span class="o">;</span>
- <span class="kt">int</span> <span class="n">numIterations</span> <span class="o">=</span> <span class="mi">20</span><span class="o">;</span>
- <span class="n">KMeansModel</span> <span class="n">clusters</span> <span class="o">=</span> <span class="n">KMeans</span><span class="o">.</span><span class="na">train</span><span class="o">(</span><span class="n">parsedData</span><span class="o">.</span><span class="na">rdd</span><span class="o">(),</span> <span class="n">numClusters</span><span class="o">,</span> <span class="n">numIterations</span><span class="o">);</span>
-
- <span class="c1">// Evaluate clustering by computing Within Set Sum of Squared Errors</span>
- <span class="kt">double</span> <span class="n">WSSSE</span> <span class="o">=</span> <span class="n">clusters</span><span class="o">.</span><span class="na">computeCost</span><span class="o">(</span><span class="n">parsedData</span><span class="o">.</span><span class="na">rdd</span><span class="o">());</span>
- <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Within Set Sum of Squared Errors = &quot;</span> <span class="o">+</span> <span class="n">WSSSE</span><span class="o">);</span>
-
- <span class="c1">// Save and load model</span>
- <span class="n">clusters</span><span class="o">.</span><span class="na">save</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">);</span>
- <span class="n">KMeansModel</span> <span class="n">sameModel</span> <span class="o">=</span> <span class="n">KMeansModel</span><span class="o">.</span><span class="na">load</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">);</span>
- <span class="o">}</span>
-<span class="o">}</span></code></pre></div>
-
- </div>
-
-<div data-lang="python">
- <p>The following examples can be tested in the PySpark shell.</p>
-
- <p>In the following example, after loading and parsing data, we use the KMeans object to cluster the
-data into two clusters. The number of desired clusters is passed to the algorithm. We then compute the
-Within Set Sum of Squared Errors (WSSSE). You can reduce this error measure by increasing <em>k</em>. In
-fact, the optimal <em>k</em> is usually one where there is an &#8220;elbow&#8221; in the WSSSE graph.</p>
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">from</span> <span class="nn">pyspark.mllib.clustering</span> <span class="kn">import</span> <span class="n">KMeans</span><span class="p">,</span> <span class="n">KMeansModel</span>
-<span class="kn">from</span> <span class="nn">numpy</span> <span class="kn">import</span> <span class="n">array</span>
-<span class="kn">from</span> <span class="nn">math</span> <span class="kn">import</span> <span class="n">sqrt</span>
-
-<span class="c"># Load and parse the data</span>
-<span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;data/mllib/kmeans_data.txt&quot;</span><span class="p">)</span>
-<span class="n">parsedData</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">line</span><span class="p">:</span> <span class="n">array</span><span class="p">([</span><span class="nb">float</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">&#39; &#39;</span><span class="p">)]))</span>
-
-<span class="c"># Build the model (cluster the data)</span>
-<span class="n">clusters</span> <span class="o">=</span> <span class="n">KMeans</span><span class="o">.</span><span class="n">train</span><span class="p">(</span><span class="n">parsedData</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="n">maxIterations</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span>
- <span class="n">runs</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span> <span class="n">initializationMode</span><span class="o">=</span><span class="s">&quot;random&quot;</span><span class="p">)</span>
-
-<span class="c"># Evaluate clustering by computing Within Set Sum of Squared Errors</span>
-<span class="k">def</span> <span class="nf">error</span><span class="p">(</span><span class="n">point</span><span class="p">):</span>
- <span class="n">center</span> <span class="o">=</span> <span class="n">clusters</span><span class="o">.</span><span class="n">centers</span><span class="p">[</span><span class="n">clusters</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">point</span><span class="p">)]</span>
- <span class="k">return</span> <span class="n">sqrt</span><span class="p">(</span><span class="nb">sum</span><span class="p">([</span><span class="n">x</span><span class="o">**</span><span class="mi">2</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="p">(</span><span class="n">point</span> <span class="o">-</span> <span class="n">center</span><span class="p">)]))</span>
-
-<span class="n">WSSSE</span> <span class="o">=</span> <span class="n">parsedData</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">point</span><span class="p">:</span> <span class="n">error</span><span class="p">(</span><span class="n">point</span><span class="p">))</span><span class="o">.</span><span class="n">reduce</span><span class="p">(</span><span class="k">lambda</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">:</span> <span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="p">)</span>
-<span class="k">print</span><span class="p">(</span><span class="s">&quot;Within Set Sum of Squared Error = &quot;</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">WSSSE</span><span class="p">))</span>
-
-<span class="c"># Save and load model</span>
-<span class="n">clusters</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">sc</span><span class="p">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="p">)</span>
-<span class="n">sameModel</span> <span class="o">=</span> <span class="n">KMeansModel</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">sc</span><span class="p">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="p">)</span></code></pre></div>
-
- </div>
-
-</div>
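<p>The &#8220;elbow&#8221; heuristic mentioned above can be seen numerically even without Spark. The pure-Python sketch below (illustrative only; the toy dataset and hand-picked centers are made up) computes WSSSE for k = 1, 2, 3: the cost collapses once k reaches the true number of groups and only improves marginally afterwards.</p>

```python
def wssse(points, centers):
    """Within Set Sum of Squared Errors: each point's squared distance to its nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

data = [0.0, 0.1, 0.2, 9.0, 9.1, 9.2]  # two tight groups
# Hand-picked (near-optimal) centers for each k on this tiny dataset.
candidates = {1: [4.6], 2: [0.1, 9.1], 3: [0.05, 0.2, 9.1]}
for k, centers in sorted(candidates.items()):
    print(k, wssse(data, centers))
# WSSSE drops sharply at k = 2 (the true number of groups), then barely
# improves at k = 3: that sharp corner in the curve is the "elbow".
```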
-
-<h2 id="gaussian-mixture">Gaussian mixture</h2>
-
-<p>A <a href="http://en.wikipedia.org/wiki/Mixture_model#Multivariate_Gaussian_mixture_model">Gaussian Mixture Model</a>
-represents a composite distribution whereby points are drawn from one of <em>k</em> Gaussian sub-distributions,
-each with its own probability. The MLlib implementation uses the
-<a href="http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm">expectation-maximization</a>
- algorithm to induce the maximum-likelihood model given a set of samples. The implementation
-has the following parameters:</p>
-
-<ul>
- <li><em>k</em> is the number of desired clusters.</li>
- <li><em>convergenceTol</em> is the maximum change in log-likelihood at which we consider convergence achieved.</li>
- <li><em>maxIterations</em> is the maximum number of iterations to perform without reaching convergence.</li>
- <li><em>initialModel</em> is an optional starting point from which to start the EM algorithm. If this parameter is omitted, a random starting point will be constructed from the data.</li>
-</ul>
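<p>The interplay of <em>convergenceTol</em> and <em>maxIterations</em> can be illustrated with a toy pure-Python EM loop for a two-component 1-D mixture. This is a didactic sketch only, not MLlib's implementation (which handles multivariate Gaussians and distributes the E-step); the data and the starting point are made up.</p>

```python
import math

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(xs, max_iterations=100, convergence_tol=1e-3):
    """Toy EM for a two-component 1-D Gaussian mixture (illustration only)."""
    # Crude starting point; MLlib constructs one from the data (or uses initialModel).
    w, mu, var = [0.5, 0.5], [min(xs), max(xs)], [1.0, 1.0]
    prev_ll = -float("inf")
    for _ in range(max_iterations):  # maxIterations caps the number of EM steps
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [w[j] * gaussian_pdf(x, mu[j], var[j]) for j in range(2)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances from responsibilities.
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-6)
        # convergenceTol: stop once the log-likelihood stops improving.
        ll = sum(math.log(sum(w[j] * gaussian_pdf(x, mu[j], var[j]) for j in range(2)))
                 for x in xs)
        if abs(ll - prev_ll) < convergence_tol:
            break
        prev_ll = ll
    return w, mu, var

w, mu, var = em_gmm_1d([-5.2, -4.9, -5.1, 4.8, 5.1, 5.3])
print(w, mu, var)  # weights near 0.5/0.5, means near the two group means
```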
-
-<p><strong>Examples</strong></p>
-
-<div class="codetabs">
-<div data-lang="scala">
- <p>In the following example, after loading and parsing data, we use a
-<a href="api/scala/index.html#org.apache.spark.mllib.clustering.GaussianMixture">GaussianMixture</a>
-object to cluster the data into two clusters. The number of desired clusters is passed
-to the algorithm. We then output the parameters of the mixture model.</p>
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">import</span> <span class="nn">org.apache.spark.mllib.clustering.GaussianMixture</span>
-<span class="k">import</span> <span class="nn">org.apache.spark.mllib.clustering.GaussianMixtureModel</span>
-<span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
-
-<span class="c1">// Load and parse the data</span>
-<span class="k">val</span> <span class="n">data</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;data/mllib/gmm_data.txt&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">parsedData</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span><span class="o">(</span><span class="n">s</span> <span class="k">=&gt;</span> <span class="nc">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="o">(</span><span class="n">s</span><span class="o">.</span><span class="n">trim</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="sc">&#39; &#39;</span><span class="o">).</span><span class="n">map</span><span class="o">(</span><span class="k">_</span><span class="o">.</span><span class="n">toDouble</span><span class="o">))).</span><span class="n">cache</span><span class="o">()</span>
-
-<span class="c1">// Cluster the data into two classes using GaussianMixture</span>
-<span class="k">val</span> <span class="n">gmm</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">GaussianMixture</span><span class="o">().</span><span class="n">setK</span><span class="o">(</span><span class="mi">2</span><span class="o">).</span><span class="n">run</span><span class="o">(</span><span class="n">parsedData</span><span class="o">)</span>
-
-<span class="c1">// Save and load model</span>
-<span class="n">gmm</span><span class="o">.</span><span class="n">save</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myGMMModel&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">sameModel</span> <span class="k">=</span> <span class="nc">GaussianMixtureModel</span><span class="o">.</span><span class="n">load</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myGMMModel&quot;</span><span class="o">)</span>
-
-<span class="c1">// output parameters of max-likelihood model</span>
-<span class="k">for</span> <span class="o">(</span><span class="n">i</span> <span class="k">&lt;-</span> <span class="mi">0</span> <span class="n">until</span> <span class="n">gmm</span><span class="o">.</span><span class="n">k</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">println</span><span class="o">(</span><span class="s">&quot;weight=%f\nmu=%s\nsigma=\n%s\n&quot;</span> <span class="n">format</span>
- <span class="o">(</span><span class="n">gmm</span><span class="o">.</span><span class="n">weights</span><span class="o">(</span><span class="n">i</span><span class="o">),</span> <span class="n">gmm</span><span class="o">.</span><span class="n">gaussians</span><span class="o">(</span><span class="n">i</span><span class="o">).</span><span class="n">mu</span><span class="o">,</span> <span class="n">gmm</span><span class="o">.</span><span class="n">gaussians</span><span class="o">(</span><span class="n">i</span><span class="o">).</span><span class="n">sigma</span><span class="o">))</span>
-<span class="o">}</span></code></pre></div>
-
- </div>
-
-<div data-lang="java">
- <p>All of MLlib&#8217;s methods use Java-friendly types, so you can import and call them from Java the same
-way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
-Spark Java API uses a separate <code>JavaRDD</code> class. You can convert a Java RDD to a Scala one by
-calling <code>.rdd()</code> on your <code>JavaRDD</code> object. A self-contained application example
-that is equivalent to the provided Scala example is given below:</p>
-
- <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.spark.api.java.*</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.function.Function</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.GaussianMixture</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.GaussianMixtureModel</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vector</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.SparkConf</span><span class="o">;</span>
-
-<span class="kd">public</span> <span class="kd">class</span> <span class="nc">GaussianMixtureExample</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">SparkConf</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SparkConf</span><span class="o">().</span><span class="na">setAppName</span><span class="o">(</span><span class="s">&quot;GaussianMixture Example&quot;</span><span class="o">);</span>
- <span class="n">JavaSparkContext</span> <span class="n">sc</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">JavaSparkContext</span><span class="o">(</span><span class="n">conf</span><span class="o">);</span>
-
- <span class="c1">// Load and parse data</span>
- <span class="n">String</span> <span class="n">path</span> <span class="o">=</span> <span class="s">&quot;data/mllib/gmm_data.txt&quot;</span><span class="o">;</span>
- <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="n">path</span><span class="o">);</span>
- <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Vector</span><span class="o">&gt;</span> <span class="n">parsedData</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
- <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;()</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="n">Vector</span> <span class="nf">call</span><span class="o">(</span><span class="n">String</span> <span class="n">s</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">String</span><span class="o">[]</span> <span class="n">sarray</span> <span class="o">=</span> <span class="n">s</span><span class="o">.</span><span class="na">trim</span><span class="o">().</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">);</span>
- <span class="kt">double</span><span class="o">[]</span> <span class="n">values</span> <span class="o">=</span> <span class="k">new</span> <span class="kt">double</span><span class="o">[</span><span class="n">sarray</span><span class="o">.</span><span class="na">length</span><span class="o">];</span>
- <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">sarray</span><span class="o">.</span><span class="na">length</span><span class="o">;</span> <span class="n">i</span><span class="o">++)</span>
- <span class="n">values</span><span class="o">[</span><span class="n">i</span><span class="o">]</span> <span class="o">=</span> <span class="n">Double</span><span class="o">.</span><span class="na">parseDouble</span><span class="o">(</span><span class="n">sarray</span><span class="o">[</span><span class="n">i</span><span class="o">]);</span>
- <span class="k">return</span> <span class="n">Vectors</span><span class="o">.</span><span class="na">dense</span><span class="o">(</span><span class="n">values</span><span class="o">);</span>
- <span class="o">}</span>
- <span class="o">}</span>
- <span class="o">);</span>
- <span class="n">parsedData</span><span class="o">.</span><span class="na">cache</span><span class="o">();</span>
-
- <span class="c1">// Cluster the data into two classes using GaussianMixture</span>
- <span class="n">GaussianMixtureModel</span> <span class="n">gmm</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">GaussianMixture</span><span class="o">().</span><span class="na">setK</span><span class="o">(</span><span class="mi">2</span><span class="o">).</span><span class="na">run</span><span class="o">(</span><span class="n">parsedData</span><span class="o">.</span><span class="na">rdd</span><span class="o">());</span>
-
- <span class="c1">// Save and load GaussianMixtureModel</span>
- <span class="n">gmm</span><span class="o">.</span><span class="na">save</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myGMMModel&quot;</span><span class="o">);</span>
- <span class="n">GaussianMixtureModel</span> <span class="n">sameModel</span> <span class="o">=</span> <span class="n">GaussianMixtureModel</span><span class="o">.</span><span class="na">load</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myGMMModel&quot;</span><span class="o">);</span>
- <span class="c1">// Output the parameters of the mixture model</span>
- <span class="k">for</span><span class="o">(</span><span class="kt">int</span> <span class="n">j</span><span class="o">=</span><span class="mi">0</span><span class="o">;</span> <span class="n">j</span><span class="o">&lt;</span><span class="n">gmm</span><span class="o">.</span><span class="na">k</span><span class="o">();</span> <span class="n">j</span><span class="o">++)</span> <span class="o">{</span>
- <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">printf</span><span class="o">(</span><span class="s">&quot;weight=%f\nmu=%s\nsigma=\n%s\n&quot;</span><span class="o">,</span>
- <span class="n">gmm</span><span class="o">.</span><span class="na">weights</span><span class="o">()[</span><span class="n">j</span><span class="o">],</span> <span class="n">gmm</span><span class="o">.</span><span class="na">gaussians</span><span class="o">()[</span><span class="n">j</span><span class="o">].</span><span class="na">mu</span><span class="o">(),</span> <span class="n">gmm</span><span class="o">.</span><span class="na">gaussians</span><span class="o">()[</span><span class="n">j</span><span class="o">].</span><span class="na">sigma</span><span class="o">());</span>
- <span class="o">}</span>
- <span class="o">}</span>
-<span class="o">}</span></code></pre></div>
-
- </div>
-
-<div data-lang="python">
-  <p>In the following example, after loading and parsing data, we use a
-<a href="api/python/pyspark.mllib.html#pyspark.mllib.clustering.GaussianMixture">GaussianMixture</a>
-object to cluster the data into two clusters. The number of desired clusters is passed
-to the algorithm. We then output the parameters of the mixture model.</p>
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">from</span> <span class="nn">pyspark.mllib.clustering</span> <span class="kn">import</span> <span class="n">GaussianMixture</span>
-<span class="kn">from</span> <span class="nn">numpy</span> <span class="kn">import</span> <span class="n">array</span>
-
-<span class="c"># Load and parse the data</span>
-<span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;data/mllib/gmm_data.txt&quot;</span><span class="p">)</span>
-<span class="n">parsedData</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">line</span><span class="p">:</span> <span class="n">array</span><span class="p">([</span><span class="nb">float</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span class="o">.</span><span class="n">strip</span><span class="p">()</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">&#39; &#39;</span><span class="p">)]))</span>
-
-<span class="c"># Build the model (cluster the data)</span>
-<span class="n">gmm</span> <span class="o">=</span> <span class="n">GaussianMixture</span><span class="o">.</span><span class="n">train</span><span class="p">(</span><span class="n">parsedData</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
-
-<span class="c"># output parameters of model</span>
-<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span>
- <span class="k">print</span> <span class="p">(</span><span class="s">&quot;weight = &quot;</span><span class="p">,</span> <span class="n">gmm</span><span class="o">.</span><span class="n">weights</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="s">&quot;mu = &quot;</span><span class="p">,</span> <span class="n">gmm</span><span class="o">.</span><span class="n">gaussians</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">.</span><span class="n">mu</span><span class="p">,</span>
- <span class="s">&quot;sigma = &quot;</span><span class="p">,</span> <span class="n">gmm</span><span class="o">.</span><span class="n">gaussians</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">.</span><span class="n">sigma</span><span class="o">.</span><span class="n">toArray</span><span class="p">())</span></code></pre></div>
-
- </div>
-
-</div>
-
-<h2 id="power-iteration-clustering-pic">Power iteration clustering (PIC)</h2>
-
-<p>Power iteration clustering (PIC) is a scalable and efficient algorithm for clustering vertices of a
-graph given pairwise similarities as edge properties,
-described in <a href="http://www.icml2010.org/papers/387.pdf">Lin and Cohen, Power Iteration Clustering</a>.
-It computes a pseudo-eigenvector of the normalized affinity matrix of the graph via
-<a href="http://en.wikipedia.org/wiki/Power_iteration">power iteration</a> and uses it to cluster vertices.
-MLlib includes an implementation of PIC using GraphX as its backend.
-It takes an <code>RDD</code> of <code>(srcId, dstId, similarity)</code> tuples and outputs a model with the clustering assignments.
-The similarities must be nonnegative.
-PIC assumes that the similarity measure is symmetric.
-A pair <code>(srcId, dstId)</code>, regardless of ordering, should appear at most once in the input data.
-If a pair is missing from the input, its similarity is treated as zero.
-MLlib&#8217;s PIC implementation takes the following (hyper-)parameters:</p>
-
-<ul>
- <li><code>k</code>: number of clusters</li>
- <li><code>maxIterations</code>: maximum number of power iterations</li>
-  <li><code>initializationMode</code>: initialization mode. This can be either &#8220;random&#8221;, which is the default,
-to use a random vector as vertex properties, or &#8220;degree&#8221; to use normalized sum similarities.</li>
-</ul>
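The core idea described above can be sketched in a few lines of plain Python. This is only a toy illustration of the algorithm, not MLlib's implementation (which runs on GraphX): power-iterate the row-normalized affinity matrix and split vertices by thresholding the resulting pseudo-eigenvector.

```python
# Toy PIC sketch (NOT MLlib's implementation): power iteration on the
# row-normalized affinity matrix, then threshold the pseudo-eigenvector.

def pic_sketch(affinity, num_iter=20):
    n = len(affinity)
    # Row-normalize: W = D^-1 A, where D holds the row sums (degrees).
    degrees = [sum(row) for row in affinity]
    w = [[a / d for a in row] for row, d in zip(affinity, degrees)]
    # "degree" initialization: start from normalized degrees.
    total = sum(degrees)
    v = [d / total for d in degrees]
    for _ in range(num_iter):
        v = [sum(w[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) for x in v)
        v = [x / norm for x in v]
    # For k = 2, split vertices at the mean of the pseudo-eigenvector.
    mean = sum(v) / n
    return [1 if x >= mean else 0 for x in v]

# Two dense blocks, weakly connected to each other.
affinity = [
    [0.0, 1.0, 1.0, 0.01, 0.01],
    [1.0, 0.0, 1.0, 0.01, 0.01],
    [1.0, 1.0, 0.0, 0.01, 0.01],
    [0.01, 0.01, 0.01, 0.0, 1.0],
    [0.01, 0.01, 0.01, 1.0, 0.0],
]
print(pic_sketch(affinity))  # vertices 0-2 and 3-4 land in different clusters
```

The early-stopped iteration is the point: the iterates have not yet converged to the trivial constant vector, so they still carry the block structure of the graph.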
-
-<p><strong>Examples</strong></p>
-
-<p>In the following, we show code snippets to demonstrate how to use PIC in MLlib.</p>
-
-<div class="codetabs">
-<div data-lang="scala">
-
- <p><a href="api/scala/index.html#org.apache.spark.mllib.clustering.PowerIterationClustering"><code>PowerIterationClustering</code></a>
-implements the PIC algorithm.
-It takes an <code>RDD</code> of <code>(srcId: Long, dstId: Long, similarity: Double)</code> tuples representing the
-affinity matrix.
-Calling <code>PowerIterationClustering.run</code> returns a
-<a href="api/scala/index.html#org.apache.spark.mllib.clustering.PowerIterationClusteringModel"><code>PowerIterationClusteringModel</code></a>,
-which contains the computed clustering assignments.</p>
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">import</span> <span class="nn">org.apache.spark.mllib.clustering.</span><span class="o">{</span><span class="nc">PowerIterationClustering</span><span class="o">,</span> <span class="nc">PowerIterationClusteringModel</span><span class="o">}</span>
-<span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
-
-<span class="c1">// Load and parse the data</span>
-<span class="k">val</span> <span class="n">data</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;data/mllib/pic_data.txt&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">similarities</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span> <span class="o">{</span> <span class="n">line</span> <span class="k">=&gt;</span>
- <span class="k">val</span> <span class="n">parts</span> <span class="k">=</span> <span class="n">line</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="sc">&#39; &#39;</span><span class="o">)</span>
- <span class="o">(</span><span class="n">parts</span><span class="o">(</span><span class="mi">0</span><span class="o">).</span><span class="n">toLong</span><span class="o">,</span> <span class="n">parts</span><span class="o">(</span><span class="mi">1</span><span class="o">).</span><span class="n">toLong</span><span class="o">,</span> <span class="n">parts</span><span class="o">(</span><span class="mi">2</span><span class="o">).</span><span class="n">toDouble</span><span class="o">)</span>
-<span class="o">}</span>
-
-<span class="c1">// Cluster the data into two classes using PowerIterationClustering</span>
-<span class="k">val</span> <span class="n">pic</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">PowerIterationClustering</span><span class="o">()</span>
- <span class="o">.</span><span class="n">setK</span><span class="o">(</span><span class="mi">2</span><span class="o">)</span>
- <span class="o">.</span><span class="n">setMaxIterations</span><span class="o">(</span><span class="mi">10</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">model</span> <span class="k">=</span> <span class="n">pic</span><span class="o">.</span><span class="n">run</span><span class="o">(</span><span class="n">similarities</span><span class="o">)</span>
-
-<span class="n">model</span><span class="o">.</span><span class="n">assignments</span><span class="o">.</span><span class="n">foreach</span> <span class="o">{</span> <span class="n">a</span> <span class="k">=&gt;</span>
- <span class="n">println</span><span class="o">(</span><span class="n">s</span><span class="s">&quot;${a.id} -&gt; ${a.cluster}&quot;</span><span class="o">)</span>
-<span class="o">}</span>
-
-<span class="c1">// Save and load model</span>
-<span class="n">model</span><span class="o">.</span><span class="n">save</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">sameModel</span> <span class="k">=</span> <span class="nc">PowerIterationClusteringModel</span><span class="o">.</span><span class="n">load</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">)</span></code></pre></div>
-
- <p>A full example that produces the experiment described in the PIC paper can be found under
-<a href="https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala"><code>examples/</code></a>.</p>
-
- </div>
-
-<div data-lang="java">
-
- <p><a href="api/java/org/apache/spark/mllib/clustering/PowerIterationClustering.html"><code>PowerIterationClustering</code></a>
-implements the PIC algorithm.
-It takes a <code>JavaRDD</code> of <code>(srcId: Long, dstId: Long, similarity: Double)</code> tuples representing the
-affinity matrix.
-Calling <code>PowerIterationClustering.run</code> returns a
-<a href="api/java/org/apache/spark/mllib/clustering/PowerIterationClusteringModel.html"><code>PowerIterationClusteringModel</code></a>
-which contains the computed clustering assignments.</p>
-
- <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">scala.Tuple2</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">scala.Tuple3</span><span class="o">;</span>
-
-<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.JavaRDD</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.function.Function</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.PowerIterationClustering</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.PowerIterationClusteringModel</span><span class="o">;</span>
-
-<span class="c1">// Load and parse the data</span>
-<span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">&quot;data/mllib/pic_data.txt&quot;</span><span class="o">);</span>
-<span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Tuple3</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;&gt;</span> <span class="n">similarities</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
- <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Tuple3</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="n">Tuple3</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="nf">call</span><span class="o">(</span><span class="n">String</span> <span class="n">line</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">String</span><span class="o">[]</span> <span class="n">parts</span> <span class="o">=</span> <span class="n">line</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">);</span>
- <span class="k">return</span> <span class="k">new</span> <span class="n">Tuple3</span><span class="o">&lt;&gt;(</span><span class="k">new</span> <span class="nf">Long</span><span class="o">(</span><span class="n">parts</span><span class="o">[</span><span class="mi">0</span><span class="o">]),</span> <span class="k">new</span> <span class="nf">Long</span><span class="o">(</span><span class="n">parts</span><span class="o">[</span><span class="mi">1</span><span class="o">]),</span> <span class="k">new</span> <span class="nf">Double</span><span class="o">(</span><span class="n">parts</span><span class="o">[</span><span class="mi">2</span><span class="o">]));</span>
- <span class="o">}</span>
- <span class="o">}</span>
-<span class="o">);</span>
-
-<span class="c1">// Cluster the data into two classes using PowerIterationClustering</span>
-<span class="n">PowerIterationClustering</span> <span class="n">pic</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">PowerIterationClustering</span><span class="o">()</span>
- <span class="o">.</span><span class="na">setK</span><span class="o">(</span><span class="mi">2</span><span class="o">)</span>
- <span class="o">.</span><span class="na">setMaxIterations</span><span class="o">(</span><span class="mi">10</span><span class="o">);</span>
-<span class="n">PowerIterationClusteringModel</span> <span class="n">model</span> <span class="o">=</span> <span class="n">pic</span><span class="o">.</span><span class="na">run</span><span class="o">(</span><span class="n">similarities</span><span class="o">);</span>
-
-<span class="k">for</span> <span class="o">(</span><span class="n">PowerIterationClustering</span><span class="o">.</span><span class="na">Assignment</span> <span class="nl">a:</span> <span class="n">model</span><span class="o">.</span><span class="na">assignments</span><span class="o">().</span><span class="na">toJavaRDD</span><span class="o">().</span><span class="na">collect</span><span class="o">())</span> <span class="o">{</span>
- <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">a</span><span class="o">.</span><span class="na">id</span><span class="o">()</span> <span class="o">+</span> <span class="s">&quot; -&gt; &quot;</span> <span class="o">+</span> <span class="n">a</span><span class="o">.</span><span class="na">cluster</span><span class="o">());</span>
-<span class="o">}</span>
-
-<span class="c1">// Save and load model</span>
-<span class="n">model</span><span class="o">.</span><span class="na">save</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">);</span>
-<span class="n">PowerIterationClusteringModel</span> <span class="n">sameModel</span> <span class="o">=</span> <span class="n">PowerIterationClusteringModel</span><span class="o">.</span><span class="na">load</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myModelPath&quot;</span><span class="o">);</span></code></pre></div>
-
- </div>
-
-<div data-lang="python">
-
-  <p><a href="api/python/pyspark.mllib.html#pyspark.mllib.clustering.PowerIterationClustering"><code>PowerIterationClustering</code></a>
-implements the PIC algorithm.
-It takes an <code>RDD</code> of <code>(srcId, dstId, similarity)</code> tuples representing the
-affinity matrix.
-Calling <code>PowerIterationClustering.train</code> returns a
-<a href="api/python/pyspark.mllib.html#pyspark.mllib.clustering.PowerIterationClusteringModel"><code>PowerIterationClusteringModel</code></a>,
-which contains the computed clustering assignments.</p>
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">from</span> <span class="nn">__future__</span> <span class="kn">import</span> <span class="n">print_function</span>
-<span class="kn">from</span> <span class="nn">pyspark.mllib.clustering</span> <span class="kn">import</span> <span class="n">PowerIterationClustering</span><span class="p">,</span> <span class="n">PowerIterationClusteringModel</span>
-
-<span class="c"># Load and parse the data</span>
-<span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;data/mllib/pic_data.txt&quot;</span><span class="p">)</span>
-<span class="n">similarities</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">line</span><span class="p">:</span> <span class="nb">tuple</span><span class="p">([</span><span class="nb">float</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">&#39; &#39;</span><span class="p">)]))</span>
-
-<span class="c"># Cluster the data into two classes using PowerIterationClustering</span>
-<span class="n">model</span> <span class="o">=</span> <span class="n">PowerIterationClustering</span><span class="o">.</span><span class="n">train</span><span class="p">(</span><span class="n">similarities</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">10</span><span class="p">)</span>
-
-<span class="n">model</span><span class="o">.</span><span class="n">assignments</span><span class="p">()</span><span class="o">.</span><span class="n">foreach</span><span class="p">(</span><span class="k">lambda</span> <span class="n">x</span><span class="p">:</span> <span class="k">print</span><span class="p">(</span><span class="nb">str</span><span class="p">(</span><span class="n">x</span><span class="o">.</span><span class="n">id</span><span class="p">)</span> <span class="o">+</span> <span class="s">&quot; -&gt; &quot;</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">x</span><span class="o">.</span><span class="n">cluster</span><span class="p">)))</span>
-
-<span class="c"># Save and load model</span>
-<span class="n">model</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">sc</span><span class="p">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="p">)</span>
-<span class="n">sameModel</span> <span class="o">=</span> <span class="n">PowerIterationClusteringModel</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">sc</span><span class="p">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="p">)</span></code></pre></div>
-
- </div>
-
-</div>
-
-<h2 id="latent-dirichlet-allocation-lda">Latent Dirichlet allocation (LDA)</h2>
-
-<p><a href="http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation">Latent Dirichlet allocation (LDA)</a>
-is a topic model which infers topics from a collection of text documents.
-LDA can be thought of as a clustering algorithm as follows:</p>
-
-<ul>
- <li>Topics correspond to cluster centers, and documents correspond to
-examples (rows) in a dataset.</li>
- <li>Topics and documents both exist in a feature space, where feature
-vectors are vectors of word counts (bag of words).</li>
- <li>Rather than estimating a clustering using a traditional distance, LDA
-uses a function based on a statistical model of how text documents are
-generated.</li>
-</ul>
-
-<p>LDA supports different inference algorithms via the <code>setOptimizer</code> function.
-<code>EMLDAOptimizer</code> learns clustering using
-<a href="http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm">expectation-maximization</a>
-on the likelihood function and yields comprehensive results, while
-<code>OnlineLDAOptimizer</code> uses iterative mini-batch sampling for <a href="https://www.cs.princeton.edu/~blei/papers/HoffmanBleiBach2010b.pdf">online
-variational
-inference</a>
-and is generally memory-friendly.</p>
-
-<p>LDA takes in a collection of documents as vectors of word counts and the
-following parameters (set using the builder pattern):</p>
-
-<ul>
- <li><code>k</code>: Number of topics (i.e., cluster centers)</li>
- <li><code>optimizer</code>: Optimizer to use for learning the LDA model, either
-<code>EMLDAOptimizer</code> or <code>OnlineLDAOptimizer</code></li>
- <li><code>docConcentration</code>: Dirichlet parameter for prior over documents&#8217;
-distributions over topics. Larger values encourage smoother inferred
-distributions.</li>
- <li><code>topicConcentration</code>: Dirichlet parameter for prior over topics&#8217;
-distributions over terms (words). Larger values encourage smoother
-inferred distributions.</li>
- <li><code>maxIterations</code>: Limit on the number of iterations.</li>
- <li><code>checkpointInterval</code>: If using checkpointing (set in the Spark
-configuration), this parameter specifies the frequency with which
-checkpoints will be created. If <code>maxIterations</code> is large, using
-checkpointing can help reduce shuffle file sizes on disk and help with
-failure recovery.</li>
-</ul>
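The word count vectors mentioned above are plain bag-of-words counts over a fixed vocabulary. A minimal, Spark-free sketch of that encoding (the documents and vocabulary here are made up; real pipelines would use Spark's own feature transformers):

```python
from collections import Counter

# Minimal bag-of-words encoding: each document becomes a vector of
# word counts indexed by a shared, sorted vocabulary.
docs = [
    "spark mllib supports lda",
    "lda is a topic model",
    "spark clusters documents",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

def to_counts(doc):
    c = Counter(doc.split())
    return [c.get(w, 0) for w in vocab]

vectors = [to_counts(d) for d in docs]
print(vocab)
print(vectors)
```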
-
-<p>All of MLlib&#8217;s LDA models support:</p>
-
-<ul>
- <li><code>describeTopics</code>: Returns topics as arrays of most important terms and
-term weights</li>
- <li><code>topicsMatrix</code>: Returns a <code>vocabSize</code> by <code>k</code> matrix where each column
-is a topic</li>
-</ul>
-
-<p><em>Note</em>: LDA is still an experimental feature under active development.
-As a result, certain features are only available in one of the two
-optimizers / models generated by the optimizer. Currently, a distributed
-model can be converted into a local model, but not vice-versa.</p>
-
-<p>The following discussion will describe each optimizer/model pair
-separately.</p>
-
-<p><strong>Expectation Maximization</strong></p>
-
-<p>Implemented in
-<a href="api/scala/index.html#org.apache.spark.mllib.clustering.EMLDAOptimizer"><code>EMLDAOptimizer</code></a>
-and
-<a href="api/scala/index.html#org.apache.spark.mllib.clustering.DistributedLDAModel"><code>DistributedLDAModel</code></a>.</p>
-
-<p>For the parameters provided to <code>LDA</code>:</p>
-
-<ul>
-  <li><code>docConcentration</code>: Only symmetric priors are supported, so all values
-in the provided <code>k</code>-dimensional vector must be identical. All values
-must also be $&gt; 1.0$. Providing <code>Vector(-1)</code> results in default behavior
-(a uniform <code>k</code>-dimensional vector with value $(50 / k) + 1$).</li>
-  <li><code>topicConcentration</code>: Only symmetric priors are supported. Values must be
-$&gt; 1.0$. Providing <code>-1</code> results in defaulting to a value of $0.1 + 1$.</li>
- <li><code>maxIterations</code>: The maximum number of EM iterations.</li>
-</ul>
-
-<p><code>EMLDAOptimizer</code> produces a <code>DistributedLDAModel</code>, which stores not only
-the inferred topics but also the full training corpus and topic
-distributions for each document in the training corpus. A
-<code>DistributedLDAModel</code> supports:</p>
-
-<ul>
- <li><code>topTopicsPerDocument</code>: The top topics and their weights for
- each document in the training corpus</li>
- <li><code>topDocumentsPerTopic</code>: The top documents for each topic and
- the corresponding weight of the topic in the documents.</li>
- <li><code>logPrior</code>: log probability of the estimated topics and
- document-topic distributions given the hyperparameters
- <code>docConcentration</code> and <code>topicConcentration</code></li>
- <li><code>logLikelihood</code>: log likelihood of the training corpus, given the
- inferred topics and document-topic distributions</li>
-</ul>
-
-<p><strong>Online Variational Bayes</strong></p>
-
-<p>Implemented in
-<a href="api/scala/index.html#org.apache.spark.mllib.clustering.OnlineLDAOptimizer"><code>OnlineLDAOptimizer</code></a>
-and
-<a href="api/scala/index.html#org.apache.spark.mllib.clustering.LocalLDAModel"><code>LocalLDAModel</code></a>.</p>
-
-<p>For the parameters provided to <code>LDA</code>:</p>
-
-<ul>
- <li><code>docConcentration</code>: Asymmetric priors can be used by passing in a
-vector with values equal to the Dirichlet parameter in each of the <code>k</code>
-dimensions. Values should be $&gt;= 0$. Providing <code>Vector(-1)</code> results in
-default behavior (a uniform <code>k</code>-dimensional vector with value $(1.0 / k)$)</li>
-  <li><code>topicConcentration</code>: Only symmetric priors are supported. Values must be
-$&gt;= 0$. Providing <code>-1</code> results in defaulting to a value of $(1.0 / k)$.</li>
- <li><code>maxIterations</code>: Maximum number of minibatches to submit.</li>
-</ul>
-
-<p>In addition, <code>OnlineLDAOptimizer</code> accepts the following parameters:</p>
-
-<ul>
- <li><code>miniBatchFraction</code>: Fraction of corpus sampled and used at each
-iteration</li>
- <li><code>optimizeDocConcentration</code>: If set to true, performs maximum-likelihood
-estimation of the hyperparameter <code>docConcentration</code> (aka <code>alpha</code>)
-after each minibatch and sets the optimized <code>docConcentration</code> in the
-returned <code>LocalLDAModel</code></li>
- <li><code>tau0</code> and <code>kappa</code>: Used for learning-rate decay, which is computed by
-$(\tau_0 + iter)^{-\kappa}$ where $iter$ is the current number of iterations.</li>
-</ul>
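The learning-rate schedule above is easy to check numerically. The following is an illustrative plain-Python sketch, not Spark code; the defaults `tau0=1024.0` and `kappa=0.51` are assumed values for illustration:

```python
def learning_rate(iteration, tau0=1024.0, kappa=0.51):
    """Learning-rate decay (tau0 + iter)^(-kappa) used by online variational Bayes.

    tau0 downweights early iterations; kappa controls how quickly the rate
    decays (kappa must be > 0.5 for the usual convergence guarantees).
    """
    return (tau0 + iteration) ** (-kappa)

# The rate decreases monotonically, so later minibatches get smaller updates.
rates = [learning_rate(i) for i in (0, 10, 100, 1000)]
```

Raising `tau0` slows down early updates; raising `kappa` makes the rate fall off faster over the whole run.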
-
-<p><code>OnlineLDAOptimizer</code> produces a <code>LocalLDAModel</code>, which only stores the
-inferred topics. A <code>LocalLDAModel</code> supports:</p>
-
-<ul>
-  <li><code>logLikelihood(documents)</code>: Calculates a lower bound on the log
-likelihood of the provided <code>documents</code> given the inferred topics.</li>
- <li><code>logPerplexity(documents)</code>: Calculates an upper bound on the
-perplexity of the provided <code>documents</code> given the inferred topics.</li>
-</ul>
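The two bounds are related: perplexity is the exponentiated negative log likelihood per token, so a lower bound on the log likelihood yields an upper bound on the perplexity. A hedged plain-arithmetic sketch of that relationship (the per-token normalization here is an assumption for illustration, not the Spark API):

```python
import math

def perplexity_bound(log_likelihood_bound, token_count):
    """Upper bound on perplexity from a lower bound on the corpus log likelihood.

    perplexity = exp(-logLikelihood / tokenCount), so plugging in a lower
    bound on the log likelihood gives an upper bound on the perplexity.
    """
    return math.exp(-log_likelihood_bound / token_count)

# A higher (less negative) log likelihood bound gives a lower perplexity bound.
```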
-
-<p><strong>Examples</strong></p>
-
-<p>In the following example, we load word count vectors representing a corpus of documents.
-We then use <a href="api/scala/index.html#org.apache.spark.mllib.clustering.LDA">LDA</a>
-to infer three topics from the documents. The number of desired clusters is passed
-to the algorithm. We then output the topics, represented as probability distributions over words.</p>
-
-<div class="codetabs">
-<div data-lang="scala">
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">import</span> <span class="nn">org.apache.spark.mllib.clustering.</span><span class="o">{</span><span class="nc">LDA</span><span class="o">,</span> <span class="nc">DistributedLDAModel</span><span class="o">}</span>
-<span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
-
-<span class="c1">// Load and parse the data</span>
-<span class="k">val</span> <span class="n">data</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;data/mllib/sample_lda_data.txt&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">parsedData</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span><span class="o">(</span><span class="n">s</span> <span class="k">=&gt;</span> <span class="nc">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="o">(</span><span class="n">s</span><span class="o">.</span><span class="n">trim</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="sc">&#39; &#39;</span><span class="o">).</span><span class="n">map</span><span class="o">(</span><span class="k">_</span><span class="o">.</span><span class="n">toDouble</span><span class="o">)))</span>
-<span class="c1">// Index documents with unique IDs</span>
-<span class="k">val</span> <span class="n">corpus</span> <span class="k">=</span> <span class="n">parsedData</span><span class="o">.</span><span class="n">zipWithIndex</span><span class="o">.</span><span class="n">map</span><span class="o">(</span><span class="k">_</span><span class="o">.</span><span class="n">swap</span><span class="o">).</span><span class="n">cache</span><span class="o">()</span>
-
-<span class="c1">// Cluster the documents into three topics using LDA</span>
-<span class="k">val</span> <span class="n">ldaModel</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">LDA</span><span class="o">().</span><span class="n">setK</span><span class="o">(</span><span class="mi">3</span><span class="o">).</span><span class="n">run</span><span class="o">(</span><span class="n">corpus</span><span class="o">)</span>
-
-<span class="c1">// Output topics. Each is a distribution over words (matching word count vectors)</span>
-<span class="n">println</span><span class="o">(</span><span class="s">&quot;Learned topics (as distributions over vocab of &quot;</span> <span class="o">+</span> <span class="n">ldaModel</span><span class="o">.</span><span class="n">vocabSize</span> <span class="o">+</span> <span class="s">&quot; words):&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">topics</span> <span class="k">=</span> <span class="n">ldaModel</span><span class="o">.</span><span class="n">topicsMatrix</span>
-<span class="k">for</span> <span class="o">(</span><span class="n">topic</span> <span class="k">&lt;-</span> <span class="nc">Range</span><span class="o">(</span><span class="mi">0</span><span class="o">,</span> <span class="mi">3</span><span class="o">))</span> <span class="o">{</span>
- <span class="n">print</span><span class="o">(</span><span class="s">&quot;Topic &quot;</span> <span class="o">+</span> <span class="n">topic</span> <span class="o">+</span> <span class="s">&quot;:&quot;</span><span class="o">)</span>
- <span class="k">for</span> <span class="o">(</span><span class="n">word</span> <span class="k">&lt;-</span> <span class="nc">Range</span><span class="o">(</span><span class="mi">0</span><span class="o">,</span> <span class="n">ldaModel</span><span class="o">.</span><span class="n">vocabSize</span><span class="o">))</span> <span class="o">{</span> <span class="n">print</span><span class="o">(</span><span class="s">&quot; &quot;</span> <span class="o">+</span> <span class="n">topics</span><span class="o">(</span><span class="n">word</span><span class="o">,</span> <span class="n">topic</span><span class="o">));</span> <span class="o">}</span>
- <span class="n">println</span><span class="o">()</span>
-<span class="o">}</span>
-
-<span class="c1">// Save and load model.</span>
-<span class="n">ldaModel</span><span class="o">.</span><span class="n">save</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myLDAModel&quot;</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">sameModel</span> <span class="k">=</span> <span class="nc">DistributedLDAModel</span><span class="o">.</span><span class="n">load</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;myLDAModel&quot;</span><span class="o">)</span></code></pre></div>
-
- </div>
-
-<div data-lang="java">
-
- <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">scala.Tuple2</span><span class="o">;</span>
-
-<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.*</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.function.Function</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.DistributedLDAModel</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.clustering.LDA</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Matrix</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vector</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.spark.SparkConf</span><span class="o">;</span>
-
-<span class="kd">public</span> <span class="kd">class</span> <span class="nc">JavaLDAExample</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">SparkConf</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SparkConf</span><span class="o">().</span><span class="na">setAppName</span><span class="o">(</span><span class="s">&quot;LDA Example&quot;</span><span class="o">);</span>
- <span class="n">JavaSparkContext</span> <span class="n">sc</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">JavaSparkContext</span><span class="o">(</span><span class="n">conf</span><span class="o">);</span>
-
- <span class="c1">// Load and parse the data</span>
- <span class="n">String</span> <span class="n">path</span> <span class="o">=</span> <span class="s">&quot;data/mllib/sample_lda_data.txt&quot;</span><span class="o">;</span>
- <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="n">path</span><span class="o">);</span>
- <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Vector</span><span class="o">&gt;</span> <span class="n">parsedData</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
- <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;()</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="n">Vector</span> <span class="nf">call</span><span class="o">(</span><span class="n">String</span> <span class="n">s</span><span class="o">)</span> <span class="o">{</span>
- <span class="n">String</span><span class="o">[]</span> <span class="n">sarray</span> <span class="o">=</span> <span class="n">s</span><span class="o">.</span><span class="na">trim</span><span class="o">().</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">);</span>
- <span class="kt">double</span><span class="o">[]</span> <span class="n">values</span> <span class="o">=</span> <span class="k">new</span> <span class="kt">double</span><span class="o">[</span><span class="n">sarray</span><span class="o">.</span><span class="na">length</span><span class="o">];</span>
- <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">sarray</span><span class="o">.</span><span class="na">length</span><span class="o">;</span> <span class="n">i</span><span class="o">++)</span>
- <span class="n">values</span><span class="o">[</span><span class="n">i</span><span class="o">]</span> <span class="o">=</span> <span class="n">Double</span><span class="o">.</span><span class="na">parseDouble</span><span class="o">(</span><span class="n">sarray</span><span class="o">[</span><span class="n">i</span><span class="o">]);</span>
- <span class="k">return</span> <span class="n">Vectors</span><span class="o">.</span><span class="na">dense</span><span class="o">(</span><span class="n">values</span><span class="o">);</span>
- <span class="o">}</span>
- <span class="o">}</span>
- <span class="o">);</span>
- <span class="c1">// Index documents with unique IDs</span>
- <span class="n">JavaPairRDD</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;</span> <span class="n">corpus</span> <span class="o">=</span> <span class="n">JavaPairRDD</span><span class="o">.</span><span class="na">fromJavaRDD</span><span class="o">(</span><span class="n">parsedData</span><span class="o">.</span><span class="na">zipWithIndex</span><span class="o">().</span><span class="na">map</span><span class="o">(</span>
- <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Vector</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;,</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
- <span class="kd">public</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;</span> <span class="nf">call</span><span class="o">(</span><span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Vector</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">doc_id</span><span class="o">)</span> <span class="o">{</span>
- <span class="k">return</span> <span class="n">doc_id</span><span class="o">.</span><span class="na">swap</span><span class="o">();</span>
- <span class="o">}</span>
- <span class="o">}</span>
- <span class="o">));</span>
- <span class="n">corpus</span><span class="o">.</span><span class="na">cache</span><span class="o">();</span>
-
- <span class="c1">// Cluster the documents into three topics using LDA</span>
- <span class="n">DistributedLDAModel</span> <span class="n">ldaModel</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LDA</span><span class="o">().</span><span class="na">setK</span><span class="o">(</span><span class="mi">3</span><span class="o">).</span><span class="na">run</span><span class="o">(</span><span class="n">corpus</span><span class="o">);</span>
-
- <span class="c1">// Output topics. Each is a distribution over words (matching word count vectors)</span>
- <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Learned topics (as distributions over vocab of &quot;</span> <span class="o">+</span> <span class="n">ldaModel</span><span class="o">.</span><span class="na">vocabSize</span><span class="o">()</span>
- <span class="o">+</span> <span class="s">&quot; words):&quot;</span><span class="o">);</span>
- <span class="n">Matrix</span> <span class="n">topics</span> <span class="o">=</span> <span class="n">ldaModel</span><span class="o">.</span><span class="na">topicsMatrix</span><span class="o">();</span>
- <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">topic</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">topic</span> <span class="o">&lt;</span> <span class="mi">3</span><span class="o">;</span> <span class="n">topic</span><span class="o">++)</span> <span class="o">{</span>
- <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">print</span><span class="o">(</span><span class="s">&quot;Topic &quot;</span> <span class="o">+</span> <span class="n">topic</span> <span class="o">+</span> <span class="s">&quot;:&quot;</span><span class="o">);</span>
- <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">word</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">word</span> <span class="o">&lt;</span> <span class="n">ldaModel</span><span class="o">.</span><span class="na">vocabSize</span><span class="o">();</span> <span class="n">word</span><span class="o">++)</span> <span class="o">{</span>
- <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">print</span><span class="o">(</span><span class="s">&quot; &quot;</span> <span class="o">+</span> <span class="n">topics</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">word</span><span class="o">,</span> <span class="n">topic</span><span class="o">));</span>
- <span class="o">}</span>
- <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">();</span>
- <span class="o">}</span>
-
- <span class="n">ldaModel</span><span class="o">.</span><span class="na">save</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myLDAModel&quot;</span><span class="o">);</span>
- <span class="n">DistributedLDAModel</span> <span class="n">sameModel</span> <span class="o">=</span> <span class="n">DistributedLDAModel</span><span class="o">.</span><span class="na">load</span><span class="o">(</span><span class="n">sc</span><span class="o">.</span><span class="na">sc</span><span class="o">(),</span> <span class="s">&quot;myLDAModel&quot;</span><span class="o">);</span>
- <span class="o">}</span>
-<span class="o">}</span></code></pre></div>
-
- </div>
-
-<div data-lang="python">
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">from</span> <span class="nn">pyspark.mllib.clustering</span> <span class="kn">import</span> <span class="n">LDA</span><span class="p">,</span> <span class="n">LDAModel</span>
-<span class="kn">from</span> <span class="nn">pyspark.mllib.linalg</span> <span class="kn">import</span> <span class="n">Vectors</span>
-
-<span class="c"># Load and parse the data</span>
-<span class="n">data</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;data/mllib/sample_lda_data.txt&quot;</span><span class="p">)</span>
-<span class="n">parsedData</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">line</span><span class="p">:</span> <span class="n">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="p">([</span><span class="nb">float</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span class="o">.</span><span class="n">strip</span><span class="p">()</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">&#39; &#39;</span><span class="p">)]))</span>
-<span class="c"># Index documents with unique IDs</span>
-<span class="n">corpus</span> <span class="o">=</span> <span class="n">parsedData</span><span class="o">.</span><span class="n">zipWithIndex</span><span class="p">()</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">x</span><span class="p">:</span> <span class="p">[</span><span class="n">x</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">x</span><span class="p">[</span><span class="mi">0</span><span class="p">]])</span><span class="o">.</span><span class="n">cache</span><span class="p">()</span>
-
-<span class="c"># Cluster the documents into three topics using LDA</span>
-<span class="n">ldaModel</span> <span class="o">=</span> <span class="n">LDA</span><span class="o">.</span><span class="n">train</span><span class="p">(</span><span class="n">corpus</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
-
-<span class="c"># Output topics. Each is a distribution over words (matching word count vectors)</span>
-<span class="k">print</span><span class="p">(</span><span class="s">&quot;Learned topics (as distributions over vocab of &quot;</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">ldaModel</span><span class="o">.</span><span class="n">vocabSize</span><span class="p">())</span> <span class="o">+</span> <span class="s">&quot; words):&quot;</span><span class="p">)</span>
-<span class="n">topics</span> <span class="o">=</span> <span class="n">ldaModel</span><span class="o">.</span><span class="n">topicsMatrix</span><span class="p">()</span>
-<span class="k">for</span> <span class="n">topic</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">):</span>
- <span class="k">print</span><span class="p">(</span><span class="s">&quot;Topic &quot;</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">topic</span><span class="p">)</span> <span class="o">+</span> <span class="s">&quot;:&quot;</span><span class="p">)</span>
- <span class="k">for</span> <span class="n">word</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">ldaModel</span><span class="o">.</span><span class="n">vocabSize</span><span class="p">()):</span>
- <span class="k">print</span><span class="p">(</span><span class="s">&quot; &quot;</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">topics</span><span class="p">[</span><span class="n">word</span><span class="p">][</span><span class="n">topic</span><span class="p">]))</span>
-
-<span class="c"># Save and load model</span>
-<span class="n">ldaModel</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">sc</span><span class="p">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="p">)</span>
-<span class="n">sameModel</span> <span class="o">=</span> <span class="n">LDAModel</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">sc</span><span class="p">,</span> <span class="s">&quot;myModelPath&quot;</span><span class="p">)</span></code></pre></div>
-
- </div>
-
-</div>
-
-<h2 id="streaming-k-means">Streaming k-means</h2>
-
-<p>When data arrive in a stream, we may want to estimate clusters dynamically,
-updating them as new data arrive. MLlib provides support for streaming k-means clustering,
-with parameters to control the decay (or &#8220;forgetfulness&#8221;) of the estimates. The algorithm
-uses a generalization of the mini-batch k-means update rule. For each batch of data, we assign
-all points to their nearest cluster, compute new cluster centers, then update each cluster using:</p>
-
-<p><code>\begin{equation}
- c_{t+1} = \frac{c_tn_t\alpha + x_tm_t}{n_t\alpha+m_t}
-\end{equation}</code>
-<code>\begin{equation}
- n_{t+1} = n_t + m_t
-\end{equation}</code></p>
-
-<p>Where <code>$c_t$</code> is the previous center for the cluster, <code>$n_t$</code> is the number of points assigned
-to the cluster thus far, <code>$x_t$</code> is the new cluster center from the current batch, and <code>$m_t$</code>
-is the number of points added to the cluster in the current batch. The decay factor <code>$\alpha$</code>
-can be used to ignore the past: with <code>$\alpha=1$</code> all data will be used from the beginning;
-with <code>$\alpha=0$</code> only the most recent data will be used. This is analogous to an
-exponentially-weighted moving average.</p>
-
-<p>The decay can be specified using a <code>halfLife</code> parameter, which determines the
-decay factor <code>$\alpha$</code> such that, for data acquired
-at time <code>t</code>, its contribution by time <code>t + halfLife</code> will have dropped to 0.5.
-The unit of time can be specified either as <code>batches</code> or <code>points</code> and the update rule
-will be adjusted accordingly.</p>
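The update rule and the half-life conversion above can be sketched in a few lines of plain Python. This is an illustrative sketch of the math, not Spark's implementation; the function names are hypothetical:

```python
def update_center(c_t, n_t, x_t, m_t, alpha):
    """One streaming k-means update for a single cluster.

    c_t: previous center (list of floats); n_t: points assigned so far;
    x_t: center of the points assigned to this cluster in the current batch;
    m_t: number of points in the current batch; alpha: decay factor in [0, 1].
    """
    denom = n_t * alpha + m_t
    c_new = [(c * n_t * alpha + x * m_t) / denom for c, x in zip(c_t, x_t)]
    n_new = n_t + m_t
    return c_new, n_new

def decay_from_half_life(half_life):
    """Decay factor alpha with alpha**half_life == 0.5, so a point's
    contribution halves after `half_life` time units (batches or points)."""
    return 0.5 ** (1.0 / half_life)

# With alpha = 0 the past is forgotten entirely: the new center is the batch center.
center, count = update_center([0.0, 0.0], 100, [4.0, 2.0], 10, alpha=0.0)
```

With `alpha = 1` the update reduces to a weighted mean of all points seen so far, matching the "use all data from the beginning" behavior described above.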
-
-<p><strong>Examples</strong></p>
-
-<p>This example shows how to estimate clusters on streaming data.</p>
-
-<div class="codetabs">
-
-<div data-lang="scala">
-
-  <p>First we import the necessary classes.</p>
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
-<span class="k">import</span> <span class="nn">org.apache.spark.mllib.regression.LabeledPoint</span>
-<span class="k">import</span> <span class="nn">org.apache.spark.mllib.clustering.StreamingKMeans</span></code></pre></div>
-
-  <p>Then we make an input stream of vectors for training, as well as a stream of labeled data
-points for testing. We assume a StreamingContext <code>ssc</code> has been created; see the
-<a href="streaming-programming-guide.html#initializing">Spark Streaming Programming Guide</a> for more info.</p>
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">trainingData</span> <span class="k">=</span> <span class="n">ssc</span><span class="o">.</span><span class="n">textFileStream</span><span class="o">(</span><span class="s">&quot;/training/data/dir&quot;</span><span class="o">).</span><span class="n">map</span><span class="o">(</span><span class="nc">Vectors</span><span class="o">.</span><span class="n">parse</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">testData</span> <span class="k">=</span> <span class="n">ssc</span><span class="o">.</span><span class="n">textFileStream</span><span class="o">(</span><span class="s">&quot;/testing/data/dir&quot;</span><span class="o">).</span><span class="n">map</span><span class="o">(</span><span class="nc">LabeledPoint</span><span class="o">.</span><span class="n">parse</span><span class="o">)</span></code></pre></div>
-
-  <p>We create a model with random clusters and specify the number of clusters to find.</p>
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">numDimensions</span> <span class="k">=</span> <span class="mi">3</span>
-<span class="k">val</span> <span class="n">numClusters</span> <span class="k">=</span> <span class="mi">2</span>
-<span class="k">val</span> <span class="n">model</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">StreamingKMeans</span><span class="o">()</span>
- <span class="o">.</span><span class="n">setK</span><span class="o">(</span><span class="n">numClusters</span><span class="o">)</span>
- <span class="o">.</span><span class="n">setDecayFactor</span><span class="o">(</span><span class="mf">1.0</span><span class="o">)</span>
- <span class="o">.</span><span class="n">setRandomCenters</span><span class="o">(</span><span class="n">numDimensions</span><span class="o">,</span> <span class="mf">0.0</span><span class="o">)</span></code></pre></div>
-
- <p>Now register the streams for training and testing and start the job, printing
-the predicted cluster assignments on new data points as they arrive.</p>
-
- <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="n">model</span><span class="o">.</span><span class="n">trainOn</span><span class="o">(</span><span class="n">trainingData</span><span class="o">)</span>
-<span class="n">model</span><span class="o">.</span><span class="n">predictOnValues</span><span class="o">(</span><span class="n">testData</span><span class="o">.</span><span class="n">map</span><span class="o">(</span><span class="n">lp</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="n">lp</span><span class="o">.</span><span class="n">label</span><span class="o">,</span> <span class="n">lp</span><span class="o">.</span><span class="n">features</span><span class="o">))).</span><span class="n">print</span><span class="o">()</span>
-
-<span class="n">ssc</span><span class="o">.</span><span class="n">start</span><span class="o">()</span>
-<span class="n">ssc</span><span class="o">.</span><span class="n">awaitTermination</span><span class="o">()</span></code></pre></div>
-
- </div>
-
-<div data-lang="python">
-  <p>First we import the necessary classes.</p>
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">from</span> <span class="nn">pyspark.mllib.linalg</span> <span class="kn">import</span> <span class="n">Vectors</span>
-<span class="kn">from</span> <span class="nn">pyspark.mllib.regression</span> <span class="kn">import</span> <span class="n">LabeledPoint</span>
-<span class="kn">from</span> <span class="nn">pyspark.mllib.clustering</span> <span class="kn">import</span> <span class="n">StreamingKMeans</span></code></pre></div>
-
-  <p>Then we make an input stream of vectors for training, as well as a stream of labeled data
-points for testing. We assume a StreamingContext <code>ssc</code> has been created; see the
-<a href="streaming-programming-guide.html#initializing">Spark Streaming Programming Guide</a> for more info.</p>
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="n">lp</span><span class="p">):</span>
- <span class="n">label</span> <span class="o">=</span> <span class="nb">float</span><span class="p">(</span><span class="n">lp</span><span class="p">[</span><span class="n">lp</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s">&#39;(&#39;</span><span class="p">)</span> <span class="o">+</span> <span class="mi">1</span><span class="p">:</span> <span class="n">lp</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s">&#39;,&#39;</span><span class="p">)])</span>
- <span class="n">vec</span> <span class="o">=</span> <span class="n">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="p">(</span><span class="n">lp</span><span class="p">[</span><span class="n">lp</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s">&#39;[&#39;</span><span class="p">)</span> <span class="o">+</span> <span class="mi">1</span><span class="p">:</span> <span class="n">lp</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s">&#39;]&#39;</span><span class="p">)]</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">&#39;,&#39;</span><span class="p">))</span>
- <span class="k">return</span> <span class="n">LabeledPoint</span><span class="p">(</span><span class="n">label</span><span class="p">,</span> <span class="n">vec</span><span class="p">)</span>
-
-<span class="n">trainingData</span> <span class="o">=</span> <span class="n">ssc</span><span class="o">.</span><span class="n">textFileStream</span><span class="p">(</span><span class="s">&quot;/training/data/dir&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="n">Vectors</span><span class="o">.</span><span class="n">parse</span><span class="p">)</span>
-<span class="n">testData</span> <span class="o">=</span> <span class="n">ssc</span><span class="o">.</span><span class="n">textFileStream</span><span class="p">(</span><span class="s">&quot;/testing/data/dir&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="n">parse</span><span class="p">)</span></code></pre></div>
-
-  <p>We create a model with random clusters and specify the number of clusters to find.</p>
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">model</span> <span class="o">=</span> <span class="n">StreamingKMeans</span><span class="p">(</span><span class="n">k</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">decayFactor</span><span class="o">=</span><span class="mf">1.0</span><span class="p">)</span><span class="o">.</span><span class="n">setRandomCenters</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span></code></pre></div>
-
- <p>Now register the streams for training and testing and start the job, printing
-the predicted cluster assignments on new data points as they arrive.</p>
-
- <div class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">model</span><span class="o">.</span><span class="n">trainOn</span><span class="p">(</span><span class="n">trainingData</span><span class="p">)</span>
-<span class="n">model</span><span class="o">.</span><span class="n">predictOnValues</span><span class="p">(</span><span class="n">testData</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">lp</span><span class="p">:</span> <span class="p">(</span><span class="n">lp</span><span class="o">.</span><span class="n">label</span><span class="p">,</span> <span class="n">lp</span><span class="o">.</span><span class="n">features</span><span class="p">)))</span><span class="o">.</span><span class="n">pprint</span><span class="p">()</span>
-
-<span class="n">ssc</span><span class="o">.</span><span class="n">start</span><span class="p">()</span>
-<span class="n">ssc</span><span class="o">.</span><span class="n">awaitTermination</span><span class="p">()</span></code></pre></div>
-
- </div>
-
-</div>
-
-<p>As you add new text files with data, the cluster centers will update. Each training
-point should be formatted as <code>[x1, x2, x3]</code>, and each test data point
-should be formatted as <code>(y, [x1, x2, x3])</code>, where <code>y</code> is some useful label or identifier
-(e.g. a true category assignment). Any time a text file is placed in <code>/training/data/dir</code>
-the model will update, and any time a text file is placed in <code>/testing/data/dir</code>
-you will see predictions. The cluster centers will shift as new data arrives.</p>
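The string handling behind the test-data <code>parse</code> helper (whose first line is cut off above) can be sketched in plain Python, with no Spark required. The name <code>parse_point</code> and the tuple return are illustrative stand-ins; the real helper wraps the pieces in <code>LabeledPoint(label, Vectors.dense(...))</code>:

```python
# Sketch of parsing the "(y, [x1, x2, x3])" test-data format described above.
# Returns a plain (label, features) tuple instead of a LabeledPoint.
def parse_point(line):
    # The label y sits between the opening '(' and the first ','.
    label = float(line[line.find('(') + 1:line.find(',')])
    # The features sit between '[' and ']', separated by commas.
    vec = [float(x) for x in line[line.find('[') + 1:line.find(']')].split(',')]
    return (label, vec)

print(parse_point("(1.0, [1.9, 1.8, 1.7])"))  # -> (1.0, [1.9, 1.8, 1.7])
```

The training-data lines, being bare <code>[x1, x2, x3]</code> vectors, are handled directly by <code>Vectors.parse</code> in the example above.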
-
-
- </div> <!-- /container -->
-
- <script src="js/vendor/jquery-1.8.0.min.js"></script>
- <script src="js/vendor/bootstrap.min.js"></script>
- <script src="js/vendor/anchor.min.js"></script>
- <script src="js/main.js"></script>
-
- <!-- MathJax Section -->
- <script type="text/x-mathjax-config">
- MathJax.Hub.Config({
- TeX: { equationNumbers: { autoNumber: "AMS" } }
- });
- </script>
- <script>
- // Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
- // We could use "//cdn.mathjax...", but that won't support "file://".
- (function(d, script) {
- script = d.createElement('script');
- script.type = 'text/javascript';
- script.async = true;
- script.onload = function(){
- MathJax.Hub.Config({
- tex2jax: {
- inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
- displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
- processEscapes: true,
- skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
- }
- });
- };
- script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
- 'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML';
- d.getElementsByTagName('head')[0].appendChild(script);
- }(document));
- </script>
- </body>
-</html>