path: root/site/docs/1.5.0/mllib-optimization.html
author    Reynold Xin <rxin@apache.org> 2015-09-17 22:11:21 +0000
committer Reynold Xin <rxin@apache.org> 2015-09-17 22:11:21 +0000
commit    6f57b0c45a7d1b6255067c6e9bc549baa491acac (patch)
tree      dbf7d7a7700e9e6bad3c8289ab831bc9c2c20d62 /site/docs/1.5.0/mllib-optimization.html
parent    ee9ffe89d608e7640a2487406b618d27e58026d6 (diff)
add 1.5.0 back
Diffstat (limited to 'site/docs/1.5.0/mllib-optimization.html')
-rw-r--r--  site/docs/1.5.0/mllib-optimization.html | 558
1 file changed, 558 insertions, 0 deletions
diff --git a/site/docs/1.5.0/mllib-optimization.html b/site/docs/1.5.0/mllib-optimization.html
new file mode 100644
index 000000000..317181f56
--- /dev/null
+++ b/site/docs/1.5.0/mllib-optimization.html
@@ -0,0 +1,558 @@
+<!DOCTYPE html>
+<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
+<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
+<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
+<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
+ <head>
+ <meta charset="utf-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+ <title>Optimization - MLlib - Spark 1.5.0 Documentation</title>
+
+
+
+
+ <link rel="stylesheet" href="css/bootstrap.min.css">
+ <style>
+ body {
+ padding-top: 60px;
+ padding-bottom: 40px;
+ }
+ </style>
+ <meta name="viewport" content="width=device-width">
+ <link rel="stylesheet" href="css/bootstrap-responsive.min.css">
+ <link rel="stylesheet" href="css/main.css">
+
+ <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
+
+ <link rel="stylesheet" href="css/pygments-default.css">
+
+
+ <!-- Google analytics script -->
+ <script type="text/javascript">
+ var _gaq = _gaq || [];
+ _gaq.push(['_setAccount', 'UA-32518208-2']);
+ _gaq.push(['_trackPageview']);
+
+ (function() {
+ var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+ ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+ var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+ })();
+ </script>
+
+
+ </head>
+ <body>
+ <!--[if lt IE 7]>
+ <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
+ <![endif]-->
+
+ <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
+
+ <div class="navbar navbar-fixed-top" id="topbar">
+ <div class="navbar-inner">
+ <div class="container">
+ <div class="brand"><a href="index.html">
+ <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">1.5.0</span>
+ </div>
+ <ul class="nav">
+ <!--TODO(andyk): Add class="active" attribute to li some how.-->
+ <li><a href="index.html">Overview</a></li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="quick-start.html">Quick Start</a></li>
+ <li><a href="programming-guide.html">Spark Programming Guide</a></li>
+ <li class="divider"></li>
+ <li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
+ <li><a href="sql-programming-guide.html">DataFrames and SQL</a></li>
+ <li><a href="mllib-guide.html">MLlib (Machine Learning)</a></li>
+ <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
+ <li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
+ <li><a href="sparkr.html">SparkR (R on Spark)</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li>
+ <li><a href="api/java/index.html">Java</a></li>
+ <li><a href="api/python/index.html">Python</a></li>
+ <li><a href="api/R/index.html">R</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="cluster-overview.html">Overview</a></li>
+ <li><a href="submitting-applications.html">Submitting Applications</a></li>
+ <li class="divider"></li>
+ <li><a href="spark-standalone.html">Spark Standalone</a></li>
+ <li><a href="running-on-mesos.html">Mesos</a></li>
+ <li><a href="running-on-yarn.html">YARN</a></li>
+ <li class="divider"></li>
+ <li><a href="ec2-scripts.html">Amazon EC2</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="configuration.html">Configuration</a></li>
+ <li><a href="monitoring.html">Monitoring</a></li>
+ <li><a href="tuning.html">Tuning Guide</a></li>
+ <li><a href="job-scheduling.html">Job Scheduling</a></li>
+ <li><a href="security.html">Security</a></li>
+ <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
+ <li><a href="hadoop-third-party-distributions.html">3<sup>rd</sup>-Party Hadoop Distros</a></li>
+ <li class="divider"></li>
+ <li><a href="building-spark.html">Building Spark</a></li>
+ <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
+ <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Supplemental+Spark+Projects">Supplemental Projects</a></li>
+ </ul>
+ </li>
+ </ul>
+ <!--<p class="navbar-text pull-right"><span class="version-text">v1.5.0</span></p>-->
+ </div>
+ </div>
+ </div>
+
+ <div class="container" id="content">
+
+ <h1 class="title"><a href="mllib-guide.html">MLlib</a> - Optimization</h1>
+
+
+ <ul id="markdown-toc">
+ <li><a href="#mathematical-description">Mathematical description</a> <ul>
+ <li><a href="#gradient-descent">Gradient descent</a></li>
+ <li><a href="#stochastic-gradient-descent-sgd">Stochastic gradient descent (SGD)</a></li>
+ <li><a href="#update-schemes-for-distributed-sgd">Update schemes for distributed SGD</a></li>
+ <li><a href="#limited-memory-bfgs-l-bfgs">Limited-memory BFGS (L-BFGS)</a></li>
+ <li><a href="#choosing-an-optimization-method">Choosing an Optimization Method</a></li>
+ </ul>
+ </li>
+ <li><a href="#implementation-in-mllib">Implementation in MLlib</a> <ul>
+ <li><a href="#gradient-descent-and-stochastic-gradient-descent">Gradient descent and stochastic gradient descent</a></li>
+ <li><a href="#l-bfgs">L-BFGS</a></li>
+ </ul>
+ </li>
+ <li><a href="#developers-notes">Developer&#8217;s notes</a></li>
+</ul>
+
+<p><code>\[
+\newcommand{\R}{\mathbb{R}}
+\newcommand{\E}{\mathbb{E}}
+\newcommand{\x}{\mathbf{x}}
+\newcommand{\y}{\mathbf{y}}
+\newcommand{\wv}{\mathbf{w}}
+\newcommand{\av}{\mathbf{\alpha}}
+\newcommand{\bv}{\mathbf{b}}
+\newcommand{\N}{\mathbb{N}}
+\newcommand{\id}{\mathbf{I}}
+\newcommand{\ind}{\mathbf{1}}
+\newcommand{\0}{\mathbf{0}}
+\newcommand{\unit}{\mathbf{e}}
+\newcommand{\one}{\mathbf{1}}
+\newcommand{\zero}{\mathbf{0}}
+\]</code></p>
+
+<h2 id="mathematical-description">Mathematical description</h2>
+
+<h3 id="gradient-descent">Gradient descent</h3>
+<p>The simplest method to solve optimization problems of the form <code>$\min_{\wv \in\R^d} \; f(\wv)$</code>
+is <a href="http://en.wikipedia.org/wiki/Gradient_descent">gradient descent</a>.
+Such first-order optimization methods (including gradient descent and stochastic variants
+thereof) are well-suited for large-scale and distributed computation.</p>
+
+<p>Gradient descent methods aim to find a local minimum of a function by iteratively taking steps in
+the direction of steepest descent, which is the negative of the derivative (called the
+<a href="http://en.wikipedia.org/wiki/Gradient">gradient</a>) of the function at the current point, i.e., at
+the current parameter value.
+If the objective function <code>$f$</code> is not differentiable at all arguments, but still convex, then a
+<em>sub-gradient</em>
+is the natural generalization of the gradient, and assumes the role of the step direction.
+In any case, computing a gradient or sub-gradient of <code>$f$</code> is expensive &#8212; it requires a full
+pass through the complete dataset, in order to compute the contributions from all loss terms.</p>
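+
+<p>To make one iteration concrete, here is a minimal, non-distributed sketch of batch
+gradient descent (illustrative only, not the MLlib implementation); <code>lossGradient</code>
+is a hypothetical function returning the (sub)gradient of the loss at the current weights for
+a single labeled example. Note that every iteration makes a full pass over the data, which
+motivates the stochastic variant below.</p>
+
+<div class="highlight"><pre><code class="language-scala" data-lang="scala">// Minimal batch gradient descent sketch (illustrative only).
+// `lossGradient(w, example)` is a hypothetical per-example (sub)gradient.
+def gradientDescent(
+    data: Seq[(Double, Array[Double])],
+    lossGradient: (Array[Double], (Double, Array[Double])) =&gt; Array[Double],
+    stepSize: Double,
+    numIterations: Int,
+    initialWeights: Array[Double]): Array[Double] = {
+  var w = initialWeights
+  for (_ &lt;- 1 to numIterations) {
+    // Full pass over the dataset: average the per-example (sub)gradients.
+    val grad = data.map(p =&gt; lossGradient(w, p))
+      .reduce((a, b) =&gt; a.zip(b).map { case (x, y) =&gt; x + y })
+      .map(_ / data.size)
+    // Step in the direction of steepest descent (the negative gradient).
+    w = w.zip(grad).map { case (wi, gi) =&gt; wi - stepSize * gi }
+  }
+  w
+}</code></pre></div>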
+
+<h3 id="stochastic-gradient-descent-sgd">Stochastic gradient descent (SGD)</h3>
+<p>Optimization problems whose objective function <code>$f$</code> is written as a sum are
+particularly well suited to <em>stochastic gradient descent (SGD)</em>.
+In our case, for the optimization formulations commonly used in <a href="mllib-classification-regression.html">supervised machine learning</a>,
+<code>\begin{equation}
+ f(\wv) :=
+ \lambda\, R(\wv) +
+ \frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)
+ \label{eq:regPrimal}
+ \ .
+\end{equation}</code>
+this is especially natural, because the loss is written as an average of the individual losses
+coming from each datapoint.</p>
+
+<p>A stochastic subgradient is a randomized choice of a vector, such that in expectation, we obtain
+a true subgradient of the original objective function.
+Picking one datapoint <code>$i\in[1..n]$</code> uniformly at random, we obtain a stochastic subgradient of
+<code>$\eqref{eq:regPrimal}$</code>, with respect to <code>$\wv$</code> as follows:
+<code>\[
+f'_{\wv,i} := L'_{\wv,i} + \lambda\, R'_\wv \ ,
+\]</code>
+where <code>$L'_{\wv,i} \in \R^d$</code> is a subgradient of the part of the loss function determined by the
+<code>$i$</code>-th datapoint, that is <code>$L'_{\wv,i} \in \frac{\partial}{\partial \wv} L(\wv;\x_i,y_i)$</code>.
+Furthermore, <code>$R'_\wv$</code> is a subgradient of the regularizer <code>$R(\wv)$</code>, i.e. <code>$R'_\wv \in
+\frac{\partial}{\partial \wv} R(\wv)$</code>. The term <code>$R'_\wv$</code> does not depend on which random
+datapoint is picked.
+Clearly, in expectation over the random choice of <code>$i\in[1..n]$</code>, we have that <code>$f'_{\wv,i}$</code> is
+a subgradient of the original objective <code>$f$</code>, meaning that <code>$\E\left[f'_{\wv,i}\right] \in
+\frac{\partial}{\partial \wv} f(\wv)$</code>.</p>
+
+<p>Running SGD now simply becomes walking in the direction of the negative stochastic subgradient
+<code>$f'_{\wv,i}$</code>, that is
+<code>\begin{equation}\label{eq:SGDupdate}
+\wv^{(t+1)} := \wv^{(t)} - \gamma \; f'_{\wv,i} \ .
+\end{equation}</code>
+<strong>Step-size.</strong>
+The parameter <code>$\gamma$</code> is the step-size, which in the default implementation is
+chosen to decrease with the square root of the iteration counter, i.e. <code>$\gamma := \frac{s}{\sqrt{t}}$</code>
+in the <code>$t$</code>-th iteration, with the input parameter <code>$s=$ stepSize</code>. Note that selecting the best
+step-size for SGD methods can often be delicate in practice and is a topic of active research.</p>
+
+<p><strong>Gradients.</strong>
+A table of (sub)gradients of the machine learning methods implemented in MLlib is available in
+the <a href="mllib-classification-regression.html">classification and regression</a> section.</p>
+
+<p><strong>Proximal Updates.</strong>
+As an alternative to simply using the subgradient <code>$R'(\wv)$</code> of the regularizer in the step
+direction, an improved update can be obtained in some cases by using the proximal operator
+instead.
+For the L1-regularizer, the proximal operator is given by soft thresholding, as implemented in
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.L1Updater">L1Updater</a>.</p>
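+
+<p>For reference, the soft-thresholding operator is the standard closed form of this proximal
+update (quoted here as general background, not from the MLlib source): for the L1 regularizer
+<code>$\lambda\, \|\wv\|_1$</code> and step-size <code>$\gamma$</code>, it acts coordinate-wise as
+<code>\[
+\mathrm{prox}_{\gamma\lambda}(\wv)_j := \mathrm{sign}(w_j)\, \max\{0, |w_j| - \gamma\lambda\} \ ,
+\]</code>
+shrinking every weight towards zero and setting small weights exactly to zero.</p>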
+
+<h3 id="update-schemes-for-distributed-sgd">Update schemes for distributed SGD</h3>
+<p>The SGD implementation in
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent">GradientDescent</a> uses
+a simple (distributed) sampling of the data examples.
+We recall that the loss part of the optimization problem <code>$\eqref{eq:regPrimal}$</code> is
+<code>$\frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)$</code>, and therefore <code>$\frac1n \sum_{i=1}^n L'_{\wv,i}$</code> would
+be the true (sub)gradient.
+Since this would require access to the full data set, the parameter <code>miniBatchFraction</code> specifies
+which fraction of the full data to use instead.
+The average of the gradients over this subset, i.e.
+<code>\[
+\frac1{|S|} \sum_{i\in S} L'_{\wv,i} \ ,
+\]</code>
+is a stochastic gradient. Here <code>$S$</code> is the sampled subset of size <code>$|S|=$ miniBatchFraction
+$\cdot n$</code>.</p>
+
+<p>In each iteration, the sampling over the distributed dataset
+(<a href="programming-guide.html#resilient-distributed-datasets-rdds">RDD</a>), as well as the
+computation of the sum of the partial results from each worker machine, is performed by
+standard Spark routines.</p>
+
+<p>If the fraction of points <code>miniBatchFraction</code> is set to 1 (the default), then the resulting step in
+each iteration is exact (sub)gradient descent. In this case there is no randomness and no
+variance in the step directions used.
+At the other extreme, if <code>miniBatchFraction</code> is chosen very small, such that only a single point
+is sampled, i.e. <code>$|S|=$ miniBatchFraction $\cdot n = 1$</code>, then the algorithm is equivalent to
+standard SGD. In that case, the step direction depends on the uniformly random sampling of the
+point.</p>
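+
+<p>The following is a simplified sketch (under the assumption of a hypothetical per-example
+(sub)gradient function <code>localGradient</code>) of what one distributed mini-batch gradient
+evaluation computes; the actual <code>GradientDescent</code> implementation works with a
+<code>Gradient</code> object and <code>treeAggregate</code>, but the structure is the same.</p>
+
+<div class="highlight"><pre><code class="language-scala" data-lang="scala">import org.apache.spark.mllib.linalg.Vector
+import org.apache.spark.rdd.RDD
+
+// One distributed mini-batch (sub)gradient evaluation (sketch only).
+def miniBatchGradient(
+    data: RDD[(Double, Vector)],
+    w: Vector,
+    miniBatchFraction: Double,
+    seed: Long,
+    localGradient: (Vector, (Double, Vector)) =&gt; Array[Double]): Array[Double] = {
+  // Sample the subset S with expected size miniBatchFraction * n.
+  val sample = data.sample(withReplacement = false, miniBatchFraction, seed)
+  // On the workers, sum the per-example gradients and count sampled points.
+  val (sum, count) = sample
+    .map(p =&gt; (localGradient(w, p), 1L))
+    .reduce { case ((g1, c1), (g2, c2)) =&gt;
+      (g1.zip(g2).map { case (a, b) =&gt; a + b }, c1 + c2)
+    }
+  sum.map(_ / count) // the stochastic gradient: the average over S
+}</code></pre></div>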
+
+<h3 id="limited-memory-bfgs-l-bfgs">Limited-memory BFGS (L-BFGS)</h3>
+<p><a href="http://en.wikipedia.org/wiki/Limited-memory_BFGS">L-BFGS</a> is an optimization
+algorithm in the family of quasi-Newton methods that solves optimization problems of the form
+<code>$\min_{\wv \in\R^d} \; f(\wv)$</code>. The L-BFGS method approximates the objective function locally as a
+quadratic without evaluating the second partial derivatives of the objective function to construct the
+Hessian matrix. The Hessian matrix is approximated from previous gradient evaluations, which avoids the
+vertical scalability issue (in the number of training features) that arises when the Hessian matrix is
+computed explicitly, as in Newton&#8217;s method. As a result, L-BFGS often achieves faster convergence than
+other first-order optimization methods.</p>
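+
+<p>As standard quasi-Newton background (not specific to the MLlib implementation): at iteration
+<code>$t$</code>, L-BFGS minimizes the local quadratic model
+<code>\[
+f(\wv^{(t)} + \mathbf{p}) \approx f(\wv^{(t)}) + \nabla f(\wv^{(t)})^T \mathbf{p} + \frac12 \mathbf{p}^T B^{(t)} \mathbf{p} \ ,
+\]</code>
+where <code>$B^{(t)}$</code> is a Hessian approximation built from the last few gradient
+differences. The minimizing step <code>$\mathbf{p} = -(B^{(t)})^{-1} \nabla f(\wv^{(t)})$</code> is
+computed implicitly, without ever materializing <code>$B^{(t)}$</code>.</p>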
+
+<h3 id="choosing-an-optimization-method">Choosing an Optimization Method</h3>
+
+<p><a href="mllib-linear-methods.html">Linear methods</a> use optimization internally, and some linear methods in MLlib support both SGD and L-BFGS.
+Different optimization methods can have different convergence guarantees depending on the properties of the objective function, and we cannot cover the literature here.
+In general, when L-BFGS is available, we recommend using it instead of SGD since L-BFGS tends to converge faster (in fewer iterations).</p>
+
+<h2 id="implementation-in-mllib">Implementation in MLlib</h2>
+
+<h3 id="gradient-descent-and-stochastic-gradient-descent">Gradient descent and stochastic gradient descent</h3>
+<p>Gradient descent methods, including stochastic subgradient descent (SGD), are
+included as a low-level primitive in <code>MLlib</code>, upon which various ML algorithms
+are developed; see the
+<a href="mllib-linear-methods.html">linear methods</a>
+section for an example.</p>
+
+<p>The SGD class
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent">GradientDescent</a>
+sets the following parameters (a usage sketch follows the list):</p>
+
+<ul>
+ <li><code>Gradient</code> is a class that computes the stochastic gradient of the function
+being optimized, i.e., with respect to a single training example, at the
+current parameter value. MLlib includes gradient classes for common loss
+functions, e.g., hinge, logistic, least-squares. The gradient class takes as
+input a training example, its label, and the current parameter value. </li>
+  <li><code>Updater</code> is a class that performs the actual gradient descent step, i.e.
+updating the weights in each iteration, for a given gradient of the loss part.
+The updater is also responsible for performing the update from the regularization
+part. MLlib includes updaters for cases without regularization, as well as
+L1 and L2 regularizers.</li>
+ <li><code>stepSize</code> is a scalar value denoting the initial step size for gradient
+descent. All updaters in MLlib use a step size at the t-th step equal to
+<code>stepSize $/ \sqrt{t}$</code>. </li>
+ <li><code>numIterations</code> is the number of iterations to run.</li>
+ <li><code>regParam</code> is the regularization parameter when using L1 or L2 regularization.</li>
+ <li><code>miniBatchFraction</code> is the fraction of the total data that is sampled in
+each iteration, to compute the gradient direction.
+ <ul>
+      <li>Sampling still requires a pass over the entire RDD, so decreasing <code>miniBatchFraction</code> may not speed up optimization much. Users will see the greatest speedup when the gradient is expensive to compute, since only the chosen samples are used for computing the gradient.</li>
+ </ul>
+ </li>
+</ul>
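+
+<p>As a hedged sketch of how these parameters fit together, the developer-level entry point
+<code>GradientDescent.runMiniBatchSGD</code> can be driven directly, analogously to the L-BFGS
+example below (check the
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent">GradientDescent API docs</a>
+for the exact signature; the parameter values here are illustrative):</p>
+
+<div class="highlight"><pre><code class="language-scala" data-lang="scala">import org.apache.spark.mllib.linalg.Vectors
+import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
+import org.apache.spark.mllib.util.MLUtils
+
+// `sc` is an existing SparkContext, as in the L-BFGS example below.
+val data = MLUtils.loadLibSVMFile(sc, &quot;data/mllib/sample_libsvm_data.txt&quot;)
+val numFeatures = data.take(1)(0).features.size
+// Append 1 to each feature vector as the intercept term.
+val training = data.map(x =&gt; (x.label, MLUtils.appendBias(x.features))).cache()
+
+val (weightsWithIntercept, loss) = GradientDescent.runMiniBatchSGD(
+  training,
+  new LogisticGradient(),
+  new SquaredL2Updater(),
+  1.0,   // stepSize
+  100,   // numIterations
+  0.1,   // regParam
+  1.0,   // miniBatchFraction = 1.0 gives exact (sub)gradient descent
+  Vectors.dense(new Array[Double](numFeatures + 1)))</code></pre></div>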
+
+<h3 id="l-bfgs">L-BFGS</h3>
+<p>L-BFGS is currently only a low-level optimization primitive in <code>MLlib</code>. If you want to use L-BFGS in various
+ML algorithms such as linear regression and logistic regression, you have to pass the gradient of the objective
+function and the updater into the optimizer yourself, instead of using training APIs like
+<a href="api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD">LogisticRegressionWithSGD</a>.
+See the example below. This will be addressed in the next release.</p>
+
+<p>L1 regularization via
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.L1Updater">L1Updater</a> will not work, since the
+soft-thresholding logic in L1Updater is designed for gradient descent. See the developer&#8217;s notes.</p>
+
+<p>The L-BFGS method
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.LBFGS">LBFGS.runLBFGS</a>
+has the following parameters:</p>
+
+<ul>
+ <li><code>Gradient</code> is a class that computes the gradient of the objective function
+being optimized, i.e., with respect to a single training example, at the
+current parameter value. MLlib includes gradient classes for common loss
+functions, e.g., hinge, logistic, least-squares. The gradient class takes as
+input a training example, its label, and the current parameter value. </li>
+  <li><code>Updater</code> is a class that computes the gradient and loss of the regularization
+part of the objective function for L-BFGS. MLlib includes updaters for cases without
+regularization, as well as an L2 regularizer. </li>
+ <li><code>numCorrections</code> is the number of corrections used in the L-BFGS update. 10 is
+recommended.</li>
+  <li><code>maxNumIterations</code> is the maximal number of iterations for which L-BFGS can run.</li>
+ <li><code>regParam</code> is the regularization parameter when using regularization.</li>
+  <li><code>convergenceTol</code> controls how much relative change is still allowed before L-BFGS
+is considered to have converged. This must be nonnegative. Lower values are less tolerant and
+therefore generally cause more iterations to be run. This value takes into account both the average
+improvement and the norm of the gradient, inside <a href="https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/optimize/LBFGS.scala">Breeze LBFGS</a>.</li>
+</ul>
+
+<p>The return value is a tuple containing two elements. The first element is a column matrix
+containing weights for every feature, and the second element is an array containing
+the loss computed for every iteration.</p>
+
+<p>Here is an example that trains binary logistic regression with L2 regularization using
+the L-BFGS optimizer.</p>
+
+<div class="codetabs">
+
+<div data-lang="scala">
+
+ <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">import</span> <span class="nn">org.apache.spark.SparkContext</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.evaluation.BinaryClassificationMetrics</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.util.MLUtils</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.classification.LogisticRegressionModel</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.optimization.</span><span class="o">{</span><span class="nc">LBFGS</span><span class="o">,</span> <span class="nc">LogisticGradient</span><span class="o">,</span> <span class="nc">SquaredL2Updater</span><span class="o">}</span>
+
+<span class="k">val</span> <span class="n">data</span> <span class="k">=</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="n">loadLibSVMFile</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;data/mllib/sample_libsvm_data.txt&quot;</span><span class="o">)</span>
+<span class="k">val</span> <span class="n">numFeatures</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">take</span><span class="o">(</span><span class="mi">1</span><span class="o">)(</span><span class="mi">0</span><span class="o">).</span><span class="n">features</span><span class="o">.</span><span class="n">size</span>
+
+<span class="c1">// Split data into training (60%) and test (40%).</span>
+<span class="k">val</span> <span class="n">splits</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">randomSplit</span><span class="o">(</span><span class="nc">Array</span><span class="o">(</span><span class="mf">0.6</span><span class="o">,</span> <span class="mf">0.4</span><span class="o">),</span> <span class="n">seed</span> <span class="k">=</span> <span class="mi">11L</span><span class="o">)</span>
+
+<span class="c1">// Append 1 into the training data as intercept.</span>
+<span class="k">val</span> <span class="n">training</span> <span class="k">=</span> <span class="n">splits</span><span class="o">(</span><span class="mi">0</span><span class="o">).</span><span class="n">map</span><span class="o">(</span><span class="n">x</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="n">x</span><span class="o">.</span><span class="n">label</span><span class="o">,</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="n">appendBias</span><span class="o">(</span><span class="n">x</span><span class="o">.</span><span class="n">features</span><span class="o">))).</span><span class="n">cache</span><span class="o">()</span>
+
+<span class="k">val</span> <span class="n">test</span> <span class="k">=</span> <span class="n">splits</span><span class="o">(</span><span class="mi">1</span><span class="o">)</span>
+
+<span class="c1">// Run training algorithm to build the model</span>
+<span class="k">val</span> <span class="n">numCorrections</span> <span class="k">=</span> <span class="mi">10</span>
+<span class="k">val</span> <span class="n">convergenceTol</span> <span class="k">=</span> <span class="mi">1</span><span class="n">e</span><span class="o">-</span><span class="mi">4</span>
+<span class="k">val</span> <span class="n">maxNumIterations</span> <span class="k">=</span> <span class="mi">20</span>
+<span class="k">val</span> <span class="n">regParam</span> <span class="k">=</span> <span class="mf">0.1</span>
+<span class="k">val</span> <span class="n">initialWeightsWithIntercept</span> <span class="k">=</span> <span class="nc">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="o">(</span><span class="k">new</span> <span class="nc">Array</span><span class="o">[</span><span class="kt">Double</span><span class="o">](</span><span class="n">numFeatures</span> <span class="o">+</span> <span class="mi">1</span><span class="o">))</span>
+
+<span class="k">val</span> <span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">,</span> <span class="n">loss</span><span class="o">)</span> <span class="k">=</span> <span class="nc">LBFGS</span><span class="o">.</span><span class="n">runLBFGS</span><span class="o">(</span>
+ <span class="n">training</span><span class="o">,</span>
+ <span class="k">new</span> <span class="nc">LogisticGradient</span><span class="o">(),</span>
+ <span class="k">new</span> <span class="nc">SquaredL2Updater</span><span class="o">(),</span>
+ <span class="n">numCorrections</span><span class="o">,</span>
+ <span class="n">convergenceTol</span><span class="o">,</span>
+ <span class="n">maxNumIterations</span><span class="o">,</span>
+ <span class="n">regParam</span><span class="o">,</span>
+ <span class="n">initialWeightsWithIntercept</span><span class="o">)</span>
+
+<span class="k">val</span> <span class="n">model</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">LogisticRegressionModel</span><span class="o">(</span>
+ <span class="nc">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="n">toArray</span><span class="o">.</span><span class="n">slice</span><span class="o">(</span><span class="mi">0</span><span class="o">,</span> <span class="n">weightsWithIntercept</span><span class="o">.</span><span class="n">size</span> <span class="o">-</span> <span class="mi">1</span><span class="o">)),</span>
+ <span class="n">weightsWithIntercept</span><span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="n">size</span> <span class="o">-</span> <span class="mi">1</span><span class="o">))</span>
+
+<span class="c1">// Clear the default threshold.</span>
+<span class="n">model</span><span class="o">.</span><span class="n">clearThreshold</span><span class="o">()</span>
+
+<span class="c1">// Compute raw scores on the test set.</span>
+<span class="k">val</span> <span class="n">scoreAndLabels</span> <span class="k">=</span> <span class="n">test</span><span class="o">.</span><span class="n">map</span> <span class="o">{</span> <span class="n">point</span> <span class="k">=&gt;</span>
+ <span class="k">val</span> <span class="n">score</span> <span class="k">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="o">(</span><span class="n">point</span><span class="o">.</span><span class="n">features</span><span class="o">)</span>
+ <span class="o">(</span><span class="n">score</span><span class="o">,</span> <span class="n">point</span><span class="o">.</span><span class="n">label</span><span class="o">)</span>
+<span class="o">}</span>
+
+<span class="c1">// Get evaluation metrics.</span>
+<span class="k">val</span> <span class="n">metrics</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">BinaryClassificationMetrics</span><span class="o">(</span><span class="n">scoreAndLabels</span><span class="o">)</span>
+<span class="k">val</span> <span class="n">auROC</span> <span class="k">=</span> <span class="n">metrics</span><span class="o">.</span><span class="n">areaUnderROC</span><span class="o">()</span>
+
+<span class="n">println</span><span class="o">(</span><span class="s">&quot;Loss of each step in training process&quot;</span><span class="o">)</span>
+<span class="n">loss</span><span class="o">.</span><span class="n">foreach</span><span class="o">(</span><span class="n">println</span><span class="o">)</span>
+<span class="n">println</span><span class="o">(</span><span class="s">&quot;Area under ROC = &quot;</span> <span class="o">+</span> <span class="n">auROC</span><span class="o">)</span></code></pre></div>
+
+ </div>
+
+<div data-lang="java">
+
+ <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">java.util.Arrays</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">java.util.Random</span><span class="o">;</span>
+
+<span class="kn">import</span> <span class="nn">scala.Tuple2</span><span class="o">;</span>
+
+<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.*</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.function.Function</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.classification.LogisticRegressionModel</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.evaluation.BinaryClassificationMetrics</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vector</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.optimization.*</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.regression.LabeledPoint</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.util.MLUtils</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.SparkConf</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.spark.SparkContext</span><span class="o">;</span>
+
+<span class="kd">public</span> <span class="kd">class</span> <span class="nc">LBFGSExample</span> <span class="o">{</span>
+ <span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="o">{</span>
+ <span class="n">SparkConf</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SparkConf</span><span class="o">().</span><span class="na">setAppName</span><span class="o">(</span><span class="s">&quot;L-BFGS Example&quot;</span><span class="o">);</span>
+ <span class="n">SparkContext</span> <span class="n">sc</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SparkContext</span><span class="o">(</span><span class="n">conf</span><span class="o">);</span>
+ <span class="n">String</span> <span class="n">path</span> <span class="o">=</span> <span class="s">&quot;data/mllib/sample_libsvm_data.txt&quot;</span><span class="o">;</span>
+ <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">LabeledPoint</span><span class="o">&gt;</span> <span class="n">data</span> <span class="o">=</span> <span class="n">MLUtils</span><span class="o">.</span><span class="na">loadLibSVMFile</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="n">path</span><span class="o">).</span><span class="na">toJavaRDD</span><span class="o">();</span>
+ <span class="kt">int</span> <span class="n">numFeatures</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">take</span><span class="o">(</span><span class="mi">1</span><span class="o">).</span><span class="na">get</span><span class="o">(</span><span class="mi">0</span><span class="o">).</span><span class="na">features</span><span class="o">().</span><span class="na">size</span><span class="o">();</span>
+
+ <span class="c1">// Split initial RDD into two... [60% training data, 40% testing data].</span>
+ <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">LabeledPoint</span><span class="o">&gt;</span> <span class="n">trainingInit</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">sample</span><span class="o">(</span><span class="kc">false</span><span class="o">,</span> <span class="mf">0.6</span><span class="o">,</span> <span class="mi">11L</span><span class="o">);</span>
+ <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">LabeledPoint</span><span class="o">&gt;</span> <span class="n">test</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">subtract</span><span class="o">(</span><span class="n">trainingInit</span><span class="o">);</span>
+
+ <span class="c1">// Append 1 into the training data as intercept.</span>
+    <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;&gt;</span> <span class="n">training</span> <span class="o">=</span> <span class="n">trainingInit</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
+ <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">LabeledPoint</span><span class="o">,</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
+ <span class="kd">public</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;</span> <span class="nf">call</span><span class="o">(</span><span class="n">LabeledPoint</span> <span class="n">p</span><span class="o">)</span> <span class="o">{</span>
+ <span class="k">return</span> <span class="k">new</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Vector</span><span class="o">&gt;(</span><span class="n">p</span><span class="o">.</span><span class="na">label</span><span class="o">(),</span> <span class="n">MLUtils</span><span class="o">.</span><span class="na">appendBias</span><span class="o">(</span><span class="n">p</span><span class="o">.</span><span class="na">features</span><span class="o">()));</span>
+ <span class="o">}</span>
+ <span class="o">});</span>
+ <span class="n">training</span><span class="o">.</span><span class="na">cache</span><span class="o">();</span>
+
+ <span class="c1">// Run training algorithm to build the model.</span>
+ <span class="kt">int</span> <span class="n">numCorrections</span> <span class="o">=</span> <span class="mi">10</span><span class="o">;</span>
+ <span class="kt">double</span> <span class="n">convergenceTol</span> <span class="o">=</span> <span class="mi">1</span><span class="n">e</span><span class="o">-</span><span class="mi">4</span><span class="o">;</span>
+ <span class="kt">int</span> <span class="n">maxNumIterations</span> <span class="o">=</span> <span class="mi">20</span><span class="o">;</span>
+ <span class="kt">double</span> <span class="n">regParam</span> <span class="o">=</span> <span class="mf">0.1</span><span class="o">;</span>
+ <span class="n">Vector</span> <span class="n">initialWeightsWithIntercept</span> <span class="o">=</span> <span class="n">Vectors</span><span class="o">.</span><span class="na">dense</span><span class="o">(</span><span class="k">new</span> <span class="kt">double</span><span class="o">[</span><span class="n">numFeatures</span> <span class="o">+</span> <span class="mi">1</span><span class="o">]);</span>
+
+ <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Vector</span><span class="o">,</span> <span class="kt">double</span><span class="o">[]&gt;</span> <span class="n">result</span> <span class="o">=</span> <span class="n">LBFGS</span><span class="o">.</span><span class="na">runLBFGS</span><span class="o">(</span>
+ <span class="n">training</span><span class="o">.</span><span class="na">rdd</span><span class="o">(),</span>
+ <span class="k">new</span> <span class="nf">LogisticGradient</span><span class="o">(),</span>
+ <span class="k">new</span> <span class="nf">SquaredL2Updater</span><span class="o">(),</span>
+ <span class="n">numCorrections</span><span class="o">,</span>
+ <span class="n">convergenceTol</span><span class="o">,</span>
+ <span class="n">maxNumIterations</span><span class="o">,</span>
+ <span class="n">regParam</span><span class="o">,</span>
+ <span class="n">initialWeightsWithIntercept</span><span class="o">);</span>
+ <span class="n">Vector</span> <span class="n">weightsWithIntercept</span> <span class="o">=</span> <span class="n">result</span><span class="o">.</span><span class="na">_1</span><span class="o">();</span>
+ <span class="kt">double</span><span class="o">[]</span> <span class="n">loss</span> <span class="o">=</span> <span class="n">result</span><span class="o">.</span><span class="na">_2</span><span class="o">();</span>
+
+ <span class="kd">final</span> <span class="n">LogisticRegressionModel</span> <span class="n">model</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LogisticRegressionModel</span><span class="o">(</span>
+ <span class="n">Vectors</span><span class="o">.</span><span class="na">dense</span><span class="o">(</span><span class="n">Arrays</span><span class="o">.</span><span class="na">copyOf</span><span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">toArray</span><span class="o">(),</span> <span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">size</span><span class="o">()</span> <span class="o">-</span> <span class="mi">1</span><span class="o">)),</span>
+ <span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">toArray</span><span class="o">())[</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">size</span><span class="o">()</span> <span class="o">-</span> <span class="mi">1</span><span class="o">]);</span>
+
+ <span class="c1">// Clear the default threshold.</span>
+ <span class="n">model</span><span class="o">.</span><span class="na">clearThreshold</span><span class="o">();</span>
+
+ <span class="c1">// Compute raw scores on the test set.</span>
+ <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">scoreAndLabels</span> <span class="o">=</span> <span class="n">test</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
+ <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">LabeledPoint</span><span class="o">,</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
+ <span class="kd">public</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;</span> <span class="nf">call</span><span class="o">(</span><span class="n">LabeledPoint</span> <span class="n">p</span><span class="o">)</span> <span class="o">{</span>
+ <span class="n">Double</span> <span class="n">score</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="na">predict</span><span class="o">(</span><span class="n">p</span><span class="o">.</span><span class="na">features</span><span class="o">());</span>
+ <span class="k">return</span> <span class="k">new</span> <span class="n">Tuple2</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;(</span><span class="n">score</span><span class="o">,</span> <span class="n">p</span><span class="o">.</span><span class="na">label</span><span class="o">());</span>
+ <span class="o">}</span>
+ <span class="o">});</span>
+
+ <span class="c1">// Get evaluation metrics.</span>
+ <span class="n">BinaryClassificationMetrics</span> <span class="n">metrics</span> <span class="o">=</span>
+ <span class="k">new</span> <span class="nf">BinaryClassificationMetrics</span><span class="o">(</span><span class="n">scoreAndLabels</span><span class="o">.</span><span class="na">rdd</span><span class="o">());</span>
+ <span class="kt">double</span> <span class="n">auROC</span> <span class="o">=</span> <span class="n">metrics</span><span class="o">.</span><span class="na">areaUnderROC</span><span class="o">();</span>
+
+ <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Loss of each step in training process&quot;</span><span class="o">);</span>
+ <span class="k">for</span> <span class="o">(</span><span class="kt">double</span> <span class="n">l</span> <span class="o">:</span> <span class="n">loss</span><span class="o">)</span>
+ <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">l</span><span class="o">);</span>
+ <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Area under ROC = &quot;</span> <span class="o">+</span> <span class="n">auROC</span><span class="o">);</span>
+ <span class="o">}</span>
+<span class="o">}</span></code></pre></div>
+
+ </div>
+</div>
+
+<h2 id="developers-notes">Developer&#8217;s notes</h2>
+
+<p>Since the Hessian is constructed approximately from previous gradient evaluations,
+the objective function cannot be changed during the optimization process.
+As a result, stochastic L-BFGS will not work naively by just using mini-batches;
+therefore, we don&#8217;t provide this until we have a better understanding.</p>
+
+<p><code>Updater</code> is a class originally designed for gradient descent, which computes
+the actual gradient descent step. However, we are able to obtain the gradient and
+loss of the regularization part of the objective function for L-BFGS by ignoring the logic
+that applies only to gradient descent, such as the adaptive step size. We will later refactor
+this into a regularizer that replaces the updater, in order to separate the
+regularization logic from the step update.</p>
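+
+<p>A hedged sketch of that trick (simplified; the real logic lives inside the L-BFGS cost
+function): calling the updater with a zero gradient, a step size of 1 and iteration 1
+bypasses its adaptive step-size logic, so the regularization gradient and loss can be
+recovered from the updater&#8217;s output alone. The identities below hold for the L2 updater.</p>
+
+<div class="highlight"><pre><code class="language-scala" data-lang="scala">import org.apache.spark.mllib.linalg.{Vector, Vectors}
+import org.apache.spark.mllib.optimization.Updater
+
+// Recover the regularization gradient and loss from an Updater (sketch).
+def regGradientAndLoss(updater: Updater, w: Vector, regParam: Double): (Vector, Double) = {
+  val zero = Vectors.zeros(w.size)
+  // With step size 0 the weights are unchanged, so the returned second
+  // element is the regularization value lambda * R(w) at the current w.
+  val regLoss = updater.compute(w, zero, 0, 1, regParam)._2
+  // A unit step driven only by the regularizer gives w' = w - lambda * R'(w),
+  // hence lambda * R'(w) = w - w'.
+  val wPrime = updater.compute(w, zero, 1, 1, regParam)._1
+  val regGrad = Vectors.dense(w.toArray.zip(wPrime.toArray).map { case (a, b) =&gt; a - b })
+  (regGrad, regLoss)
+}</code></pre></div>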
+
+
+ </div> <!-- /container -->
+
+ <script src="js/vendor/jquery-1.8.0.min.js"></script>
+ <script src="js/vendor/bootstrap.min.js"></script>
+ <script src="js/vendor/anchor.min.js"></script>
+ <script src="js/main.js"></script>
+
+ <!-- MathJax Section -->
+ <script type="text/x-mathjax-config">
+ MathJax.Hub.Config({
+ TeX: { equationNumbers: { autoNumber: "AMS" } }
+ });
+ </script>
+ <script>
+ // Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
+ // We could use "//cdn.mathjax...", but that won't support "file://".
+ (function(d, script) {
+ script = d.createElement('script');
+ script.type = 'text/javascript';
+ script.async = true;
+ script.onload = function(){
+ MathJax.Hub.Config({
+ tex2jax: {
+ inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
+ displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
+ processEscapes: true,
+ skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
+ }
+ });
+ };
+ script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
+ 'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML';
+ d.getElementsByTagName('head')[0].appendChild(script);
+ }(document));
+ </script>
+ </body>
+</html>