author     Patrick Wendell <pwendell@apache.org>    2014-05-30 08:55:36 +0000
committer  Patrick Wendell <pwendell@apache.org>    2014-05-30 08:55:36 +0000
commit     66fb4b11cbae79d9044b2bbf2e53351642a58ff5 (patch)
tree       2a3238ae425816d2296c2972987e56b3e949ea0b /site/docs/1.0.0/mllib-optimization.html
parent     3936fd5d223549bfa06bd6eeba6becfc88daee52 (diff)
download   spark-website-66fb4b11cbae79d9044b2bbf2e53351642a58ff5.tar.gz
           spark-website-66fb4b11cbae79d9044b2bbf2e53351642a58ff5.tar.bz2
           spark-website-66fb4b11cbae79d9044b2bbf2e53351642a58ff5.zip
Docs for Spark 1.0.0
Diffstat (limited to 'site/docs/1.0.0/mllib-optimization.html')
-rw-r--r--  site/docs/1.0.0/mllib-optimization.html  428
1 file changed, 428 insertions(+), 0 deletions(-)
diff --git a/site/docs/1.0.0/mllib-optimization.html b/site/docs/1.0.0/mllib-optimization.html
new file mode 100644
index 000000000..44483f94f
--- /dev/null
+++ b/site/docs/1.0.0/mllib-optimization.html
@@ -0,0 +1,428 @@
+<!DOCTYPE html>
+<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
+<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
+<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
+<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
+ <head>
+ <meta charset="utf-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+ <title>Optimization - MLlib - Spark 1.0.0 Documentation</title>
+ <meta name="description" content="">
+
+
+
+ <link rel="stylesheet" href="css/bootstrap.min.css">
+ <style>
+ body {
+ padding-top: 60px;
+ padding-bottom: 40px;
+ }
+ </style>
+ <meta name="viewport" content="width=device-width">
+ <link rel="stylesheet" href="css/bootstrap-responsive.min.css">
+ <link rel="stylesheet" href="css/main.css">
+
+ <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
+
+ <link rel="stylesheet" href="css/pygments-default.css">
+
+
+
+ </head>
+ <body>
+ <!--[if lt IE 7]>
+ <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
+ <![endif]-->
+
+ <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
+
+ <div class="navbar navbar-fixed-top" id="topbar">
+ <div class="navbar-inner">
+ <div class="container">
+ <div class="brand"><a href="index.html">
+ <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">1.0.0</span>
+ </div>
+ <ul class="nav">
+ <!--TODO(andyk): Add class="active" attribute to li some how.-->
+ <li><a href="index.html">Overview</a></li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="quick-start.html">Quick Start</a></li>
+ <li><a href="programming-guide.html">Spark Programming Guide</a></li>
+ <li class="divider"></li>
+ <li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
+ <li><a href="sql-programming-guide.html">Spark SQL</a></li>
+ <li><a href="mllib-guide.html">MLlib (Machine Learning)</a></li>
+ <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
+ <li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="api/scala/index.html#org.apache.spark.package">Scaladoc</a></li>
+ <li><a href="api/java/index.html">Javadoc</a></li>
+ <li><a href="api/python/index.html">Python API</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="cluster-overview.html">Overview</a></li>
+ <li><a href="submitting-applications.html">Submitting Applications</a></li>
+ <li class="divider"></li>
+ <li><a href="ec2-scripts.html">Amazon EC2</a></li>
+ <li><a href="spark-standalone.html">Standalone Mode</a></li>
+ <li><a href="running-on-mesos.html">Mesos</a></li>
+ <li><a href="running-on-yarn.html">YARN</a></li>
+ </ul>
+ </li>
+
+ <li class="dropdown">
+ <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ <li><a href="configuration.html">Configuration</a></li>
+ <li><a href="monitoring.html">Monitoring</a></li>
+ <li><a href="tuning.html">Tuning Guide</a></li>
+ <li><a href="job-scheduling.html">Job Scheduling</a></li>
+ <li><a href="security.html">Security</a></li>
+ <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
+ <li><a href="hadoop-third-party-distributions.html">3<sup>rd</sup>-Party Hadoop Distros</a></li>
+ <li class="divider"></li>
+ <li><a href="building-with-maven.html">Building Spark with Maven</a></li>
+ <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
+ </ul>
+ </li>
+ </ul>
+ <!--<p class="navbar-text pull-right"><span class="version-text">v1.0.0</span></p>-->
+ </div>
+ </div>
+ </div>
+
+ <div class="container" id="content">
+
+ <h1 class="title"><a href="mllib-guide.html">MLlib</a> - Optimization</h1>
+
+
+ <ul id="markdown-toc">
+ <li><a href="#mathematical-description">Mathematical description</a> <ul>
+ <li><a href="#gradient-descent">Gradient descent</a></li>
+ <li><a href="#stochastic-gradient-descent-sgd">Stochastic gradient descent (SGD)</a></li>
+ <li><a href="#update-schemes-for-distributed-sgd">Update schemes for distributed SGD</a></li>
+ <li><a href="#limited-memory-bfgs-l-bfgs">Limited-memory BFGS (L-BFGS)</a></li>
+ </ul>
+ </li>
+ <li><a href="#implementation-in-mllib">Implementation in MLlib</a> <ul>
+ <li><a href="#gradient-descent-and-stochastic-gradient-descent">Gradient descent and stochastic gradient descent</a></li>
+ <li><a href="#l-bfgs">L-BFGS</a> <ul>
+ <li><a href="#developers-note">Developer&#8217;s note</a></li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+</ul>
+
+<p><code>\[
+\newcommand{\R}{\mathbb{R}}
+\newcommand{\E}{\mathbb{E}}
+\newcommand{\x}{\mathbf{x}}
+\newcommand{\y}{\mathbf{y}}
+\newcommand{\wv}{\mathbf{w}}
+\newcommand{\av}{\mathbf{\alpha}}
+\newcommand{\bv}{\mathbf{b}}
+\newcommand{\N}{\mathbb{N}}
+\newcommand{\id}{\mathbf{I}}
+\newcommand{\ind}{\mathbf{1}}
+\newcommand{\0}{\mathbf{0}}
+\newcommand{\unit}{\mathbf{e}}
+\newcommand{\one}{\mathbf{1}}
+\newcommand{\zero}{\mathbf{0}}
+\]</code></p>
+
+<h2 id="mathematical-description">Mathematical description</h2>
+
+<h3 id="gradient-descent">Gradient descent</h3>
+<p>The simplest method to solve optimization problems of the form <code>$\min_{\wv \in\R^d} \; f(\wv)$</code>
+is <a href="http://en.wikipedia.org/wiki/Gradient_descent">gradient descent</a>.
+Such first-order optimization methods (including gradient descent and stochastic variants
+thereof) are well-suited for large-scale and distributed computation.</p>
+
+<p>Gradient descent methods aim to find a local minimum of a function by iteratively taking steps in
+the direction of steepest descent, which is the negative of the derivative (called the
+<a href="http://en.wikipedia.org/wiki/Gradient">gradient</a>) of the function at the current point, i.e., at
+the current parameter value.
+If the objective function <code>$f$</code> is not differentiable at all arguments, but still convex, then a
+<em>sub-gradient</em>
+is the natural generalization of the gradient, and assumes the role of the step direction.
+In any case, computing a gradient or sub-gradient of <code>$f$</code> is expensive &#8212; it requires a full
+pass through the complete dataset, in order to compute the contributions from all loss terms.</p>
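+
+<p>Concretely, starting from an initial parameter vector <code>$\wv^{(0)}$</code>, plain gradient descent repeats the update
+<code>\[
+\wv^{(t+1)} := \wv^{(t)} - \gamma \, \nabla f(\wv^{(t)}) \ ,
+\]</code>
+where <code>$\gamma$</code> is a step-size parameter, until a sufficiently good solution is reached.</p>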
+
+<h3 id="stochastic-gradient-descent-sgd">Stochastic gradient descent (SGD)</h3>
+<p>Optimization problems whose objective function <code>$f$</code> is written as a sum are particularly
+suitable to be solved using <em>stochastic gradient descent (SGD)</em>.
+In our case, for the optimization formulations commonly used in <a href="mllib-classification-regression.html">supervised machine learning</a>,
+<code>\begin{equation}
+ f(\wv) :=
+ \lambda\, R(\wv) +
+ \frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)
+ \label{eq:regPrimal}
+ \ .
+\end{equation}</code>
+this is especially natural, because the loss is written as an average of the individual losses
+coming from each datapoint.</p>
+
+<p>A stochastic subgradient is a randomized choice of a vector, such that in expectation, we obtain
+a true subgradient of the original objective function.
+Picking one datapoint <code>$i\in[1..n]$</code> uniformly at random, we obtain a stochastic subgradient of
+<code>$\eqref{eq:regPrimal}$</code>, with respect to <code>$\wv$</code> as follows:
+<code>\[
+f'_{\wv,i} := L'_{\wv,i} + \lambda\, R'_\wv \ ,
+\]</code>
+where <code>$L'_{\wv,i} \in \R^d$</code> is a subgradient of the part of the loss function determined by the
+<code>$i$</code>-th datapoint, that is <code>$L'_{\wv,i} \in \frac{\partial}{\partial \wv} L(\wv;\x_i,y_i)$</code>.
+Furthermore, <code>$R'_\wv$</code> is a subgradient of the regularizer <code>$R(\wv)$</code>, i.e. <code>$R'_\wv \in
+\frac{\partial}{\partial \wv} R(\wv)$</code>. The term <code>$R'_\wv$</code> does not depend on which random
+datapoint is picked.
+Clearly, in expectation over the random choice of <code>$i\in[1..n]$</code>, we have that <code>$f'_{\wv,i}$</code> is
+a subgradient of the original objective <code>$f$</code>, meaning that <code>$\E\left[f'_{\wv,i}\right] \in
+\frac{\partial}{\partial \wv} f(\wv)$</code>.</p>
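+
+<p>For example, for the least-squares loss <code>$L(\wv;\x_i,y_i) = \frac12 (\wv^T \x_i - y_i)^2$</code>,
+the (sub)gradient coming from the <code>$i$</code>-th datapoint is simply
+<code>\[
+L'_{\wv,i} = (\wv^T \x_i - y_i) \, \x_i \ .
+\]</code></p>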
+
+<p>Running SGD now simply becomes walking in the direction of the negative stochastic subgradient
+<code>$f'_{\wv,i}$</code>, that is
+<code>\begin{equation}\label{eq:SGDupdate}
+\wv^{(t+1)} := \wv^{(t)} - \gamma \; f'_{\wv,i} \ .
+\end{equation}</code>
+<strong>Step-size.</strong>
+The parameter <code>$\gamma$</code> is the step-size, which in the default implementation is chosen
+decreasing with the square root of the iteration counter, i.e. <code>$\gamma := \frac{s}{\sqrt{t}}$</code>
+in the <code>$t$</code>-th iteration, with the input parameter <code>$s=$ stepSize</code>. Note that selecting the best
+step-size for SGD methods can often be delicate in practice and is a topic of active research.</p>
+
+<p><strong>Gradients.</strong>
+A table of (sub)gradients of the machine learning methods implemented in MLlib is available in
+the <a href="mllib-classification-regression.html">classification and regression</a> section.</p>
+
+<p><strong>Proximal Updates.</strong>
+As an alternative to just using the subgradient <code>$R'(\wv)$</code> of the regularizer in the step
+direction, an improved update can be obtained in some cases by using the proximal operator
+instead.
+For the L1-regularizer, the proximal operator is given by soft thresholding, as implemented in
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.L1Updater">L1Updater</a>.</p>
+
+<h3 id="update-schemes-for-distributed-sgd">Update schemes for distributed SGD</h3>
+<p>The SGD implementation in
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent">GradientDescent</a> uses
+a simple (distributed) sampling of the data examples.
+We recall that the loss part of the optimization problem <code>$\eqref{eq:regPrimal}$</code> is
+<code>$\frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)$</code>, and therefore <code>$\frac1n \sum_{i=1}^n L'_{\wv,i}$</code> would
+be the true (sub)gradient.
+Since this would require access to the full data set, the parameter <code>miniBatchFraction</code> specifies
+which fraction of the full data to use instead.
+The average of the gradients over this subset, i.e.
+<code>\[
+\frac1{|S|} \sum_{i\in S} L'_{\wv,i} \ ,
+\]</code>
+is a stochastic gradient. Here <code>$S$</code> is the sampled subset of size <code>$|S|=$ miniBatchFraction
+$\cdot n$</code>.</p>
+
+<p>In each iteration, the sampling over the distributed dataset
+(<a href="programming-guide.html#resilient-distributed-datasets-rdds">RDD</a>), as well as the
+computation of the sum of the partial results from each worker machine, is performed by
+standard Spark routines.</p>
+
+<p>If the fraction of points <code>miniBatchFraction</code> is set to 1 (the default), then the resulting step in
+each iteration is exact (sub)gradient descent. In this case there is no randomness and no
+variance in the step directions used.
+On the other extreme, if <code>miniBatchFraction</code> is chosen very small, such that only a single point
+is sampled, i.e. <code>$|S|=$ miniBatchFraction $\cdot n = 1$</code>, then the algorithm is equivalent to
+standard SGD. In that case, the step direction depends on the uniformly random sampling of the
+point.</p>
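+
+<p>To illustrate the scheme, here is a conceptual sketch (not the actual <code>GradientDescent</code> implementation)
+of a single distributed mini-batch step on an RDD of (label, features) pairs. The per-example gradient function
+<code>pointGradient</code> is a hypothetical placeholder for the gradient computations described in the
+implementation section below.</p>
+
+<div class="highlight"><pre><code class="scala">import org.apache.spark.mllib.linalg.Vector
+import org.apache.spark.rdd.RDD
+
+// Conceptual sketch of one mini-batch (sub)gradient step.
+// `pointGradient(label, features, weights)` is a hypothetical per-example (sub)gradient.
+def miniBatchStep(
+    data: RDD[(Double, Vector)],
+    weights: Array[Double],
+    miniBatchFraction: Double,
+    stepSize: Double,
+    iter: Int,  // current iteration number, starting at 1
+    pointGradient: (Double, Vector, Array[Double]) =&gt; Array[Double]): Array[Double] = {
+  // Sample a fraction of the data without replacement, distributed across the workers.
+  val sampled = data.sample(false, miniBatchFraction, 42 + iter)
+  // Sum the per-example gradients and count how many points were sampled.
+  val (gradientSum, count) = sampled
+    .map { case (label, features) =&gt; (pointGradient(label, features, weights), 1L) }
+    .reduce { case ((g1, n1), (g2, n2)) =&gt;
+      (g1.zip(g2).map { case (a, b) =&gt; a + b }, n1 + n2)
+    }
+  // Average the sampled gradients and take a step with the decreasing step size s / sqrt(t).
+  val gamma = stepSize / math.sqrt(iter)
+  weights.zip(gradientSum).map { case (w, g) =&gt; w - gamma * (g / count) }
+}
+</code></pre></div>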
+
+<h3 id="limited-memory-bfgs-l-bfgs">Limited-memory BFGS (L-BFGS)</h3>
+<p><a href="http://en.wikipedia.org/wiki/Limited-memory_BFGS">L-BFGS</a> is an optimization
+algorithm in the family of quasi-Newton methods to solve optimization problems of the form
+<code>$\min_{\wv \in\R^d} \; f(\wv)$</code>. The L-BFGS method approximates the objective function locally as a
+quadratic without evaluating the second partial derivatives of the objective function to construct the
+Hessian matrix. Instead, the Hessian matrix is approximated from previous gradient evaluations, which avoids the
+vertical scalability issue (in the number of training features) that arises when the Hessian matrix is computed
+explicitly, as in Newton&#8217;s method. As a result, L-BFGS often achieves faster convergence than
+other first-order optimization methods. </p>
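+
+<p>Roughly speaking, at iteration <code>$t$</code>, a quasi-Newton method such as L-BFGS takes a step along the direction
+<code>\[
+p_t := -H_t \, \nabla f(\wv^{(t)}) \ ,
+\]</code>
+where <code>$H_t$</code> is an approximation of the inverse Hessian of <code>$f$</code>. L-BFGS builds this approximation
+implicitly from a limited number of recent gradient and parameter differences, so it never needs to store a
+dense <code>$d \times d$</code> matrix.</p>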
+
+<h2 id="implementation-in-mllib">Implementation in MLlib</h2>
+
+<h3 id="gradient-descent-and-stochastic-gradient-descent">Gradient descent and stochastic gradient descent</h3>
+<p>Gradient descent methods, including stochastic subgradient descent (SGD), are
+included as a low-level primitive in <code>MLlib</code>, upon which various ML algorithms
+are developed; see the
+<a href="mllib-linear-methods.html">linear methods</a>
+section for an example.</p>
+
+<p>The SGD method
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent">GradientDescent.runMiniBatchSGD</a>
+has the following parameters:</p>
+
+<ul>
+ <li><code>Gradient</code> is a class that computes the stochastic gradient of the function
+being optimized, i.e., with respect to a single training example, at the
+current parameter value. MLlib includes gradient classes for common loss
+functions, e.g., hinge, logistic, least-squares. The gradient class takes as
+input a training example, its label, and the current parameter value. </li>
+ <li><code>Updater</code> is a class that performs the actual gradient descent step, i.e.
+updating the weights in each iteration, for a given gradient of the loss part.
+The updater is also responsible for performing the update coming from the regularization
+part. MLlib includes updaters for cases without regularization, as well as
+L1 and L2 regularizers.</li>
+ <li><code>stepSize</code> is a scalar value denoting the initial step size for gradient
+descent. All updaters in MLlib use a step size at the t-th step equal to
+<code>stepSize $/ \sqrt{t}$</code>. </li>
+ <li><code>numIterations</code> is the number of iterations to run.</li>
+ <li><code>regParam</code> is the regularization parameter when using L1 or L2 regularization.</li>
+ <li><code>miniBatchFraction</code> is the fraction of the total data that is sampled in
+each iteration, to compute the gradient direction.</li>
+</ul>
+
+<p>Available algorithms for gradient descent:</p>
+
+<ul>
+ <li><a href="api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent">GradientDescent.runMiniBatchSGD</a></li>
+</ul>
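+
+<p>As a rough usage sketch, the primitive can be invoked directly as follows, with the arguments in the order
+listed above (a <code>Gradient</code>, an <code>Updater</code>, the scalar settings, and an initial weight vector);
+please check the API docs for the exact signature. The returned tuple contains the final weights and the
+history of the loss in each iteration.</p>
+
+<div class="highlight"><pre><code class="scala">import org.apache.spark.mllib.linalg.Vectors
+import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
+import org.apache.spark.mllib.util.MLUtils
+
+// `sc` is an existing SparkContext; the data path is only illustrative.
+val points = MLUtils.loadLibSVMFile(sc, &quot;mllib/data/sample_libsvm_data.txt&quot;)
+  .map(p =&gt; (p.label, p.features))
+  .cache()
+
+val numFeatures = points.first()._2.size
+
+val (weights, lossHistory) = GradientDescent.runMiniBatchSGD(
+  points,                        // RDD of (label, features) pairs
+  new LogisticGradient(),        // (sub)gradient of the logistic loss
+  new SquaredL2Updater(),        // updater implementing L2 regularization
+  1.0,                           // stepSize
+  100,                           // numIterations
+  0.1,                           // regParam
+  1.0,                           // miniBatchFraction
+  Vectors.dense(new Array[Double](numFeatures)))  // initial weights (all zeros)
+</code></pre></div>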
+
+<h3 id="l-bfgs">L-BFGS</h3>
+<p>L-BFGS is currently only a low-level optimization primitive in <code>MLlib</code>. If you want to use L-BFGS in various
+ML algorithms such as Linear Regression and Logistic Regression, you have to pass the gradient of the objective
+function and an updater into the optimizer yourself, instead of using training APIs like
+<a href="api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD">LogisticRegressionWithSGD</a>.
+See the example below. This will be addressed in the next release. </p>
+
+<p>L1 regularization via
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.L1Updater">L1Updater</a> will not work, since the
+soft-thresholding logic in L1Updater is designed for gradient descent. See the developer&#8217;s note below.</p>
+
+<p>The L-BFGS method
+<a href="api/scala/index.html#org.apache.spark.mllib.optimization.LBFGS">LBFGS.runLBFGS</a>
+has the following parameters:</p>
+
+<ul>
+ <li><code>Gradient</code> is a class that computes the gradient of the objective function
+being optimized, i.e., with respect to a single training example, at the
+current parameter value. MLlib includes gradient classes for common loss
+functions, e.g., hinge, logistic, least-squares. The gradient class takes as
+input a training example, its label, and the current parameter value. </li>
+  <li><code>Updater</code> is a class that computes the gradient and loss of the regularization
+part of the objective function for L-BFGS. MLlib includes updaters for cases without
+regularization, as well as the L2 regularizer. </li>
+ <li><code>numCorrections</code> is the number of corrections used in the L-BFGS update. 10 is
+recommended.</li>
+  <li><code>maxNumIterations</code> is the maximal number of iterations that L-BFGS can run for.</li>
+ <li><code>regParam</code> is the regularization parameter when using regularization.</li>
+</ul>
+
+<p>The return value is a tuple containing two elements. The first element is a column matrix
+containing weights for every feature, and the second element is an array containing
+the loss computed for every iteration.</p>
+
+<p>Here is an example that trains binary logistic regression with L2 regularization using
+the L-BFGS optimizer. </p>
+
+<div class="highlight"><pre><code class="scala"><span class="k">import</span> <span class="nn">org.apache.spark.SparkContext</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.evaluation.BinaryClassificationMetrics</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.util.MLUtils</span>
+<span class="k">import</span> <span class="nn">org.apache.spark.mllib.classification.LogisticRegressionModel</span>
+
+<span class="k">val</span> <span class="n">data</span> <span class="k">=</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="n">loadLibSVMFile</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">&quot;mllib/data/sample_libsvm_data.txt&quot;</span><span class="o">)</span>
+<span class="k">val</span> <span class="n">numFeatures</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">take</span><span class="o">(</span><span class="mi">1</span><span class="o">)(</span><span class="mi">0</span><span class="o">).</span><span class="n">features</span><span class="o">.</span><span class="n">size</span>
+
+<span class="c1">// Split data into training (60%) and test (40%).</span>
+<span class="k">val</span> <span class="n">splits</span> <span class="k">=</span> <span class="n">data</span><span class="o">.</span><span class="n">randomSplit</span><span class="o">(</span><span class="nc">Array</span><span class="o">(</span><span class="mf">0.6</span><span class="o">,</span> <span class="mf">0.4</span><span class="o">),</span> <span class="n">seed</span> <span class="k">=</span> <span class="mi">11L</span><span class="o">)</span>
+
+<span class="c1">// Append 1 into the training data as intercept.</span>
+<span class="k">val</span> <span class="n">training</span> <span class="k">=</span> <span class="n">splits</span><span class="o">(</span><span class="mi">0</span><span class="o">).</span><span class="n">map</span><span class="o">(</span><span class="n">x</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="n">x</span><span class="o">.</span><span class="n">label</span><span class="o">,</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="n">appendBias</span><span class="o">(</span><span class="n">x</span><span class="o">.</span><span class="n">features</span><span class="o">))).</span><span class="n">cache</span><span class="o">()</span>
+
+<span class="k">val</span> <span class="n">test</span> <span class="k">=</span> <span class="n">splits</span><span class="o">(</span><span class="mi">1</span><span class="o">)</span>
+
+<span class="c1">// Run training algorithm to build the model</span>
+<span class="k">val</span> <span class="n">numCorrections</span> <span class="k">=</span> <span class="mi">10</span>
+<span class="k">val</span> <span class="n">convergenceTol</span> <span class="k">=</span> <span class="mi">1</span><span class="n">e</span><span class="o">-</span><span class="mi">4</span>
+<span class="k">val</span> <span class="n">maxNumIterations</span> <span class="k">=</span> <span class="mi">20</span>
+<span class="k">val</span> <span class="n">regParam</span> <span class="k">=</span> <span class="mf">0.1</span>
+<span class="k">val</span> <span class="n">initialWeightsWithIntercept</span> <span class="k">=</span> <span class="nc">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="o">(</span><span class="k">new</span> <span class="nc">Array</span><span class="o">[</span><span class="kt">Double</span><span class="o">](</span><span class="n">numFeatures</span> <span class="o">+</span> <span class="mi">1</span><span class="o">))</span>
+
+<span class="k">val</span> <span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">,</span> <span class="n">loss</span><span class="o">)</span> <span class="k">=</span> <span class="nc">LBFGS</span><span class="o">.</span><span class="n">runLBFGS</span><span class="o">(</span>
+ <span class="n">training</span><span class="o">,</span>
+ <span class="k">new</span> <span class="nc">LogisticGradient</span><span class="o">(),</span>
+ <span class="k">new</span> <span class="nc">SquaredL2Updater</span><span class="o">(),</span>
+ <span class="n">numCorrections</span><span class="o">,</span>
+ <span class="n">convergenceTol</span><span class="o">,</span>
+ <span class="n">maxNumIterations</span><span class="o">,</span>
+ <span class="n">regParam</span><span class="o">,</span>
+ <span class="n">initialWeightsWithIntercept</span><span class="o">)</span>
+
+<span class="k">val</span> <span class="n">model</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">LogisticRegressionModel</span><span class="o">(</span>
+ <span class="nc">Vectors</span><span class="o">.</span><span class="n">dense</span><span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="n">toArray</span><span class="o">.</span><span class="n">slice</span><span class="o">(</span><span class="mi">0</span><span class="o">,</span> <span class="n">weightsWithIntercept</span><span class="o">.</span><span class="n">size</span> <span class="o">-</span> <span class="mi">1</span><span class="o">)),</span>
+ <span class="n">weightsWithIntercept</span><span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="n">size</span> <span class="o">-</span> <span class="mi">1</span><span class="o">))</span>
+
+<span class="c1">// Clear the default threshold.</span>
+<span class="n">model</span><span class="o">.</span><span class="n">clearThreshold</span><span class="o">()</span>
+
+<span class="c1">// Compute raw scores on the test set.</span>
+<span class="k">val</span> <span class="n">scoreAndLabels</span> <span class="k">=</span> <span class="n">test</span><span class="o">.</span><span class="n">map</span> <span class="o">{</span> <span class="n">point</span> <span class="k">=&gt;</span>
+ <span class="k">val</span> <span class="n">score</span> <span class="k">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="o">(</span><span class="n">point</span><span class="o">.</span><span class="n">features</span><span class="o">)</span>
+ <span class="o">(</span><span class="n">score</span><span class="o">,</span> <span class="n">point</span><span class="o">.</span><span class="n">label</span><span class="o">)</span>
+<span class="o">}</span>
+
+<span class="c1">// Get evaluation metrics.</span>
+<span class="k">val</span> <span class="n">metrics</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">BinaryClassificationMetrics</span><span class="o">(</span><span class="n">scoreAndLabels</span><span class="o">)</span>
+<span class="k">val</span> <span class="n">auROC</span> <span class="k">=</span> <span class="n">metrics</span><span class="o">.</span><span class="n">areaUnderROC</span><span class="o">()</span>
+
+<span class="n">println</span><span class="o">(</span><span class="s">&quot;Loss of each step in training process&quot;</span><span class="o">)</span>
+<span class="n">loss</span><span class="o">.</span><span class="n">foreach</span><span class="o">(</span><span class="n">println</span><span class="o">)</span>
+<span class="n">println</span><span class="o">(</span><span class="s">&quot;Area under ROC = &quot;</span> <span class="o">+</span> <span class="n">auROC</span><span class="o">)</span>
+</code></pre></div>
+
+<h4 id="developers-note">Developer&#8217;s note</h4>
+<p>Since the Hessian is constructed approximately from previous gradient evaluations,
+the objective function cannot be changed during the optimization process.
+As a result, stochastic L-BFGS will not work naively by just using mini-batches;
+therefore, we don&#8217;t provide this until we have a better understanding of it.</p>
+
+<ul>
+  <li><code>Updater</code> is a class originally designed for gradient descent, which computes
+the actual gradient descent step. However, we&#8217;re able to obtain the gradient and
+loss of the regularization part of the objective function for L-BFGS by ignoring the parts of its logic
+that apply only to gradient descent, such as the adaptive step-size logic. We will later refactor
+this into a regularizer that replaces the updater, in order to separate the regularization logic
+from the step update. </li>
+</ul>
+
+
+ </div> <!-- /container -->
+
+ <script src="js/vendor/jquery-1.8.0.min.js"></script>
+ <script src="js/vendor/bootstrap.min.js"></script>
+ <script src="js/main.js"></script>
+
+ <!-- MathJax Section -->
+ <script type="text/x-mathjax-config">
+ MathJax.Hub.Config({
+ TeX: { equationNumbers: { autoNumber: "AMS" } }
+ });
+ </script>
+ <script type="text/javascript"
+ src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
+ <script>
+ MathJax.Hub.Config({
+ tex2jax: {
+ inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
+ displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
+ processEscapes: true,
+ skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
+ }
+ });
+ </script>
+ </body>
+</html>