author    Yuhao Yang <hhbyyh@gmail.com>  2015-12-08 11:46:26 -0800
committer Joseph K. Bradley <joseph@databricks.com>  2015-12-08 11:46:26 -0800
commit    5cb4695051e3dac847b1ea14d62e54dcf672c31c (patch)
tree      e75ce10784a244e720049652896a70ca0a99c306 /examples
parent    4bcb894948c1b7294d84e2bf58abb1d79e6759c6 (diff)
[SPARK-11605][MLLIB] ML 1.6 QA: API: Java compatibility, docs
jira: https://issues.apache.org/jira/browse/SPARK-11605

Check Java compatibility for MLlib for this release.

Fixes:
1. `StreamingTest.registerStream` needs a Java-friendly interface.
2. `GradientBoostedTreesModel.computeInitialPredictionAndError` and `GradientBoostedTreesModel.updatePredictionError` have a Java compatibility issue. Mark them as `DeveloperApi`.

TBD: [updated] no fix for now, per discussion. In `org.apache.spark.mllib.classification.LogisticRegressionModel`, `public scala.Option<java.lang.Object> getThreshold();` has the wrong return type for Java invocation; `SVMModel` has a similar issue. Adding a `scala.Option<java.lang.Double> getThreshold()` would cause an overloading error, since after erasure it has the same function signature as the existing method, and adding a new function under a different name seems unnecessary.

cc jkbradley feynmanliang

Author: Yuhao Yang <hhbyyh@gmail.com>

Closes #10102 from hhbyyh/javaAPI.
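Fix 1 above (a Java-friendly `registerStream`) follows the usual wrapper-overload pattern: provide a second overload that accepts the Java wrapper type and delegates to the Scala method. A minimal sketch in plain Scala, using simplified stand-in types rather than Spark's actual `DStream`/`JavaDStream` (all names below are illustrative, not Spark's API):

```scala
// Stand-in for DStream[T] (the Scala-facing stream type).
class Stream[T](val items: Seq[T])
// Stand-in for JavaDStream[T] (the Java-facing wrapper around a Stream).
class JavaStream[T](val stream: Stream[T])

// Mirrors the BinarySample case class introduced by this patch
// (field names here are assumptions).
case class BinarySample(isExperiment: Boolean, value: Double)

class StreamingTestSketch {
  // Scala-facing API: here it just counts samples, as a placeholder.
  def registerStream(data: Stream[BinarySample]): Int = data.items.size

  // Java-friendly overload: unwraps the Java type and delegates.
  // No erasure clash, since Stream and JavaStream erase to distinct classes.
  def registerStream(data: JavaStream[BinarySample]): Int =
    registerStream(data.stream)
}
```

Java callers can then pass the wrapper type directly instead of constructing Scala-specific stream objects.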
Diffstat (limited to 'examples')
-rw-r--r-- examples/src/main/scala/org/apache/spark/examples/mllib/StreamingTestExample.scala | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingTestExample.scala b/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingTestExample.scala
index b6677c6476..49f5df3944 100644
--- a/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingTestExample.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingTestExample.scala
@@ -18,7 +18,7 @@
 package org.apache.spark.examples.mllib
 
 import org.apache.spark.SparkConf
-import org.apache.spark.mllib.stat.test.StreamingTest
+import org.apache.spark.mllib.stat.test.{BinarySample, StreamingTest}
 import org.apache.spark.streaming.{Seconds, StreamingContext}
 import org.apache.spark.util.Utils
@@ -66,7 +66,7 @@ object StreamingTestExample {
     // $example on$
     val data = ssc.textFileStream(dataDir).map(line => line.split(",") match {
-      case Array(label, value) => (label.toBoolean, value.toDouble)
+      case Array(label, value) => BinarySample(label.toBoolean, value.toDouble)
     })
 
     val streamingTest = new StreamingTest()
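The example's change replaces an anonymous `(Boolean, Double)` pair, which surfaces in Java as `scala.Tuple2<Object, Object>` with opaque `_1()`/`_2()` accessors, with the named `BinarySample` case class. A minimal sketch of the parsing step, independent of Spark (the field names `isExperiment`/`value` are assumptions):

```scala
// Hedged sketch mirroring the BinarySample case class this patch adds.
case class BinarySample(isExperiment: Boolean, value: Double)

object ParseSketch {
  // Same pattern match as the example's map over the text stream:
  // each line is "label,value", e.g. "true,1.5".
  def parse(line: String): BinarySample = line.split(",") match {
    case Array(label, value) => BinarySample(label.toBoolean, value.toDouble)
  }
}
```

For instance, `ParseSketch.parse("true,1.5")` yields `BinarySample(true, 1.5)`, and Java callers get readable accessors on the case class instead of boxed tuple fields.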