author    Matt Hagen <anonz3000@gmail.com>     2015-09-22 21:14:25 -0700
committer Xiangrui Meng <meng@databricks.com>  2015-09-22 21:14:25 -0700
commit  558e9c7e60a7c0d85ba26634e97562ad2163e91d (patch)
tree    f98f30b7340db930393cadd6b5816208efcae6d4 /docs/ml-guide.md
parent  84f81e035e1dab1b42c36563041df6ba16e7b287 (diff)
[SPARK-10663] Removed unnecessary invocation of DataFrame.toDF method.

The Scala example under the "Example: Pipeline" heading in this document
initializes the "test" variable to a DataFrame. Because test is already a
DataFrame, there is no need to call test.toDF as the example does in a
subsequent line: model.transform(test.toDF). So, I removed the extraneous
toDF invocation.

Author: Matt Hagen <anonz3000@gmail.com>

Closes #8875 from hagenhaus/SPARK-10663.
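For context, a minimal sketch of the pattern the patch removes. This is illustrative only: the sample rows are made up, and a SQLContext named sqlContext plus a fitted PipelineModel named model are assumed to be in scope, as in the guide's Pipeline example.

// Sketch only; assumes sqlContext (SQLContext) and model (fitted PipelineModel)
// already exist, and uses illustrative rows rather than the guide's exact data.
val test = sqlContext.createDataFrame(Seq(
  (4L, "spark i j k"),
  (5L, "l m n")
)).toDF("id", "text")   // already a DataFrame with columns "id" and "text"

// Calling test.toDF with no arguments merely returns an equivalent DataFrame,
// so the extra call can be dropped and test passed directly:
model.transform(test)
  .select("id", "text", "probability", "prediction")
  .show()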
Diffstat (limited to 'docs/ml-guide.md')
-rw-r--r--  docs/ml-guide.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/ml-guide.md b/docs/ml-guide.md
index 0427ac6695..fd3a6167bc 100644
--- a/docs/ml-guide.md
+++ b/docs/ml-guide.md
@@ -475,7 +475,7 @@ val test = sqlContext.createDataFrame(Seq(
)).toDF("id", "text")
// Make predictions on test documents.
-model.transform(test.toDF)
+model.transform(test)
.select("id", "text", "probability", "prediction")
.collect()
.foreach { case Row(id: Long, text: String, prob: Vector, prediction: Double) =>