author    dding3 <ding.ding@intel.com>  2016-05-27 21:01:50 -0500
committer Sean Owen <sowen@cloudera.com>  2016-05-27 21:01:50 -0500
commit    88c9c467a31630c558719679ca0894873a268b27 (patch)
tree      c9d0db10251a19814d9c2afcc7dad6a74f919e2e /examples
parent    5d4dafe8fdea49dcbd6b0e4c23e3791fa30c8911 (diff)
[SPARK-15562][ML] Delete temp directory after program exit in DataFrameExample
## What changes were proposed in this pull request?

The temp directory used to save records is not deleted after program exit in DataFrameExample. Although the example calls deleteOnExit, it does not work because the directory is not empty. A similar issue happened in ContextCleanerSuite. Update the code to make sure the temp directory is deleted after program exit.

## How was this patch tested?

Unit tests and local build.

Author: dding3 <ding.ding@intel.com>

Closes #13328 from dding3/master.
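The point the commit message makes is that java.io.File.deleteOnExit only removes the single path it was registered on, so a temp directory that still contains files survives JVM exit. The following standalone Scala sketch is illustrative only (DeleteOnExitSketch, deleteRecursively, and the file names are invented for this example and are not part of the patch); it contrasts the ineffective deleteOnExit call with a recursive delete in a shutdown hook, which is the general pattern that does clean up a non-empty directory.

```scala
import java.io.File
import java.nio.file.Files

// Illustrative sketch: why deleteOnExit leaves a non-empty directory behind,
// and how a recursive delete in a shutdown hook removes it.
object DeleteOnExitSketch {

  // Delete children first, then the directory itself.
  def deleteRecursively(f: File): Unit = {
    Option(f.listFiles()).foreach(_.foreach(deleteRecursively))
    f.delete()
  }

  def main(args: Array[String]): Unit = {
    val tmpDir = Files.createTempDirectory("sketch").toFile
    new File(tmpDir, "records.parquet").createNewFile()

    // No effect at exit: File.delete() fails on a non-empty directory,
    // and deleteOnExit was registered only for tmpDir, not its contents.
    tmpDir.deleteOnExit()

    // This does remove the directory and its contents at JVM shutdown.
    sys.addShutdownHook(deleteRecursively(tmpDir))

    println(s"Temp directory: $tmpDir")
  }
}
```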
Diffstat (limited to 'examples')
-rw-r--r--  examples/src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala  4
1 file changed, 2 insertions, 2 deletions
diff --git a/examples/src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala b/examples/src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala
index c69027babb..11faa6192b 100644
--- a/examples/src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala
@@ -28,6 +28,7 @@ import org.apache.spark.ml.linalg.Vector
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
+import org.apache.spark.util.Utils
/**
* An example of how to use [[org.apache.spark.sql.DataFrame]] for ML. Run with
@@ -86,8 +87,7 @@ object DataFrameExample {
println(s"Selected features column with average values:\n ${featureSummary.mean.toString}")
// Save the records in a parquet file.
- val tmpDir = Files.createTempDir()
- tmpDir.deleteOnExit()
+ val tmpDir = Utils.createTempDir()
val outputDir = new File(tmpDir, "dataframe").toString
println(s"Saving to $outputDir as Parquet file.")
df.write.parquet(outputDir)
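Note on the replacement: the patch swaps Files.createTempDir() (presumably Guava's helper, imported elsewhere in the file) plus File.deleteOnExit() for Spark's internal org.apache.spark.util.Utils.createTempDir(), hence the added import. Utils.createTempDir registers the directory for recursive deletion at JVM shutdown, which is why the one-line change is enough to clean up the non-empty "dataframe" output directory written by df.write.parquet.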