author | Patrick Wendell <pwendell@gmail.com> | 2013-12-31 10:12:51 -0800
---|---|---
committer | Patrick Wendell <pwendell@gmail.com> | 2013-12-31 10:12:51 -0800
commit | 55b7e2fdffc6c3537da69152a3d02d5be599fa1b (patch)
tree | 6ac21f7d330f020b017534cdc4cfd249426015d5 /pom.xml
parent | 50e3b8ec4c8150f1cfc6b92f8871f520adf2cfda (diff)
parent | fcd17a1e8ef1d0f106e845f4de99533d61cd8695 (diff)
Merge pull request #289 from tdas/filestream-fix
Bug fixes for file input stream and checkpointing
- Fixed bugs in the file input stream that caused the stream to fail on transient HDFS errors (e.g., listing files while a background thread is deleting them caused errors).
- Updated Spark's CheckpointRDD and Streaming's CheckpointWriter to use SparkContext.hadoopConfiguration, so that checkpoints can be written to any HDFS-compatible store that requires special configuration.
- Changed the API of SparkContext.setCheckpointDir() - eliminated the unnecessary 'useExisting' parameter. SparkContext now always creates a unique subdirectory within the user-specified checkpoint directory, ensuring that previous checkpoint files are not accidentally overwritten.
- Fixed a bug where setting the checkpoint directory to a relative local path caused checkpointing to fail.
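The unique-subdirectory behavior described above can be sketched as follows. This is a minimal Python sketch, not Spark's actual (Scala) implementation; the helper name `set_checkpoint_dir` is hypothetical. The point is only that each call creates a fresh subdirectory under the user-specified base directory, so earlier checkpoint files can never be clobbered:

```python
import os
import uuid


def set_checkpoint_dir(base_dir):
    """Hypothetical sketch of the new setCheckpointDir behavior:
    always create a unique subdirectory under the user-specified
    checkpoint directory instead of writing into it directly."""
    subdir = os.path.join(base_dir, str(uuid.uuid4()))
    os.makedirs(subdir)  # fresh directory per call; never reuses an old one
    return subdir
```

In Spark itself the change is simply that the second boolean argument is gone: callers write `sc.setCheckpointDir(dir)` with no `useExisting` flag, and Spark handles uniqueness internally.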
Diffstat (limited to 'pom.xml')
0 files changed, 0 insertions, 0 deletions