author    | Kousuke Saruta <sarutak@oss.nttdata.co.jp> | 2014-08-20 13:26:11 -0700
committer | Michael Armbrust <michael@databricks.com> | 2014-08-20 13:26:11 -0700
commit    | 0ea46ac80089e9091d247704b17afbc423c0060d (patch)
tree      | e9f926eb2e96a9dda79d7d0160b65a5297fa0f24 /core
parent    | cf46e725814f575ebb417e80d2571bccc6dac4a7 (diff)
[SPARK-3062] [SPARK-2970] [SQL] spark-sql script ends with IOException when EventLogging is enabled
#1891 was intended to avoid an IOException when EventLogging is enabled.
That solution used ShutdownHookManager, but it is defined only in Hadoop 2.x; Hadoop 1.x does not have ShutdownHookManager, so #1891 does not compile on Hadoop 1.x.
This PR is a compromise solution that works on both Hadoop 1.x and 2.x:
a unique FileSystem object is created solely for FileLogger.
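To see why a private, uncached FileSystem instance avoids the IOException, here is a minimal sketch of Hadoop's FileSystem caching behavior. This is a toy model, not Spark or Hadoop code: `ToyFileSystem`, `get`, and `closeAll` are hypothetical stand-ins for the cache keyed by URI scheme that `FileSystem.get` uses, the `fs.hdfs.impl.disable.cache` flag, and the `FileSystem.closeAll` call that a shutdown hook (e.g. from Hive/Hadoop) can trigger while Spark's event logger is still writing.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Toy model of Hadoop's FileSystem cache: FileSystem.get normally returns a
// shared, cached instance per scheme, and FileSystem.closeAll closes them all.
class ToyFileSystem {
    private static final Map<String, ToyFileSystem> CACHE = new HashMap<>();
    boolean closed = false;

    static ToyFileSystem get(URI uri, boolean disableCache) {
        if (disableCache) {
            // Analogue of fs.hdfs.impl.disable.cache=true:
            // a fresh instance that is never placed in the shared cache.
            return new ToyFileSystem();
        }
        return CACHE.computeIfAbsent(uri.getScheme(), k -> new ToyFileSystem());
    }

    // Analogue of FileSystem.closeAll, as invoked from a shutdown hook:
    // every cached instance is closed, whether or not others still use it.
    static void closeAll() {
        for (ToyFileSystem fs : CACHE.values()) fs.closed = true;
        CACHE.clear();
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        URI hdfs = URI.create("hdfs://namenode:8020/eventLogs");
        ToyFileSystem shared = ToyFileSystem.get(hdfs, false);
        ToyFileSystem unique = ToyFileSystem.get(hdfs, true); // FileLogger-style private instance
        ToyFileSystem.closeAll(); // another module's shutdown hook fires
        System.out.println("shared closed: " + shared.closed);
        System.out.println("unique closed: " + unique.closed);
    }
}
```

The shared instance is closed out from under its users (the source of the original IOException), while the uncached instance survives; FileLogger's fix takes the second path for `hdfs` URIs.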
Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
Closes #1970 from sarutak/SPARK-2970 and squashes the following commits:
240c91e [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2970
0e7b45d [Kousuke Saruta] Revert "[SPARK-2970] [SQL] spark-sql script ends with IOException when EventLogging is enabled"
e1262ec [Kousuke Saruta] Modified Filelogger to use unique FileSystem instance
Diffstat (limited to 'core')
-rw-r--r-- | core/src/main/scala/org/apache/spark/util/FileLogger.scala | 15 |
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/core/src/main/scala/org/apache/spark/util/FileLogger.scala b/core/src/main/scala/org/apache/spark/util/FileLogger.scala
index 2e8fbf5a91..ad8b79af87 100644
--- a/core/src/main/scala/org/apache/spark/util/FileLogger.scala
+++ b/core/src/main/scala/org/apache/spark/util/FileLogger.scala
@@ -52,7 +52,20 @@ private[spark] class FileLogger(
     override def initialValue(): SimpleDateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss")
   }

-  private val fileSystem = Utils.getHadoopFileSystem(logDir)
+  /**
+   * To avoid effects of FileSystem#close or FileSystem.closeAll called from other modules,
+   * create unique FileSystem instance only for FileLogger
+   */
+  private val fileSystem = {
+    val conf = SparkHadoopUtil.get.newConfiguration()
+    val logUri = new URI(logDir)
+    val scheme = logUri.getScheme
+    if (scheme == "hdfs") {
+      conf.setBoolean("fs.hdfs.impl.disable.cache", true)
+    }
+    FileSystem.get(logUri, conf)
+  }
+
   var fileIndex = 0

   // Only used if compression is enabled