authorSteve Loughran <stevel@apache.org>2016-08-17 11:42:57 -0700
committerMarcelo Vanzin <vanzin@cloudera.com>2016-08-17 11:43:01 -0700
commitcc97ea188e1d5b8e851d1a8438b8af092783ec04 (patch)
tree6128ca95597fd4765608e986f501dafcbbccf464 /repl/src
parent4d92af310ad29ade039e4130f91f2a3d9180deef (diff)
[SPARK-16736][CORE][SQL] purge superfluous fs calls
A review of the code, working back from Hadoop's `FileSystem.exists()` and `FileSystem.isDirectory()` code, then removing uses of those calls where superfluous.

1. `delete()` is harmless when called on a nonexistent path, so don't do any existence checks before deletes.
1. Any `FileSystem.exists()` check before `getFileStatus()` or `open()` is superfluous, as the operation itself does the check. Instead, the `FileNotFoundException` is caught and triggers the downgraded path. Where a `FileNotFoundException` was thrown before, the code still creates a new FNFE with the same error message, but now nests the inner exception for easier diagnostics.

Testing: initially, relying on Jenkins test runs. One trouble spot here is that some of the code paths are clearly error situations; it's not clear that they have coverage anyway. Creating the failure conditions in tests would be ideal, but it will also be hard.

Author: Steve Loughran <stevel@apache.org>

Closes #14371 from steveloughran/cloud/SPARK-16736-superfluous-fs-calls.
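The pattern described in point 2 can be sketched outside Hadoop with plain `java.io`; `openClassFile` is a hypothetical helper (not from the patch) mirroring the change made to `ExecutorClassLoader` below, assuming a local file path rather than a Hadoop `FileSystem`:

```scala
import java.io.{FileInputStream, FileNotFoundException, InputStream}

// Instead of probing with an exists() call and then opening (two
// filesystem round-trips, plus a time-of-check/time-of-use race),
// attempt the open directly and translate the FileNotFoundException
// into the caller-facing exception, nesting the original as the
// cause for easier diagnostics.
def openClassFile(path: String): InputStream =
  try {
    new FileInputStream(path)
  } catch {
    case e: FileNotFoundException =>
      throw new ClassNotFoundException(s"Class file not found at path $path", e)
  }
```

Against an object store, where each metadata call is a remote HTTP request, eliminating the probe halves the cost of the happy path as well as removing the race.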
Diffstat (limited to 'repl/src')
-rw-r--r-- repl/src/main/scala/org/apache/spark/repl/ExecutorClassLoader.scala | 9
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/repl/src/main/scala/org/apache/spark/repl/ExecutorClassLoader.scala b/repl/src/main/scala/org/apache/spark/repl/ExecutorClassLoader.scala
index 2f07395edf..df13b32451 100644
--- a/repl/src/main/scala/org/apache/spark/repl/ExecutorClassLoader.scala
+++ b/repl/src/main/scala/org/apache/spark/repl/ExecutorClassLoader.scala
@@ -17,7 +17,7 @@
package org.apache.spark.repl
-import java.io.{ByteArrayOutputStream, FilterInputStream, InputStream, IOException}
+import java.io.{ByteArrayOutputStream, FileNotFoundException, FilterInputStream, InputStream, IOException}
import java.net.{HttpURLConnection, URI, URL, URLEncoder}
import java.nio.channels.Channels
@@ -147,10 +147,11 @@ class ExecutorClassLoader(
private def getClassFileInputStreamFromFileSystem(fileSystem: FileSystem)(
pathInDirectory: String): InputStream = {
val path = new Path(directory, pathInDirectory)
- if (fileSystem.exists(path)) {
+ try {
fileSystem.open(path)
- } else {
- throw new ClassNotFoundException(s"Class file not found at path $path")
+ } catch {
+ case _: FileNotFoundException =>
+ throw new ClassNotFoundException(s"Class file not found at path $path")
}
}