path: root/yarn
Commits (subject, author, date, files changed, lines changed):
* [SPARK-6325] [core,yarn] Do not change target executor count when killing executors. (Marcelo Vanzin, 2015-03-18; 3 files changed, -6/+30)
  The dynamic execution code has two ways to reduce the number of executors: the first lowers the total number of executors it wants, by asking for an absolute number that is lower than the previous one; the second explicitly kills idle executors. YarnAllocator was mixing those up and lowering the target number of executors when a kill was issued. Instead, trust that the frontend knows what it's doing, and kill executors without messing with other accounting. That means that if the frontend kills an executor without lowering the target, it will get a new executor shortly. The one situation where both actions (lower the target and kill the executor) need to happen together is when user code explicitly calls `SparkContext.killExecutors`. In that case, issue two calls to the backend to achieve the goal. I also did some minor cleanup in related code:
  - avoid sending a request for executors when the target is unchanged, to avoid log spam in the AM;
  - avoid printing misleading log messages in the AM when there are no requests to cancel;
  - fix a slow memory leak, plus a misleading error message on the driver, caused by failing to completely unregister the executor.
  Author: Marcelo Vanzin <vanzin@cloudera.com>
  Closes #5018 from vanzin/SPARK-6325 and squashes the following commits: 2e782a3 [Marcelo Vanzin] Avoid redundant logging on the AM side. a3567cd [Marcelo Vanzin] Add parentheses. a363926 [Marcelo Vanzin] Update logic. a158101 [Marcelo Vanzin] [SPARK-6325] [core,yarn] Disallow reducing executor count past running count.
  (cherry picked from commit 981fbafa2a878e86abeefe1d77cca01fd848f9f6)
  Signed-off-by: Sean Owen <sowen@cloudera.com>
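A minimal sketch of the accounting described above, with made-up class and method names rather than Spark's actual YarnAllocator: killing an executor leaves the target untouched so a replacement is requested on the next cycle, while lowering the target is a separate, explicit call.

```scala
import scala.collection.mutable

// Simplified allocator model; names and structure are illustrative only.
class SimpleAllocator(initialTarget: Int) {
  private var targetNumExecutors = initialTarget
  private val runningExecutors = mutable.Set[String]()

  def registerExecutor(id: String): Unit = runningExecutors += id

  // Path 1: lower (or raise) the total number of executors we want.
  // Skipping no-op updates avoids redundant requests and log spam in the AM.
  def requestTotalExecutors(total: Int): Unit =
    if (total != targetNumExecutors) targetNumExecutors = total

  // Path 2: kill an idle executor WITHOUT touching the target,
  // so a replacement will be requested shortly.
  def killExecutor(id: String): Unit = runningExecutors -= id

  // How many new executors to ask the cluster manager for.
  def missingExecutors: Int = math.max(0, targetNumExecutors - runningExecutors.size)
}

// A SparkContext.killExecutors-style caller would issue both calls:
// allocator.requestTotalExecutors(currentTarget - 1); allocator.killExecutor(id)
```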
* Preparing development version 1.3.1-SNAPSHOT (Patrick Wendell, 2015-03-05; 1 file changed, -1/+1)
* Preparing Spark release v1.3.0-rc3 (tag: v1.3.0) (Patrick Wendell, 2015-03-05; 1 file changed, -1/+1)
* Revert "Preparing Spark release v1.3.0-rc3" (Patrick Wendell, 2015-03-05; 1 file changed, -1/+1)
  This reverts commit 6fb4af2fbeb3d1b888191a2fa1042c80e3ef2d60.
* Revert "Preparing development version 1.3.1-SNAPSHOT" (Patrick Wendell, 2015-03-05; 1 file changed, -1/+1)
  This reverts commit 5097f869efbdb75d3b87bcbd8e621e7c12356942.
* Preparing development version 1.3.1-SNAPSHOT (Patrick Wendell, 2015-03-05; 1 file changed, -1/+1)
* Preparing Spark release v1.3.0-rc3 (Patrick Wendell, 2015-03-05; 1 file changed, -1/+1)
* SPARK-6182 [BUILD] spark-parent pom needs to be published for both 2.10 and 2.11 (Sean Owen, 2015-03-05; 1 file changed, -1/+1)
  Option 1 of 2: convert the spark-parent module name to spark-parent_2.10 / spark-parent_2.11.
  Author: Sean Owen <sowen@cloudera.com>
  Closes #4912 from srowen/SPARK-6182.1 and squashes the following commits: eff60de [Sean Owen] Convert spark-parent module name to spark-parent_2.10 / spark-parent_2.11
  (cherry picked from commit c9cfba0cebe3eb546e3e96f3e5b9b89a74c5b7de)
  Signed-off-by: Patrick Wendell <patrick@databricks.com>
* Revert "Preparing Spark release v1.3.0-rc3"Patrick Wendell2015-03-041-1/+1
| | | | This reverts commit 430a879699d2d41bc65f5c00b1f239d15fd5e549.
* Revert "Preparing development version 1.3.1-SNAPSHOT"Patrick Wendell2015-03-041-1/+1
| | | | This reverts commit 0ecab40e4391d0674ac86595ec09af3b9a4ac50d.
* Preparing development version 1.3.1-SNAPSHOTPatrick Wendell2015-03-051-1/+1
|
* Preparing Spark release v1.3.0-rc3Patrick Wendell2015-03-051-1/+1
|
* Revert "Preparing Spark release v1.3.0-rc2"Patrick Wendell2015-03-041-1/+1
| | | | This reverts commit 3af26870e5163438868c4eb2df88380a533bb232.
* Revert "Preparing development version 1.3.1-SNAPSHOT"Patrick Wendell2015-03-041-1/+1
| | | | This reverts commit 05d5a29eb3193aeb57d177bafe39eb75edce72a1.
* Preparing development version 1.3.1-SNAPSHOTPatrick Wendell2015-03-031-1/+1
|
* Preparing Spark release v1.3.0-rc2Patrick Wendell2015-03-031-1/+1
|
* Revert "Preparing Spark release v1.3.0-rc1"Patrick Wendell2015-03-031-1/+1
| | | | This reverts commit f97b0d4a6b26504916816d7aefcf3132cd1da6c2.
* Revert "Preparing development version 1.3.1-SNAPSHOT"Patrick Wendell2015-03-031-1/+1
| | | | This reverts commit 2ab0ba04f66683be25cbe0e83cecf2bdcb0f13ba.
* [SPARK-6048] SparkConf should not translate deprecated configs on set (Andrew Or, 2015-03-02; 1 file changed, -1/+2)
  There are multiple issues with translating on set, outlined in the JIRA. This PR reverts the translation logic added to `SparkConf`. In the future, after the 1.3.0 release, we will figure out a way to reorganize the internal structure more elegantly. For now, let's preserve the existing semantics of `SparkConf` since it's a public interface. Unfortunately this means duplicating some code for now, but this is all internal and we can always clean it up later.
  Author: Andrew Or <andrew@databricks.com>
  Closes #4799 from andrewor14/conf-set-translate and squashes the following commits: 11c525b [Andrew Or] Move warning to driver 10e77b5 [Andrew Or] Add documentation for deprecation precedence a369cb1 [Andrew Or] Merge branch 'master' of github.com:apache/spark into conf-set-translate c26a9e3 [Andrew Or] Revert all translate logic in SparkConf fef6c9c [Andrew Or] Restore deprecation logic for spark.executor.userClassPathFirst 94b4dfa [Andrew Or] Translate on get, not set
  (cherry picked from commit 258d154c9f1afdd52dce19f03d81683ee34effac)
  Signed-off-by: Patrick Wendell <patrick@databricks.com>
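A rough sketch of the "translate on get, not set" idea named in the squashed commits; the classes and the alias table below are illustrative, not Spark's actual SparkConf internals.

```scala
import scala.collection.mutable

// Hypothetical alias table: deprecated key -> current key.
object DeprecatedKeys {
  val aliases = Map("spark.yarn.user.classpath.first" -> "spark.executor.userClassPathFirst")
}

class MiniConf {
  private val settings = mutable.Map[String, String]()

  // set() stores exactly what the caller passed in; no translation here,
  // preserving the existing public semantics of the conf object.
  def set(key: String, value: String): this.type = { settings(key) = value; this }

  // get() resolves deprecation at read time: look up the requested key,
  // then fall back to any deprecated alias that maps to it.
  def get(key: String, default: String): String =
    settings.get(key)
      .orElse(settings.collectFirst {
        case (k, v) if DeprecatedKeys.aliases.get(k).contains(key) => v
      })
      .getOrElse(default)
}
```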
* [SPARK-6050] [yarn] Relax matching of vcore count in received containers. (Marcelo Vanzin, 2015-03-02; 1 file changed, -2/+8)
  Some YARN configurations return a vcore count for allocated containers that does not match the requested resource. That means Spark would always ignore those containers. So relax the matching of the vcore count to allow the Spark jobs to run.
  Author: Marcelo Vanzin <vanzin@cloudera.com>
  Closes #4818 from vanzin/SPARK-6050 and squashes the following commits: 991c803 [Marcelo Vanzin] Remove config option, standardize on legacy behavior (no vcore matching). 8c9c346 [Marcelo Vanzin] Restrict lax matching to vcores only. 3359692 [Marcelo Vanzin] [SPARK-6050] [yarn] Add config option to do lax resource matching.
  (cherry picked from commit 6b348d90f475440c285a4b636134ffa9351580b9)
  Signed-off-by: Thomas Graves <tgraves@apache.org>
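An illustrative sketch of the matching change using plain case classes rather than the YARN API: only the memory of an allocated container is compared against the request, and the vcore count reported by the scheduler is ignored.

```scala
// Illustrative resource holder; the real code works with YARN's Resource objects.
case class ResourceSpec(memoryMb: Int, vcores: Int)

// Legacy behavior this commit standardizes on: no vcore matching, so a
// container whose reported vcores differ from the request is still accepted
// as long as it satisfies the memory requirement.
def matchesRequest(allocated: ResourceSpec, requested: ResourceSpec): Boolean =
  allocated.memoryMb >= requested.memoryMb
```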
* [SPARK-6058][Yarn] Log the user class exception in ApplicationMaster (zsxwing, 2015-02-27; 1 file changed, -2/+1)
  Because ApplicationMaster doesn't set SparkUncaughtExceptionHandler, the exception in the user class won't be logged. This PR added a `logError` for it.
  Author: zsxwing <zsxwing@gmail.com>
  Closes #4813 from zsxwing/SPARK-6058 and squashes the following commits: 806c932 [zsxwing] Log the user class exception
  (cherry picked from commit e747e98490f8ede23b0a9e0795e7445d0b597624)
  Signed-off-by: Sean Owen <sowen@cloudera.com>
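A hedged sketch of the shape of this fix: since the AM installs no uncaught-exception handler, the reflective call into the user's main() is wrapped so the real cause is logged instead of silently lost. Method names and the logging call are placeholders, not Spark's actual ApplicationMaster code.

```scala
import java.lang.reflect.{InvocationTargetException, Method}

// Placeholder for the AM thread that drives the user class.
def runUserClass(mainMethod: Method, args: Array[String]): Unit = {
  try {
    mainMethod.invoke(null, args)
  } catch {
    case e: InvocationTargetException =>
      // The user code's real exception is the cause; log it so it shows up
      // in the AM log even without an uncaught-exception handler.
      val cause = e.getCause
      System.err.println(s"User class threw exception: $cause")
      if (cause != null) cause.printStackTrace()
      throw e
  }
}
```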
* [SPARK-5951][YARN] Remove unreachable driver memory properties in yarn client mode (mohit.goyal, 2015-02-26; 1 file changed, -6/+0)
  Remove unreachable driver memory properties in yarn client mode.
  Author: mohit.goyal <mohit.goyal@guavus.com>
  Closes #4730 from zuxqoj/master and squashes the following commits: 977dc96 [mohit.goyal] remove not rechable deprecated variables in yarn client mode
  (cherry picked from commit b38dec2ffdf724ff4e181cc8c7427d074b442670)
  Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-6018] [YARN] NoSuchMethodError in Spark app is swallowed by YARN AM (Cheolsoo Park, 2015-02-26; 1 file changed, -3/+3)
  Author: Cheolsoo Park <cheolsoop@netflix.com>
  Closes #4773 from piaozhexiu/SPARK-6018 and squashes the following commits: 2a919d5 [Cheolsoo Park] Rename e with cause to avoid duplicate names 1e71d2d [Cheolsoo Park] Replace placeholder with throwable eb5750d [Cheolsoo Park] NoSuchMethodError in Spark app is swallowed by YARN AM
  (cherry picked from commit 5f3238b3b0157091d28803aa3b1d248dfa6cdc59)
  Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-5937][YARN] Fix ClientSuite to set YARN mode, so that the correct class is used in tests. (Hari Shreedharan, 2015-02-21; 1 file changed, -3/+10)
  Without this, SparkHadoopUtil is used by the Client instead of YarnSparkHadoopUtil.
  Author: Hari Shreedharan <hshreedharan@apache.org>
  Closes #4711 from harishreedharan/SPARK-5937 and squashes the following commits: d154de6 [Hari Shreedharan] Use System.clearProperty() instead of setting the value of SPARK_YARN_MODE to empty string. f729f70 [Hari Shreedharan] Fix ClientSuite to set YARN mode, so that the correct class is used in tests.
  (cherry picked from commit 7138816abe1060a1e967c4c77c72d5752586d557)
  Signed-off-by: Andrew Or <andrew@databricks.com>
* Preparing development version 1.3.1-SNAPSHOT (Patrick Wendell, 2015-02-18; 1 file changed, -1/+1)
* Preparing Spark release v1.3.0-rc1 (Patrick Wendell, 2015-02-18; 1 file changed, -1/+1)
* Revert "Preparing Spark release v1.3.0-snapshot1" (Patrick Wendell, 2015-02-17; 1 file changed, -1/+1)
  This reverts commit d97bfc6f28ec4b7acfb36410c7c167d8d3c145ec.
* Revert "Preparing development version 1.3.1-SNAPSHOT" (Patrick Wendell, 2015-02-17; 1 file changed, -1/+1)
  This reverts commit e57c81b8c1a6581c2588973eaf30d3c7ae90ed0c.
* [SPARK-5759][Yarn] ExecutorRunnable should catch YarnException while NMClient starts containers (lianhuiwang, 2015-02-12; 1 file changed, -2/+8)
  Sometimes, for various reasons, NMClient throws an exception while starting containers. For example, if spark_shuffle is not configured on some machines, it throws: java.lang.Error: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:spark_shuffle does not exist. Because YarnAllocator uses a ThreadPoolExecutor to start containers, we cannot tell which container or hostname threw the exception. ExecutorRunnable should therefore catch YarnException when starting a container, so that the container id and hostname of a failed container are known.
  Author: lianhuiwang <lianhuiwang09@gmail.com>
  Closes #4554 from lianhuiwang/SPARK-5759 and squashes the following commits: caf5a99 [lianhuiwang] use SparkException to wrap exception c02140f [lianhuiwang] ExecutorRunnable should catch YarnException while NMClient start container
  (cherry picked from commit 947b8bd82ec0f4c45910e6d781df4661f56e4587)
  Signed-off-by: Andrew Or <andrew@databricks.com>
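A simplified sketch of the wrapping described above (not the real ExecutorRunnable): the container launch is guarded so a YARN-side failure is rethrown with the container id and host attached, instead of surfacing as an anonymous error from the allocator's thread pool.

```scala
// `launch` stands in for the NMClient.startContainer(...) call.
def startContainer(containerId: String, host: String)(launch: => Unit): Unit = {
  try {
    launch
  } catch {
    // The real fix catches org.apache.hadoop.yarn.exceptions.YarnException and
    // wraps it in a SparkException; a generic wrapper keeps this sketch
    // dependency-free.
    case e: Exception =>
      throw new RuntimeException(
        s"Exception while starting container $containerId on host $host", e)
  }
}
```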
* Preparing development version 1.3.1-SNAPSHOT (Patrick Wendell, 2015-02-11; 1 file changed, -1/+1)
* Preparing Spark release v1.3.0-snapshot1 (Patrick Wendell, 2015-02-11; 1 file changed, -1/+1)
* Revert "Preparing Spark release v1.3.0-snapshot1" (Patrick Wendell, 2015-02-10; 1 file changed, -1/+1)
  This reverts commit 53068f56f40bf03b7fc52e5980fb7e205903fc8b.
* Revert "Preparing development version 1.3.1-SNAPSHOT" (Patrick Wendell, 2015-02-10; 1 file changed, -1/+1)
  This reverts commit ba12b793f1f4f432e71439e2a7ebacce74d9c472.
* Preparing development version 1.3.1-SNAPSHOT (Patrick Wendell, 2015-02-11; 1 file changed, -1/+1)
* Preparing Spark release v1.3.0-snapshot1 (Patrick Wendell, 2015-02-11; 1 file changed, -1/+1)
* Revert "Preparing Spark release v1.3.0-snapshot1" (Patrick Wendell, 2015-02-10; 1 file changed, -1/+1)
  This reverts commit c2e4001030cfb881ff33d448fc0aeaf4f05dad0f.
* Revert "Preparing development version 1.3.1-SNAPSHOT" (Patrick Wendell, 2015-02-10; 1 file changed, -1/+1)
  This reverts commit db80d0fe21daa3202ff217cbefb999ce77c5aa9e.
* Preparing development version 1.3.1-SNAPSHOT (Patrick Wendell, 2015-02-11; 1 file changed, -1/+1)
* Preparing Spark release v1.3.0-snapshot1 (Patrick Wendell, 2015-02-11; 1 file changed, -1/+1)
* SPARK-5613: Catch the ApplicationNotFoundException exception to avoid thread from getting killed on yarn restart. (Kashish Jain, 2015-02-10; 1 file changed, -2/+9)
  Added a catch block for ApplicationNotFoundException. Without it, the thread is killed when this exception occurs. The exception is raised when YARN restarts and tries to find the application id of a Spark job that was interrupted by YARN being stopped. See the stack trace in the bug for more details.
  Author: Kashish Jain <kashish.jain@guavus.com>
  Closes #4392 from kasjain/branch-1.2 and squashes the following commits: 4831000 [Kashish Jain] SPARK-5613: Catch the ApplicationNotFoundException exception to avoid thread from getting killed on yarn restart.
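A sketch of where such a catch block fits; the loop and state strings are simplified stand-ins for the real YARN client monitoring code, and the import requires hadoop-yarn-api on the classpath. An ApplicationNotFoundException, e.g. after a YARN restart, is treated as a terminal state instead of being allowed to kill the monitoring thread.

```scala
import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException

// getState is a stand-in for asking the ResourceManager for the app report.
def monitorApplication(getState: () => String): String = {
  try {
    var state = getState()
    while (state != "FINISHED" && state != "FAILED" && state != "KILLED") {
      Thread.sleep(1000)
      state = getState()
    }
    state
  } catch {
    case _: ApplicationNotFoundException =>
      // YARN no longer knows this application id (e.g. it was restarted while
      // the job was interrupted); report a terminal state instead of dying.
      "KILLED"
  }
}
```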
* [SPARK-2996] Implement userClassPathFirst for driver, yarn. (Marcelo Vanzin, 2015-02-09; 6 files changed, -168/+301)
  Yarn's config option `spark.yarn.user.classpath.first` does not work the same way as `spark.files.userClassPathFirst`; Yarn's version is a lot more dangerous, in that it modifies the system classpath, instead of restricting the changes to the user's class loader. So this change implements the behavior of the latter for Yarn, and deprecates the more dangerous choice.
  To be able to achieve feature-parity, I also implemented the option for drivers (the existing option only applies to executors). So now there are two options, each controlling whether to apply userClassPathFirst to the driver or executors. The old option was deprecated, and aliased to the new one (`spark.executor.userClassPathFirst`).
  The existing "child-first" class loader also had to be fixed. It didn't handle resources, and it was also doing some things that ended up causing JVM errors depending on how things were being called.
  Author: Marcelo Vanzin <vanzin@cloudera.com>
  Closes #3233 from vanzin/SPARK-2996 and squashes the following commits: 9cf9cf1 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 a1499e2 [Marcelo Vanzin] Remove SPARK_HOME propagation. fa7df88 [Marcelo Vanzin] Remove 'test.resource' file, create it dynamically. a8c69f1 [Marcelo Vanzin] Review feedback. cabf962 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 a1b8d7e [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 3f768e3 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 2ce3c7a [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 0e6d6be [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 70d4044 [Marcelo Vanzin] Fix pyspark/yarn-cluster test. 0fe7777 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 0e6ef19 [Marcelo Vanzin] Move class loaders around and make names more meaninful. fe970a7 [Marcelo Vanzin] Review feedback. 25d4fed [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 3cb6498 [Marcelo Vanzin] Call the right loadClass() method on the parent. fbb8ab5 [Marcelo Vanzin] Add locking in loadClass() to avoid deadlocks. 2e6c4b7 [Marcelo Vanzin] Mention new setting in documentation. b6497f9 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 a10f379 [Marcelo Vanzin] Some feedback. 3730151 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 f513871 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 44010b6 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 7b57cba [Marcelo Vanzin] Remove now outdated message. 5304d64 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 35949c8 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 54e1a98 [Marcelo Vanzin] Merge branch 'master' into SPARK-2996 d1273b2 [Marcelo Vanzin] Add test file to rat exclude. fa1aafa [Marcelo Vanzin] Remove write check on user jars. 89d8072 [Marcelo Vanzin] Cleanups. a963ea3 [Marcelo Vanzin] Implement spark.driver.userClassPathFirst for standalone cluster mode. 50afa5f [Marcelo Vanzin] Fix Yarn executor command line. 7d14397 [Marcelo Vanzin] Register user jars in executor up front. 7f8603c [Marcelo Vanzin] Fix yarn-cluster mode without userClassPathFirst. 20373f5 [Marcelo Vanzin] Fix ClientBaseSuite. 55c88fa [Marcelo Vanzin] Run all Yarn integration tests via spark-submit. 0b64d92 [Marcelo Vanzin] Add deprecation warning to yarn option. 4a84d87 [Marcelo Vanzin] Fix the child-first class loader. d0394b8 [Marcelo Vanzin] Add "deprecated configs" to SparkConf. 46d8cf2 [Marcelo Vanzin] Update doc with new option, change name to "userClassPathFirst". a314f2d [Marcelo Vanzin] Enable driver class path isolation in SparkSubmit. 91f7e54 [Marcelo Vanzin] [yarn] Enable executor class path isolation. a853e74 [Marcelo Vanzin] Re-work CoarseGrainedExecutorBackend command line arguments. 89522ef [Marcelo Vanzin] Add class path isolation support for Yarn cluster mode.
  (cherry picked from commit 20a6013106b56a1a1cc3e8cda092330ffbe77cc3)
  Signed-off-by: Andrew Or <andrew@databricks.com>
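A minimal child-first class loader sketch to illustrate the "restrict changes to the user's class loader" approach; Spark's real loader also handles resources, locking, and delegation details that this sketch omits, and the class name is illustrative.

```scala
import java.net.{URL, URLClassLoader}

// User jars are searched first; only on failure do we fall back to the
// parent (system / Spark) classpath.
class ChildFirstClassLoader(userJars: Array[URL], realParent: ClassLoader)
  extends URLClassLoader(userJars, null) {  // null parent: skip normal delegation

  override protected def loadClass(name: String, resolve: Boolean): Class[_] = {
    try {
      super.loadClass(name, resolve)        // bootstrap classes + user jars
    } catch {
      case _: ClassNotFoundException =>
        realParent.loadClass(name)          // fall back to the real parent
    }
  }
}
```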
* SPARK-4267 [YARN] Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later (Sean Owen, 2015-02-09; 3 files changed, -14/+18)
  Before passing to YARN, escape arguments in "extraJavaOptions" args, in order to correctly handle cases like -Dfoo="one two three". Also standardize how these args are handled and ensure that individual args are treated as stand-alone args, not one string. vanzin andrewor14
  Author: Sean Owen <sowen@cloudera.com>
  Closes #4452 from srowen/SPARK-4267.2 and squashes the following commits: c8297d2 [Sean Owen] Before passing to YARN, escape arguments in "extraJavaOptions" args, in order to correctly handle cases like -Dfoo="one two three". Also standardize how these args are handled and ensure that individual args are treated as stand-alone args, not one string.
  (cherry picked from commit de7806048ac49a8bfdf44d8f87bc11cea1dfb242)
  Signed-off-by: Andrew Or <andrew@databricks.com>
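A hedged sketch of the escaping idea: a generic POSIX shell single-quoting helper, not the exact escaping Spark uses. Each option stays a stand-alone argument, and values with spaces such as -Dfoo="one two three" survive YARN's container launch script.

```scala
// Single-quote an argument for a POSIX shell, escaping embedded single quotes.
def escapeForLaunch(arg: String): String =
  "'" + arg.replace("'", "'\\''") + "'"

// Each extraJavaOptions entry is kept as its own argument, then escaped.
val extraJavaOptions = Seq("-Dfoo=one two three", "-verbose:gc")
val escaped = extraJavaOptions.map(escapeForLaunch)
// escaped: Seq('-Dfoo=one two three', '-verbose:gc')
```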
* SPARK-2450 Adds executor log links to Web UI (Kostas Sakellis, 2015-02-06; 2 files changed, -5/+38)
  Adds links to stderr/stdout in the executor tab of the web UI for:
  1) Standalone
  2) Yarn client
  3) Yarn cluster
  This tries to add the log URL support in a general way, so as to make it easy to add support for all the cluster managers. This is done by using environment variables to pass the log URLs to the executor. The SPARK_LOG_URL_ prefix is used, so additional logs besides stderr/stdout can also be added. To propagate this information to the UI we use the onExecutorAdded Spark listener event. Although this commit doesn't add log URLs when running on a Mesos cluster, it should be possible to add them using the same mechanism.
  Author: Kostas Sakellis <kostas@cloudera.com>
  Author: Josh Rosen <joshrosen@databricks.com>
  Closes #3486 from ksakellis/kostas-spark-2450 and squashes the following commits: d190936 [Josh Rosen] Fix a few minor style / formatting nits. Reset listener after each test Don't null listener out at end of main(). 8673fe1 [Kostas Sakellis] CR feedback. Hide the log column if there are no logs available 5bf6952 [Kostas Sakellis] [SPARK-2450] [CORE] Adds exeuctor log links to Web UI
  (cherry picked from commit 32e964c410e7083b43264c46291e93cd206a8038)
  Signed-off-by: Josh Rosen <joshrosen@databricks.com>
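A sketch of the environment-variable handoff described above: every variable with the SPARK_LOG_URL_ prefix is collected into a map keyed by the lower-cased suffix. The function and variable names here are illustrative rather than Spark's exact API.

```scala
// e.g. SPARK_LOG_URL_STDOUT=http://nm:8042/.../stdout and
//      SPARK_LOG_URL_STDERR=http://nm:8042/.../stderr
// become Map("stdout" -> ..., "stderr" -> ...) for the executor-added event.
def extractLogUrls(env: Map[String, String] = sys.env): Map[String, String] = {
  val prefix = "SPARK_LOG_URL_"
  env.collect {
    case (k, v) if k.startsWith(prefix) => k.stripPrefix(prefix).toLowerCase -> v
  }
}
```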
* SPARK-4337. [YARN] Add ability to cancel pending requests (Sandy Ryza, 2015-02-06; 2 files changed, -30/+89)
  Author: Sandy Ryza <sandy@cloudera.com>
  Closes #4141 from sryza/sandy-spark-4337 and squashes the following commits: a98bd20 [Sandy Ryza] Andrew's comments cdaab7f [Sandy Ryza] SPARK-4337. Add ability to cancel pending requests to YARN
  (cherry picked from commit 1a88f20de798030a7d5713bd267f612ba5617fca)
  Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-5653][YARN] In ApplicationMaster rename isDriver to isClusterMode (lianhuiwang, 2015-02-06; 1 file changed, -12/+12)
  In ApplicationMaster, rename isDriver to isClusterMode. Client already uses isClusterMode, so ApplicationMaster should stay consistent with it and use isClusterMode as well; the name is also easier to understand. andrewor14 sryza
  Author: lianhuiwang <lianhuiwang09@gmail.com>
  Closes #4430 from lianhuiwang/am-isDriver-rename and squashes the following commits: f9f3ed0 [lianhuiwang] rename isDriver to isClusterMode
  (cherry picked from commit cc6e53119d7a51b95b19244f50b25814088b4d11)
  Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-5157][YARN] Configure more JVM options properly when we use ConcMarkSweepGC for AM. (Kousuke Saruta, 2015-02-06; 1 file changed, -0/+2)
  When we set `SPARK_USE_CONC_INCR_GC`, ConcurrentMarkSweepGC is used for the AM. If ConcurrentMarkSweepGC is set for the JVM, the following JVM options are set automatically and implicitly:
  * MaxTenuringThreshold=0
  * SurvivorRatio=1024
  Those are not proper values for most cases. See also http://www.oracle.com/technetwork/java/tuning-139912.html
  Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
  Closes #3956 from sarutak/SPARK-5157 and squashes the following commits: c15da4e [Kousuke Saruta] Set more JVM options for AM when enabling CMS
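An illustrative sketch of setting companion GC options explicitly when CMS is requested for the AM. The environment-variable check mirrors the one named above, but the option values shown are placeholders, not the ones this commit chose.

```scala
val useConcIncrGC =
  java.lang.Boolean.parseBoolean(sys.env.getOrElse("SPARK_USE_CONC_INCR_GC", "false"))

val amGcOpts: Seq[String] =
  if (useConcIncrGC) {
    Seq(
      "-XX:+UseConcMarkSweepGC",
      "-XX:MaxTenuringThreshold=31",  // placeholder; avoids the implicit 0
      "-XX:SurvivorRatio=8"           // placeholder; avoids the implicit 1024
    )
  } else {
    Seq.empty
  }
// amGcOpts would then be appended to the AM's java command line.
```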
* SPARK-3996: Add jetty servlet and continuations. (Patrick Wendell, 2015-02-02; 1 file changed, -0/+4)
  These are needed transitively from the other Jetty libraries we include. It was not picked up by unit tests because we disable the UI.
  Author: Patrick Wendell <patrick@databricks.com>
  Closes #4323 from pwendell/jetty and squashes the following commits: d8669da [Patrick Wendell] SPARK-3996: Add jetty servlet and continuations.
* Spark 3883: SSL support for HttpServer and Akka (Jacek Lewandowski, 2015-02-02; 2 files changed, -3/+8)
  SPARK-3883: SSL support for Akka connections and Jetty based file servers. This story introduced the following changes:
  - Introduced an SSLOptions object which holds the SSL configuration and can build the appropriate configuration for Akka or Jetty. SSLOptions can be created by parsing SparkConf entries at a specified namespace.
  - SSLOptions is created and kept by SecurityManager.
  - All Akka actor address creation snippets based on interpolated strings were replaced by dedicated methods from AkkaUtils. Those methods select the proper Akka protocol (akka.tcp or akka.ssl.tcp).
  - Added test cases for AkkaUtils, FileServer, SSLOptions and SecurityManager.
  - Added a way for executors and the driver to use node-local SSL configuration in standalone mode, by specifying spark.ssl.useNodeLocalConf in SparkConf.
  - Made CoarseGrainedExecutorBackend not overwrite the executor startup settings, since they are passed from the Worker anyway.
  Refer to https://github.com/apache/spark/pull/3571 for discussion and details. A sketch of the namespace-parsing idea follows below.
  Author: Jacek Lewandowski <lewandowski.jacek@gmail.com>
  Author: Jacek Lewandowski <jacek.lewandowski@datastax.com>
  Closes #3571 from jacek-lewandowski/SPARK-3883-master and squashes the following commits: 9ef4ed1 [Jacek Lewandowski] Merge pull request #2 from jacek-lewandowski/SPARK-3883-docs2 fb31b49 [Jacek Lewandowski] SPARK-3883: Added SSL setup documentation 2532668 [Jacek Lewandowski] SPARK-3883: Refactored AkkaUtils.protocol method to not use Try 90a8762 [Jacek Lewandowski] SPARK-3883: Refactored methods to resolve Akka address and made it possible to easily configure multiple communication layers for SSL 72b2541 [Jacek Lewandowski] SPARK-3883: A reference to the fallback SSLOptions can be provided when constructing SSLOptions 93050f4 [Jacek Lewandowski] SPARK-3883: SSL support for HttpServer and Akka
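A sketch of the "parse SSL options from a SparkConf namespace" idea in the first bullet above; the case class fields, keys, and defaults are illustrative rather than Spark's actual SSLOptions, apart from spark.ssl.useNodeLocalConf which the commit message names.

```scala
// Minimal stand-in for SSLOptions, built from flat config entries under a namespace.
case class SimpleSSLOptions(
    enabled: Boolean,
    keyStore: Option[String],
    keyStorePassword: Option[String],
    useNodeLocalConf: Boolean)

def parseSSLOptions(conf: Map[String, String], ns: String = "spark.ssl"): SimpleSSLOptions =
  SimpleSSLOptions(
    enabled = conf.getOrElse(s"$ns.enabled", "false").toBoolean,
    keyStore = conf.get(s"$ns.keyStore"),
    keyStorePassword = conf.get(s"$ns.keyStorePassword"),
    useNodeLocalConf = conf.getOrElse(s"$ns.useNodeLocalConf", "false").toBoolean)
```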
* [HOTFIX] Add jetty references to build for YARN module. (Patrick Wendell, 2015-02-02; 1 file changed, -0/+24)
* [SPARK-5530] Add executor container to executorIdToContainer (Xutingjun, 2015-02-02; 1 file changed, -0/+1)
  When the killExecutor method is called, it always falls through to the else branch, because the executorIdToContainer map is never populated with any value.
  Author: Xutingjun <1039320815@qq.com>
  Closes #4309 from XuTingjun/dynamicAllocator and squashes the following commits: c823418 [Xutingjun] fix bug
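A simplified sketch of the bookkeeping this fix adds: record which YARN container backs each executor when it launches, so a later kill request can actually find (and release) the container instead of always hitting the else branch. Class and parameter names are illustrative.

```scala
import scala.collection.mutable

class ContainerBookkeeping {
  private val executorIdToContainer = mutable.HashMap[String, String]()

  // Called when an executor is launched in a container: this is the `put`
  // that was missing before the fix.
  def onExecutorLaunched(executorId: String, containerId: String): Unit =
    executorIdToContainer(executorId) = containerId

  // `release` stands in for telling the AM/RM to release the container.
  def killExecutor(executorId: String)(release: String => Unit): Unit =
    executorIdToContainer.remove(executorId) match {
      case Some(containerId) => release(containerId)
      case None => // unknown executor: nothing to release (the old "else" branch)
    }
}
```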