author    Christopher Vogt <oss.nsp@cvogt.org>  2016-11-27 16:00:58 -0500
committer Christopher Vogt <oss.nsp@cvogt.org>  2017-02-01 23:10:49 -0500
commit    e56f8afa03035140280bc8d3d878ad225def381e
tree      c04ee4dee271ebf84777c0b47a639fde475fa9bf /stage1/KeyLockedLazyCache.scala
parent    00d9485f5597fdecc58461bd81df635fafbe494f
replace flawed concurrent hashmap cache with consistent replacement
The concurrent hashmap approach to classloader caching was flawed.
Assume two concurrently running builds A and B, and projects P2 and P3
both depending on project P1. Now assume this time sequence: A compiles
P1, then compiles P2; P1's sources change; B compiles P1; then A
compiles P3. At the end, P2 and P3 have different versions of P1 as
their parent classloaders. This is inconsistent.
The easiest way to work around this is to make sure only one thread
changes the classloader cache during its entire run. That means either
no concurrency at all, or what we have done here: each thread works on
a copy of the cache and replaces the original cache at the end using an
atomic operation. The thread that finishes last wins, but for caching
that's fine. In the worst case, some things aren't cached during a
concurrent execution.
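The copy-then-replace scheme above can be sketched roughly as follows. This is a minimal illustration, not CBT's actual code; the names `ClassLoaderCache`, `snapshot`, and `publish` are hypothetical:

```scala
// Hypothetical sketch of the copy-then-atomic-replace scheme described
// in the commit message. Names are illustrative, not CBT's actual API.
import java.util.concurrent.atomic.AtomicReference

object ClassLoaderCache {
  // the shared cache; replaced wholesale, never mutated in place
  private val global =
    new AtomicReference(Map.empty[String, AnyRef])

  // each build thread takes its own immutable snapshot to work on...
  def snapshot(): Map[String, AnyRef] = global.get

  // ...and publishes its (possibly extended) copy when it finishes.
  // Last writer wins; a lost update only costs a missed cache entry.
  def publish(updated: Map[String, AnyRef]): Unit = global.set(updated)
}
```

Because no two threads ever mutate the same snapshot, the per-thread copy needs no synchronization of its own, which is exactly why the `ConcurrentHashMap` in the diff below can be relaxed to a plain `java.util.Map`.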
This change also means that we no longer need concurrent hashmaps for
the classloader cache, since no two threads will access the same
hashmap. We still need a concurrent hashmap for the class caches inside
the classloaders, as multiple threads can access the same classloaders.
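The per-classloader class cache, by contrast, stays concurrent because one classloader can be shared by several build threads. A minimal sketch of such a cache (the class `CachingClassLoader` is hypothetical, not CBT's implementation):

```scala
// Hypothetical sketch: the class cache inside a shared classloader
// must remain concurrent, unlike the per-thread classloader cache.
import java.util.concurrent.ConcurrentHashMap

class CachingClassLoader(parent: ClassLoader) extends ClassLoader(parent) {
  private val classes = new ConcurrentHashMap[String, Class[_]]

  override def loadClass(name: String): Class[_] =
    // computeIfAbsent is atomic, so two threads racing on the same
    // name still observe a single cached Class instance
    classes.computeIfAbsent(name, n => super.loadClass(n))
}
```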
Diffstat (limited to 'stage1/KeyLockedLazyCache.scala')
 stage1/KeyLockedLazyCache.scala | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/stage1/KeyLockedLazyCache.scala b/stage1/KeyLockedLazyCache.scala
index 2602523..2047b81 100644
--- a/stage1/KeyLockedLazyCache.scala
+++ b/stage1/KeyLockedLazyCache.scala
@@ -1,7 +1,5 @@
 package cbt
 
-import java.util.concurrent.ConcurrentHashMap
-
 private[cbt] class LockableKey
 
 /** A hashMap that lazily computes values if needed during lookup.
@@ -9,7 +7,7 @@ Locking occurs on the key, so separate keys can be looked up
 simultaneously without a deadlock.
 */
 final private[cbt] class KeyLockedLazyCache[T <: AnyRef](
-  val hashMap: ConcurrentHashMap[AnyRef,AnyRef],
+  val hashMap: java.util.Map[AnyRef,AnyRef],
   logger: Option[Logger]
 ){
   def get( key: AnyRef, value: => T ): T = {