[mlir] Optimize ThreadLocalCache by removing atomic bottleneck (attempt #3) #93315
Conversation
@llvm/pr-subscribers-mlir @llvm/pr-subscribers-mlir-core

Author: Jeff Niu (Mogball)

Changes

The ThreadLocalCache implementation is used by the MLIRContext (among other things) to try to manage thread contention in the StorageUniquers. There is a bunch of fancy shared pointer/weak pointer setups that basically keeps everything alive across threads at the right time, but a huge bottleneck is the `weak_ptr::lock` call inside the `::get` method. This is because the `lock` method has to hit the atomic refcount several times, and this is bottlenecking performance across many threads. However, all this is doing is checking whether the storage is initialized. Importantly, when the `PerThreadInstance` goes out of scope, it does not remove all of its associated entries from the thread-local hash map (it contains dangling `PerThreadInstance *` keys). The `weak_ptr` also allows the thread local cache to synchronize with the `PerThreadInstance`'s destruction:

1. if `ThreadLocalCache` destructs, the `weak_ptr`s that reference its contained values are immediately invalidated
2. if `CacheType` destructs within a thread, any entries still live are removed from the owning `PerThreadInstance`, and it locks the `weak_ptr` first to ensure it's kept alive long enough for the removal.
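As an editorial illustration (not part of the PR text; `Value`, `lookupOld`, and `lookupNew` are simplified stand-ins rather than the actual MLIR types), the difference between the two fast paths described above looks roughly like this:

```cpp
#include <memory>

struct Value {};

// Old scheme: the per-thread entry holds a weak_ptr<Value>. Checking whether
// the cached value is still alive requires lock(), which performs atomic
// reference-count operations on every call.
Value *lookupOld(std::weak_ptr<Value> &entry) {
  if (std::shared_ptr<Value> value = entry.lock()) // atomic refcount traffic
    return value.get();
  return nullptr; // expired; the caller would create a new instance
}

// New scheme: the per-thread entry holds a shared_ptr<Value *> whose pointee
// is reset to null when the owner is destroyed. The hot path is a plain load
// and null check, with no atomic reference counting.
Value *lookupNew(const std::shared_ptr<Value *> &entry) {
  return *entry; // nullptr means "not initialized or owner already destroyed"
}
```

The atomic operations hidden inside `weak_ptr::lock` are what bottlenecked performance across many threads; the raw pointer load in the second version touches no shared reference count.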
This PR changes the TLC entries to contain a `shared_ptr<ValueT*>` and a `weak_ptr<PerInstanceState>`. It gives the `PerInstanceState` entries a `weak_ptr<ValueT*>` on top of the `unique_ptr<ValueT>`. This enables `ThreadLocalCache::get` to check if the value is initialized by dereferencing the `shared_ptr<ValueT*>` and checking if the contained pointer is null. When `PerInstanceState` destructs, the values inside the TLC are written to nullptr. The TLC uses the `weak_ptr<PerInstanceState>` to satisfy (2).

(1) is no longer the case. When `ThreadLocalCache` begins destruction, the `weak_ptr<PerInstanceState>` are invalidated, but not the `shared_ptr<ValueT*>`. This is OK: because the overall object is being destroyed, `::get` cannot get called, and because the `shared_ptr<PerInstanceState>` finishes destruction before freeing the pointer, it cannot get reallocated to another `ThreadLocalCache` during destruction. I.e. the values inside the TLC associated with a `PerInstanceState` cannot be read during destruction. The most important thing is to make sure destruction of the TLC doesn't race with the destructor of `PerInstanceState`. Because `PerInstanceState` carries `weak_ptr` references into the TLC, we guarantee to not have any use-after-frees.

Full diff: https://github.com/llvm/llvm-project/pull/93315.diff

1 Files Affected:
diff --git a/mlir/include/mlir/Support/ThreadLocalCache.h b/mlir/include/mlir/Support/ThreadLocalCache.h
index 1be94ca14bcfa..ca8e22eb6c808 100644
--- a/mlir/include/mlir/Support/ThreadLocalCache.h
+++ b/mlir/include/mlir/Support/ThreadLocalCache.h
@@ -25,28 +25,80 @@ namespace mlir {
/// cache has very large lock contention.
template <typename ValueT>
class ThreadLocalCache {
+ struct PerInstanceState;
+
+ /// The "observer" is owned by a thread-local cache instance. It is
+ /// constructed the first time a `ThreadLocalCache` instance is accessed by a
+ /// thread, unless `perInstanceState` happens to get re-allocated to the same
+ /// address as a previous one. This class is destructed when the thread in which
+ /// the `thread_local` cache lives is destroyed.
+ ///
+ /// This class is called the "observer" because while values cached in
+ /// thread-local caches are owned by `PerInstanceState`, a reference is stored
+ /// via this class in the TLC. With a double pointer, it knows when the
+ /// referenced value has been destroyed.
+ struct Observer {
+ /// This is the double pointer, explicitly allocated because we need to keep
+ /// the address stable if the TLC map re-allocates. It is owned by the
+ /// observer and shared with the value owner.
+ std::shared_ptr<ValueT *> ptr = std::make_shared<ValueT *>(nullptr);
+ /// Because `Owner` living inside `PerInstanceState` contains a reference to
+ /// the double pointer, and likewise this class contains a reference to the
+ /// value, we need to synchronize destruction of the TLC and the
+ /// `PerInstanceState` to avoid racing. This weak pointer is acquired during
+ /// TLC destruction if the `PerInstanceState` hasn't entered its destructor
+ /// yet, and prevents it from happening.
+ std::weak_ptr<PerInstanceState> keepalive;
+ };
+
+ /// This struct owns the cache entries. It contains a reference back to the
+ /// reference inside the cache so that it can be written to null to indicate
+ /// that the cache entry is invalidated. It needs to do this because
+ /// `perInstanceState` could get re-allocated to the same pointer and we don't
+ /// remove entries from the TLC when it is deallocated. Thus, we have to reset
+ /// the TLC entries to a starting state in case the `ThreadLocalCache` lives
+ /// shorter than the threads.
+ struct Owner {
+ /// Save a pointer to the reference and write it to the newly created entry.
+ Owner(Observer &observer)
+ : value(std::make_unique<ValueT>()), ptrRef(observer.ptr) {
+ *observer.ptr = value.get();
+ }
+ ~Owner() {
+ if (std::shared_ptr<ValueT *> ptr = ptrRef.lock())
+ *ptr = nullptr;
+ }
+
+ Owner(Owner &&) = default;
+ Owner &operator=(Owner &&) = default;
+
+ std::unique_ptr<ValueT> value;
+ std::weak_ptr<ValueT *> ptrRef;
+ };
+
// Keep a separate shared_ptr protected state that can be acquired atomically
// instead of using shared_ptr's for each value. This avoids a problem
// where the instance shared_ptr is locked() successfully, and then the
// ThreadLocalCache gets destroyed before remove() can be called successfully.
struct PerInstanceState {
- /// Remove the given value entry. This is generally called when a thread
- /// local cache is destructing.
+ /// Remove the given value entry. This is called when a thread local cache
+ /// is destructing but still contains references to values owned by the
+ /// `PerInstanceState`. Removal is required because it prevents writeback to
+ /// a pointer that was deallocated.
void remove(ValueT *value) {
// Erase the found value directly, because it is guaranteed to be in the
// list.
llvm::sys::SmartScopedLock<true> threadInstanceLock(instanceMutex);
- auto it =
- llvm::find_if(instances, [&](std::unique_ptr<ValueT> &instance) {
- return instance.get() == value;
- });
+ auto it = llvm::find_if(instances, [&](Owner &instance) {
+ return instance.value.get() == value;
+ });
assert(it != instances.end() && "expected value to exist in cache");
instances.erase(it);
}
/// Owning pointers to all of the values that have been constructed for this
/// object in the static cache.
- SmallVector<std::unique_ptr<ValueT>, 1> instances;
+ SmallVector<Owner, 1> instances;
/// A mutex used when a new thread instance has been added to the cache for
/// this object.
@@ -57,13 +109,14 @@ class ThreadLocalCache {
/// instance of the non-static cache and a weak reference to an instance of
/// ValueT. We use a weak reference here so that the object can be destroyed
/// without needing to lock access to the cache itself.
- struct CacheType
- : public llvm::SmallDenseMap<PerInstanceState *, std::weak_ptr<ValueT>> {
+ struct CacheType : public llvm::SmallDenseMap<PerInstanceState *, Observer> {
~CacheType() {
- // Remove the values of this cache that haven't already expired.
- for (auto &it : *this)
- if (std::shared_ptr<ValueT> value = it.second.lock())
- it.first->remove(value.get());
+ // Remove the values of this cache that haven't already expired. This is
+ // required because if we don't remove them, they will contain a reference
+ // back to the data here that is being destroyed.
+ for (auto &[instance, observer] : *this)
+ if (std::shared_ptr<PerInstanceState> state = observer.keepalive.lock())
+ state->remove(*observer.ptr);
}
/// Clear out any unused entries within the map. This method is not
@@ -71,7 +124,7 @@ class ThreadLocalCache {
void clearExpiredEntries() {
for (auto it = this->begin(), e = this->end(); it != e;) {
auto curIt = it++;
- if (curIt->second.expired())
+ if (!*curIt->second.ptr)
this->erase(curIt);
}
}
@@ -88,22 +141,23 @@ class ThreadLocalCache {
ValueT &get() {
// Check for an already existing instance for this thread.
CacheType &staticCache = getStaticCache();
- std::weak_ptr<ValueT> &threadInstance = staticCache[perInstanceState.get()];
- if (std::shared_ptr<ValueT> value = threadInstance.lock())
+ Observer &threadInstance = staticCache[perInstanceState.get()];
+ if (ValueT *value = *threadInstance.ptr)
return *value;
// Otherwise, create a new instance for this thread.
- llvm::sys::SmartScopedLock<true> threadInstanceLock(
- perInstanceState->instanceMutex);
- perInstanceState->instances.push_back(std::make_unique<ValueT>());
- ValueT *instance = perInstanceState->instances.back().get();
- threadInstance = std::shared_ptr<ValueT>(perInstanceState, instance);
+ {
+ llvm::sys::SmartScopedLock<true> threadInstanceLock(
+ perInstanceState->instanceMutex);
+ perInstanceState->instances.emplace_back(threadInstance);
+ }
+ threadInstance.keepalive = perInstanceState;
// Before returning the new instance, take the chance to clear out any used
// entries in the static map. The cache is only cleared within the same
// thread to remove the need to lock the cache itself.
staticCache.clearExpiredEntries();
- return *instance;
+ return **threadInstance.ptr;
}
ValueT &operator*() { return get(); }
ValueT *operator->() { return &get(); }
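For readers unfamiliar with the class, here is a minimal usage sketch (editorial; the `Uniquer` and `Allocator` names are hypothetical stand-ins, not the actual StorageUniquer code). Each thread that touches the cache gets its own lazily constructed value, and after the first access the fast path in `get()` is the null check on the double pointer shown in the diff above:

```cpp
#include "mlir/Support/ThreadLocalCache.h"

#include <string>
#include <vector>

namespace {
// Stand-in for whatever per-thread scratch state a user of the cache keeps.
struct Allocator {
  std::vector<std::string> storage;
};

class Uniquer {
public:
  // Returns this thread's Allocator, constructing it on first use. Subsequent
  // calls on the same thread avoid locking and atomic reference counting.
  Allocator &getThreadAllocator() { return *allocatorCache; }

private:
  mlir::ThreadLocalCache<Allocator> allocatorCache;
};
} // namespace
```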
This is a resubmit of the second version, which I hope fixes the underlying race issue. Someone please carefully read this code and make sure I'm right :( Also restores the removed header to ensure the build doesn't break.
I'd like to review this carefully, so please wait at least until next week before merging (I have plans for the long weekend, not sure if I'll get to it).
Enjoy your long weekend! ;)
Were you able to repro the failures from previous attempts?
I was able to repro them and I confirmed this iteration fixes them. I also checked with TSAN.
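(A rough sketch of the kind of stress test that could be run under TSAN, e.g. a build with `-fsanitize=thread`, to exercise the lifetime ordering discussed in this PR; this is an editorial illustration, not the actual reproducer.)

```cpp
#include "mlir/Support/ThreadLocalCache.h"

#include <memory>
#include <thread>
#include <vector>

int main() {
  for (int iter = 0; iter < 1000; ++iter) {
    // Short-lived cache owner: its PerInstanceState must not race with the
    // destruction of the thread_local maps populated by the worker threads.
    auto cache = std::make_unique<mlir::ThreadLocalCache<int>>();

    std::vector<std::thread> workers;
    for (int t = 0; t < 8; ++t)
      workers.emplace_back([&] { cache->get() += 1; });
    for (std::thread &worker : workers)
      worker.join(); // worker threads' thread_local destructors run here

    cache.reset(); // then the owner is destroyed
  }
}
```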