Add mutex when adding geometry instances to the dirty list in the Forward Clustered renderer #71705
Conversation
If I understand the design correctly, `RenderForwardClustered` and associates, such as `GeometryInstanceForwardClustered`, are not meant to be thread-safe. It's the caller's responsibility to deal with them in a safe manner.

Therefore, in this case it's `RendererSceneCull::_scene_cull()`'s responsibility to ensure thread safety. Other calls performed by that function already use a spin lock where it's necessary to ensure synchronization (`cull_data.cull->lock.lock();`).

So, my suggestion is to apply the same locking idiom in `RendererSceneCull::_scene_cull()` to the various calls to pairing functions, since they deal with shared data (and to any other call that can possibly incur the same problem). That's precisely the case with `pair_light_instances()`; its call tree involves `_mark_dirty()`, where the race condition happens. Since it's up to each specific implementation how data is used in those functions, the idea is to be conservative and lock in case of doubt.
Furthermore, there are some `push_back()` calls to various `PagedArray`s in `InstanceCullResult` that may also need locking because, as far as I can tell, `PagedArray` is not thread-safe. It'd be nice to check with @reduz.
I'd like to make it clear that the kind of thread safety needed here is only between threads in the pool running the same function concurrently. After all of them are done, the data is guaranteed to be in sync (as it is just before the worker threads start), so that's another reason why the involved functions (like `_mark_dirty()`) don't need general thread-safety protection.
Finally (and sorry for so much text), it's a bit sad to have to pay the cost of the spin locks when multiple threads are not being used. It would be good if the spin lock implementation were optimistic (I have to check), or we could just add a template argument to get two compile-time versions of these functions, with and without locking. But that's unrelated to this PR; just an idea for you in case you want to explore it.
Force-pushed from efd988d to a804556.
@RandomShaper Thank you very much for taking a look. I agree that the Mutex lock is better moved to
Each thread has its own
Oh, I overlooked that. I can sleep again. 😃
Thanks!
See #78016, which makes the spinlocks no longer necessary. |
Fixes: #68274

`_mark_dirty()` can be called from multiple threads simultaneously when we are using multithreaded culling and there is a light projector in the scene. We need a Mutex to protect both `geometry_instance_surface_alloc` and `geometry_instance_dirty_list`.
I changed `geometry_instance_lightmap_sh` to be thread-safe, as there is a situation where it may be called to allocate memory from multiple threads. This PR also changes a few related project settings to `GLOBAL_DEF_RST`.
@RandomShaper Is the `MutexLock lock()` syntax better than what I have here?