Fix errors when freeing GPUParticles #82431
Merged
The fundamental reason for the first linked bug seems to be that when a GPUParticles2D or GPUParticles3D node is freed and destroyed, it eventually calls `ParticlesStorage::particles_free()`. This in turn makes a somewhat unusual call to `ParticlesStorage::update_particles()`, so deleting a single particle system can force-update all other currently processing particle systems. I looked around, and I think the main reason for this call is its side effect of removing the deleted particle system from the `particle_update_list`, so that the list doesn't point to invalid memory after the particle system is freed shortly afterwards. There might be a subtler reason too, but I didn't notice anything suspicious. Indirectly sending `Dependency::DEPENDENCY_CHANGED_AABB` is in theory something that could matter, but I don't think it is needed, as the particle system is being deleted anyway. I replaced the simple update list management with a `SelfList`, so there is now a more standard mechanism for tracking updates and we avoid the wasteful updates.

Now, this update call can also invalidate some particle system material uniform sets. Normally this doesn't seem to be a problem, because they will be re-created when needed, but on some occasions it can happen during a particle system destructor. Whether this matters depends on many variables, including the active particle system count, the material's "Local to Scene" flag, and even where and when `queue_free()` is called, since the event flushing and the destructor call can happen in several places during the main loop iteration. A different particle system might be updated immediately, but if its material uniform set was invalidated (this is where the "Local to Scene" flag seems to matter) and has not been re-created yet, the update causes the bug and the error message, with potential for visual glitches. Every time you see that error message in the MRP, the debugger stack trace will point to a queued particle system destructor being run and calling `update_particles()`.
Another hint that calling `update_particles()` at random times is not a good idea is the comment in `RenderingServerDefault::draw()`, suggesting that the update order is important:

godot/servers/rendering/rendering_server_default.cpp
Line 87 in fba341c
The exact circumstances where this bug can happen are pretty complex, and I don't claim I understand them all, but simply avoiding these potentially troublesome updates in `ParticlesStorage::particles_free()` seems to fix the fundamental problem. An alternative one-liner fix is to simply change the branch condition in `ParticlesStorage::_particles_process()` to `if (!m || m->uniform_set.is_null()) {`. However, I think that change might just hide the issue and could still cause some visual glitches. Still, in case more bug reports like these appear in the future, adding that extra check might be useful.
Again, this is a pretty tricky issue, so getting another opinion on this would be useful.