Lavapipe memory leak reproduction #306
So it seems to be the device ...
Making this even more minimal. All I'm doing is requesting a device. Nothing else.

```python
import gc

import psutil
import wgpu.backends.rs  # noqa: F401  (selects the wgpu-native backend)
import wgpu

p = psutil.Process()

def print_mem_usage(i):
    megs = p.memory_info().rss / 1024**2
    print(f"memory usage (round: {i}): {megs:.3f} MB")

if __name__ == "__main__":
    print_mem_usage(0)
    for i in range(10):
        adapter = wgpu.request_adapter(canvas=None, power_preference="high-performance")
        device = adapter.request_device()
        gc.collect()
        print_mem_usage(i + 1)
```

Output:
If I only request an adapter, there is no leakage.
I think this is a symptom of various bugs in wgpu/wgpu-native, actually.
You can find a lot more memory leak bugs, which is ironic since it is all Rust. :) We may get a better experience once we upgrade to the latest version of wgpu-native.
I guess this counts as good news! 🍰
I'd be curious to see if we can also reproduce this with the same code in Rust, to see whether the problem is in our wrappers or not.
Renamed that to `delayed_dropper`, because it now also drops adapters (and maybe more at some point).
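The idea behind such a dropper can be sketched as follows. All names here are hypothetical; this is not wgpu-py's actual implementation, only the deferred-release pattern it alludes to: instead of releasing native handles immediately in `__del__`, queue them and release at a safe point.

```python
class DelayedDropper:
    """Hypothetical sketch: defer native-handle release to a safe point."""

    def __init__(self):
        self._queue = []

    def schedule(self, handle, release_fn):
        # Called from an object's __del__: don't release now, just queue.
        self._queue.append((handle, release_fn))

    def drop_all(self):
        # Called at a safe point (e.g. between frames) to actually release.
        while self._queue:
            handle, release_fn = self._queue.pop()
            release_fn(handle)

dropper = DelayedDropper()
released = []
dropper.schedule("adapter-1", released.append)
dropper.schedule("device-1", released.append)
dropper.drop_all()
print(released)  # → ['device-1', 'adapter-1'] (LIFO pop order)
```

The benefit is that destruction order and timing no longer depend on when the garbage collector happens to run `__del__`.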
Current output:
I'm abandoning this. Someone needs to try to reproduce this at the wgpu-native layer. I don't think our wrappers are the cause of this memory pattern.
Latest status:
Anything to add, @Korijn?
On my machine (Windows 11) this happens when I run 2000 iters:
That's not good ... I can indeed reproduce that. With both
By the way, crashing right before 64 seems like too much of a coincidence. |
Seeing the same, consistently. Should help trace the cause ... I made an issue to track mem leaks in general: #353
LOL |
This is a follow-up to pygfx/pygfx#362
I wanted to know if the lavapipe memory leak was hiding in pygfx's statefulness or not, and the quickest way to be sure of that was to try and reproduce the issue in this repo.
Round 0 means before doing anything.
Output on WSL2 using lavapipe:
Reusing the `canvas` object only:

Reusing the `device` object only (so `canvas` does not seem to be leaking):

Reusing both the `device` and `canvas` objects: