sub-interpreters: significant memory leak on shutdown #110411
Comments
I can reproduce this from C as well, without using _xxsubinterpreters.
I can reproduce as well. I'm using Go with CGO, inside a Gin POST route handler. Under a stress tester, memory grows into the gigabytes within a few seconds, keeps growing steadily, and never recovers.
On Windows 11 too...
I am now using 3.12.1, and it is still leaking. I am on macOS 14.2 (M2) and Linux amd64. I have studied the documentation on Py_NewInterpreterFromConfig, and I doubt that I am using it correctly. If someone could post a C example of using Py_NewInterpreterFromConfig here, that would help; the Python script I run is a simple one.
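For reference, a minimal sketch of how Py_NewInterpreterFromConfig can be driven from C, loosely following the per-interpreter GIL example in the C API docs; the iteration count and the one-line script are placeholders, not anything taken from this issue.

```c
#include <Python.h>

int main(void)
{
    Py_Initialize();

    /* Remember the main interpreter's thread state so we can switch
       back to it after each subinterpreter is torn down. */
    PyThreadState *main_tstate = PyThreadState_Get();

    for (int i = 0; i < 100; i++) {
        /* Isolated subinterpreter: its own GIL and (because
           use_main_obmalloc defaults to 0 here) its own obmalloc state. */
        PyInterpreterConfig config = {
            .check_multi_interp_extensions = 1,
            .gil = PyInterpreterConfig_OWN_GIL,
        };

        PyThreadState *tstate = NULL;
        PyStatus status = Py_NewInterpreterFromConfig(&tstate, &config);
        if (PyStatus_Exception(status)) {
            Py_ExitStatusException(status);
        }

        /* The new interpreter's thread state is now the current one. */
        PyRun_SimpleString("s = 'hello' * 100\n");

        /* Tear the subinterpreter down; this leaves the current thread
           state NULL, so restore the main one before the next iteration. */
        Py_EndInterpreter(tstate);
        PyThreadState_Swap(main_tstate);
    }

    Py_Finalize();
    return 0;
}
```

Note that PyInterpreterConfig.use_main_obmalloc is the field that controls whether the subinterpreter shares the main interpreter's obmalloc state; with the isolated settings above, each subinterpreter gets its own allocator state, which is the per-interpreter obmalloc state discussed in the next comment.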
Sorry, this slipped through the cracks for me. 😞

I expect the underlying cause is the same as for gh-113055, and it has more or less the same fix (gh-113412). Basically, the obmalloc implementation has never freed the blocks we get from the system allocator. There's always been a leak! It was just much harder to notice before.

The difference between 3.11 and 3.12+ is that each interpreter has its own obmalloc state rather than sharing a process-global one like we used to. Consequently, all the memory each (isolated) subinterpreter uses leaks now.

Let's circle back on this once gh-113412 is merged, and maybe also gh-113601.
Bug report
Bug description:
Hi all, congrats on the new Python release; I'm very keen on the direction that sub-interpreters are taking!
I noticed that creating and destroying sub-interpreters now leaks memory. This was not a problem in 3.11, but it is as of 3.12 and it also occurs on the latest main branch (as of 8c07137). I figure it must relate to the changes made to split up the interpreter states.
I can consistently replicate the issue using simply:
You'll notice the memory usage climb quite quickly if you look at top/htop.
valgrind/massif point to interned strings as the culprit, specifically here:
I had a look around that code but I can't spot any obvious bug. I did verify that the interned strings dict created and destroyed in clear_interned_dict() is the same for each sub-interpreter, just in case.
I think every interned string is now also automatically immortal. Does that mean it wouldn't be cleaned up even if its refcount drops to zero?
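A rough sketch of one way to observe that from C, assuming interning does immortalize on 3.12 as suggested above: an immortal string's refcount sits at a large sentinel value and Py_INCREF/Py_DECREF leave it unchanged, so it can never drop to zero in the first place.

```c
#include <Python.h>
#include <stdio.h>

int main(void)
{
    Py_Initialize();

    /* Intern a string; on 3.12 this is expected to immortalize it. */
    PyObject *s = PyUnicode_InternFromString("some_identifier");

    Py_ssize_t before = Py_REFCNT(s);
    Py_INCREF(s);                    /* a no-op for an immortal object */
    Py_ssize_t after = Py_REFCNT(s);

    /* For an immortal object, both values are the same huge sentinel. */
    printf("refcount before/after INCREF: %zd / %zd\n", before, after);

    Py_DECREF(s);                    /* balance the INCREF */
    Py_DECREF(s);                    /* drop the reference from InternFromString */
    Py_Finalize();
    return 0;
}
```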
CPython versions tested on:
3.11, 3.12, CPython main branch
Operating systems tested on:
Linux