JIT actual memory usage is higher than reported #52710
Comments
I don't think any of the memory of …
Do we know why the memory used by LLVM grows monotonically, and where it is going? In some workloads it reaches 1.6 GiB while …
One chunk is …
DebugObjects are now accounted for; they are usually roughly 10x the size of what `Base.jit_total_bytes` previously accounted for (though they are also now compressed, so usually only about 3x now).
@vtjnash: Do you know which commit(s) added that accounting + compression? Is it something we could easily backport to 1.10? (Or, if not, maybe it's something RAI could backport to our fork of 1.10.)
#55180. I assume it shouldn't be too complicated to backport.
Cool, thank you! I added the backport labels to that ticket, then. Also, if we do end up decompressing the debug info to read it, will that decompressed data be attributed to …
Yes, it will maintain an updated count.
@vtjnash When do we decompress the debug info?
It is decompressed when you access it (e.g. when printing an error that occurred in that function).
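As a rough illustration (not code from this thread), here is a sketch of the kind of access that would force the stored debug info to be read, assuming stack-trace printing consults the per-function debug objects as described above; the function name `g` is just a placeholder:

```julia
# Illustrative only: printing a backtrace through a JIT-compiled function
# requires file/line information for that function, which means reading
# (and decompressing) the debug object stored for its compiled code.
g(x) = sqrt(x - 1)    # compiled on first call
try
    g(-1.0)           # sqrt of a negative number throws a DomainError
catch err
    showerror(stderr, err, catch_backtrace())
end
```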
Using the jemalloc profiler, we found that LLVM's in-use memory after JITing lots of code is an order of magnitude higher than what `Base.jit_total_bytes()` reports. Here is a simple example:
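(A minimal sketch of the kind of example being described; the workload, loop size, and printed units are assumptions, not the reporter's original code, and the 357.2 MiB figure quoted below is from the reporter's own run, not from this sketch.)

```julia
# Hypothetical workload: force the JIT to compile many distinct methods,
# then look at Julia's own accounting of JIT memory.
before = Base.jit_total_bytes()

for i in 1:1_000
    # Define and compile a fresh method each iteration.
    f = @eval $(Symbol(:f_, i))(x) = x + $i
    Base.invokelatest(f, 1)   # invokelatest avoids world-age errors inside the loop
end

after = Base.jit_total_bytes()
println("jit_total_bytes delta: ", round((after - before) / 2^20; digits = 2), " MiB")

# jemalloc's "live heap" number is collected outside Julia (e.g. with jeprof
# against a build linked to jemalloc) and compared against the delta above.
```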
But jemalloc reports 357.2 MiB of live heap (still allocated).
Here is the jemalloc profile output in `--text` format. Attached you can find a clearer SVG graph. Both are generated using `jeprof`.