Garbage profiler #42658
Conversation
You have trailing whitespace on the following lines:

Some kind of memory profiling would be awesome. I assume you've seen this alternative, but if not: #33467. If we had that, would you still want this?

(I think this is still in Draft, so I've removed me and Jameson as reviewers. Let me know when you're ready for review Pete, and I'll add us back)

FYI, you will need to fix the trailing whitespace.

@timholy good question. This seemed like the fastest way to get a basic handle on our garbage problem, but it does immediately beg the question of where the allocations come from. So, I would still love to get a memory profiler with stacks!
Folded into #42768 |
## Overview

Record the type and stack of every allocation (or only at a given sample interval), and return as Julia objects.

Alternate approach to existing alloc profiler PR: #33467
Complementary to garbage profiler PR: #42658 (maybe there's some nice way to meld them)

This may be reinventing the wheel from #33467, but I'm not sure why that one needs stuff like LLVM passes. I mimicked some stuff from it, but this was my attempt to get something up and running. Could easily be missing stuff.

## Usage:

```julia
using Profile.Allocs

res = Allocs.@profile sample_rate=0.001 my_func()
prof = Allocs.fetch()
# do something with `prof`
```

See also: JuliaPerf/PProf.jl#46 for support for visualizing these.

Co-authored-by: Nathan Daly <nhdaly@gmail.com>
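For context on what "do something with `prof`" can look like: below is a minimal sketch that tallies sampled allocations by type, assuming the `Profile.Allocs` API as it later shipped in Julia 1.8 (each entry in `prof.allocs` has `type`, `size`, and `stacktrace` fields). `my_func` here is just a placeholder workload, not part of this PR.

```julia
using Profile: Allocs  # Profile.Allocs is available from Julia 1.8 onward

# Placeholder workload that allocates a lot of small objects.
my_func() = [string(i) for i in 1:100_000]

# Record every allocation (sample_rate=1); use a smaller rate for big workloads.
Allocs.@profile sample_rate=1 my_func()
prof = Allocs.fetch()

# Tally allocations by type -- roughly the per-type view that the
# garbage profiler's CSV gives per GC epoch, but at allocation time.
counts = Dict{Type,Int}()
for a in prof.allocs
    counts[a.type] = get(counts, a.type, 0) + 1
end
for (T, n) in sort!(collect(counts); by = last, rev = true)
    println(n, '\t', T)
end
```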
Ever seen a `@time` printout like this:

…and wondered what the types are of all those objects which are being allocated and collected? This PR is for you…
It generates a report of how many objects of each type are being freed during each garbage collection.
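For reference, a `@time` printout with heavy allocation and GC activity looks something like the following (illustrative numbers and a made-up `run_workload` function, not output from this PR):

```julia
julia> @time run_workload()
  4.521016 seconds (112.30 M allocations: 4.820 GiB, 31.27% gc time)
```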
Usage
Output
A CSV file with these columns:
- `gc_epoch`: an integer that starts at 0 and increases each time the garbage collector runs
- `type`: type of Julia object being freed
- `num_freed`: number of objects of that type freed during that GC epoch

e.g.:
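An illustrative example of the format (hypothetical values, not measurements from this PR):

```
gc_epoch,type,num_freed
0,"Array{UInt8, 1}",10432
0,String,8971
1,"Array{UInt8, 1}",9817
```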
Output is streamed to this file as the program runs.
To aggregate this over all GC epochs, I've been using `csvsql` to run a group by:
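A sketch of such a query using csvkit's `csvsql` (the file name `gc_profile.csv` is a placeholder; `csvsql --query` exposes the CSV as a table named after the file):

```sh
csvsql --query "
  SELECT type, SUM(num_freed) AS total_freed
  FROM gc_profile
  GROUP BY type
  ORDER BY total_freed DESC
" gc_profile.csv
```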
Notes

Issues / improvements
Future work
`Array{UInt8, 1}`)