some profiles fail scraping #46
I assume subsequent scrapes are working? |
Nope, all of them fail. I'm currently just collecting pprofs from conprof. |
I have also just built the project from master and it is not able to scrape itself, nor any of my projects built with Go 1.14.2. Is there currently an issue/fix for this? |
So basically there is no support for traces in google/pprof#529 and there won't be, but there is a way to convert traces to pprof. One option is to convert the trace to pprof for all 4 types, which would mean roughly copy-pasting and adapting the code from https://golang.org/src/cmd/trace/pprof.go here (see the sketch below). Another option is to actually support Go traces and build a UI on top of that (the way the go tool does it now). There is also the question of what we can do with cmdline. |
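For context, a rough sketch (not conprof's actual code) of what such a copied-and-adapted converter could produce, assuming the execution trace has already been aggregated into per-stack blocking records; the stackRecord type and the aggregation step are hypothetical, only the github.com/google/pprof/profile usage mirrors the contentions/delay profiles cmd/trace/pprof.go emits:

```go
package traceconv

import "github.com/google/pprof/profile"

// stackRecord is a hypothetical, already-aggregated view of a Go execution
// trace: one entry per unique stack, with how many times it blocked and for
// how long. Producing these records is the part that would be adapted from
// cmd/trace.
type stackRecord struct {
	Frames []string // function names, leaf first
	Count  int64    // number of blocking events
	Nanos  int64    // total blocked time in nanoseconds
}

// toProfile turns aggregated trace records into a pprof profile with the
// contentions/delay sample types used for the net/sync/syscall/sched
// blocking profiles.
func toProfile(records []stackRecord) *profile.Profile {
	p := &profile.Profile{
		PeriodType: &profile.ValueType{Type: "trace", Unit: "count"},
		Period:     1,
		SampleType: []*profile.ValueType{
			{Type: "contentions", Unit: "count"},
			{Type: "delay", Unit: "nanoseconds"},
		},
	}
	funcs := map[string]*profile.Function{}
	for _, r := range records {
		locs := make([]*profile.Location, 0, len(r.Frames))
		for _, name := range r.Frames {
			fn, ok := funcs[name]
			if !ok {
				fn = &profile.Function{ID: uint64(len(funcs) + 1), Name: name}
				funcs[name] = fn
				p.Function = append(p.Function, fn)
			}
			loc := &profile.Location{
				ID:   uint64(len(p.Location) + 1),
				Line: []profile.Line{{Function: fn}},
			}
			p.Location = append(p.Location, loc)
			locs = append(locs, loc)
		}
		p.Sample = append(p.Sample, &profile.Sample{
			Location: locs,
			Value:    []int64{r.Count, r.Nanos},
		})
	}
	return p
}
```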
IMO supporting go tool trace is a better option, as the go tool trace UI is quite useful. As for cmdline, it isn't really a profile, it's just the command-line arguments, so maybe we just get rid of it? |
@povilasv |
you need to add a query expression, like |
@povilasv both of those suggestions sound good to me (remove cmdline, embed trace UI). |
@povilasv Thanks for the tip! it's now working :-) |
My proposal to move this forward is:
Still needs some work:
As internally the go tool just converts them to pprof and outputs an SVG, maybe I can just use the pprof UI instead. My approach would be: we add those internal packages to the repo incrementally and expose the UIs to users little by little. Thoughts? |
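As a small illustration of the protobuf data model that the pprof UI renders, the public github.com/google/pprof/profile package can parse and inspect a scraped profile (the "heap.pb.gz" filename is just an example):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/google/pprof/profile"
)

func main() {
	// Open a profile previously scraped from a /debug/pprof endpoint.
	f, err := os.Open("heap.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Parse handles both gzipped and uncompressed pprof data.
	p, err := profile.Parse(f)
	if err != nil {
		log.Fatal(err)
	}

	// A single pprof file can carry several sample types; a UI lets the
	// user pick which one to visualize.
	for _, st := range p.SampleType {
		fmt.Printf("%s (%s)\n", st.Type, st.Unit)
	}
	fmt.Println("samples:", len(p.Sample))
}
```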
I don't mind copying the internal code; I've started doing this on a branch as well, which converts profile nodes into Prometheus remote read.
I would say that even if we copy the internal package, let's try contributing this upstream, as we're going to want to stay as close to upstream as possible, which makes the copy of the internal package essentially vendoring. |
Agree. I've created an issue, golang/go#38971; if the Go maintainers agree, I would try to help them push this forward. |
Nice! I don't think it even needs to be a non-internal package, as long as we don't modify the "manually vendored" one. That said, if it were public, even better! (But I would understand that it's quite a large API to give any sort of guarantees on.) |
In an attempt to not duplicate work, I opened a WIP for the remote read work I had laying around as that also did the internal manual vendoring: #54 |
Cycling back to the profile type proposal: I think something like type metadata in Prometheus would be more appropriate here. I think I'm OK with a stopgap (maybe analysing the profile scrape path) to heuristically decide what profile it probably is (kind of like Prometheus interpreting |
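A minimal sketch of such a stopgap heuristic, assuming the profile type is guessed purely from the scrape path (the function name and "unknown" fallback are made up; the endpoint names are the standard net/http/pprof ones):

```go
package scrape

import "path"

// profileTypeFromPath guesses a profile type from the scrape path, e.g.
// "/debug/pprof/heap" -> "heap". This only works for the standard
// net/http/pprof endpoints and falls back to "unknown" for custom paths,
// which is why it is a stopgap rather than a real solution.
func profileTypeFromPath(scrapePath string) string {
	switch name := path.Base(scrapePath); name {
	case "heap", "allocs", "goroutine", "block", "mutex",
		"threadcreate", "trace", "cmdline":
		return name
	case "profile": // net/http/pprof serves the CPU profile at /debug/pprof/profile
		return "cpu"
	default:
		return "unknown"
	}
}
```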
What about users wanting to query for specific profiles? I can definitely see myself asking conprof "show me memory profiles for jobs conprof, traefik". Right now we could only do that via paths, but what if my path is different for different types of apps? For example, I can easily configure a HeapConfig job to scrape
We already have a specific type of config for each different profile, so it makes sense to me to have a type as well. For Prometheus it's a bit different, as you have a metric name and you can query just that metric. Maybe it could even be a profile name?
vs
Thoughts? |
IMHO, I kind of like the profile name / type solution as the queries won't change from app to app. |
I think I like the ergonomics of the profile type/name being the entry point for querying. I think we're going to have to adjust the query language/API slightly for this. |
As Add Profile Type got merged, we can now scrape traces, so I'm closing this. I've opened a separate issue for the UI: #61 |
*: Split pprof into individual profiles by sample index
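For reference, a minimal sketch of what splitting a pprof profile into individual profiles by sample index could look like with the public github.com/google/pprof/profile package (an illustration of the idea, not the code from that change):

```go
package split

import "github.com/google/pprof/profile"

// splitBySampleIndex produces one single-valued profile per sample type.
// A heap profile, for example, carries alloc_objects, alloc_space,
// inuse_objects and inuse_space in one file; after splitting, each result
// keeps the full stack information but only one value per sample.
func splitBySampleIndex(p *profile.Profile) []*profile.Profile {
	out := make([]*profile.Profile, 0, len(p.SampleType))
	for i, st := range p.SampleType {
		q := p.Copy()
		q.SampleType = []*profile.ValueType{{Type: st.Type, Unit: st.Unit}}
		q.DefaultSampleType = st.Type
		for j, s := range p.Sample {
			q.Sample[j].Value = []int64{s.Value[i]}
		}
		out = append(out, q)
	}
	return out
}
```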
Same for the cmdline profile:
Might be a consequence of running on ARM.