Negative/large values in topdown perf counters #207
Upon delving further into the issue, it seems there may be an underlying problem when copying the values of some perf structures. In
Setting a watchpoint before and after line 306, where the value of
It seems as if
Thanks for reporting and digging into this further. I'm adding Vince Weaver (@deater) to this thread, as he is most familiar with the perf_event core and uncore implementations in PAPI.
As an alternative, at least as a temporary workaround, you might be able to make some progress using the other topdown events provided through PAPI:

```
$> ./papi_command_line -x TOPDOWN:SLOTS TOPDOWN:SLOTS_P TOPDOWN:MEMORY_BOUND_SLOTS TOPDOWN:BR_MISPREDICT_SLOTS TOPDOWN:BACKEND_BOUND_SLOTS TOPDOWN:BAD_SPEC_SLOTS
```

This utility lets you add events from the command line interface to see if they work:

```
Successfully added: TOPDOWN:SLOTS
TOPDOWN:SLOTS : 0X395C8DD0
```
So, do you get the results you expect when using the Linux "perf" tool? It sounds like the top-down events are completely unlike any other events. From what I gather, what you are reading isn't a count, but rather four or eight 8-bit values shoved into a counter read? PAPI probably won't handle this well, as we assume the value being read is a single incrementing integer. Maybe the results you are getting are OK, if you split them up into the component bytes yourself?

It might be the rdpmc code interacting poorly with topdown as well. You can try rebuilding PAPI with --disable-perfevent-rdpmc to see if that helps at all.
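For what it's worth, here is a rough sketch of what splitting such a packed reading into its component bytes could look like, following the layout described in the kernel's tools/perf/Documentation/topdown.txt. The field order and 0xff scaling below are my reading of that document (verify for your CPU), and the input values are made up:

```c
#include <stdint.h>
#include <stdio.h>

/* Each level-1 topdown metric is an 8-bit value scaled to 0xff, packed
 * into one 64-bit "metrics" reading (field order per the kernel's
 * topdown documentation; double-check it for your CPU). */
#define GET_METRIC(m, i)      (((m) >> ((i) * 8)) & 0xff)
#define METRIC_FRACTION(m, i) ((double)GET_METRIC(m, i) / 0xff)

int main(void)
{
    /* Made-up example values, NOT real counter readings. */
    uint64_t metrics = 0x2211AA33ULL;
    uint64_t slots   = 1000000ULL;

    const char *name[4] = { "retiring", "bad spec", "fe bound", "be bound" };
    for (int i = 0; i < 4; i++) {
        double frac = METRIC_FRACTION(metrics, i);
        printf("%-8s: %5.1f%%  (~%.0f slots)\n",
               name[i], 100.0 * frac, frac * (double)slots);
    }
    return 0;
}
```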
Thanks for the quick responses!
I had already integrated counters into our library in order to build a TopDown profile from lower-level metrics. I am mainly trying to use these newer counters in the architecture to test how accurate our constructed profiles are in comparison. With them, we would be able to discard all the computations on our side and simply read these events as-is.
Through perf I do indeed see values that make sense, both the raw values I posted above and their percentages (obtained by correlating the raw value of each category with the "slots" counter).
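(For concreteness, that correlation is just the ratio of each category's slot count to the total slots; a trivial sketch with made-up numbers in place of real readings:)

```c
#include <stdio.h>

int main(void)
{
    /* Made-up counts standing in for perf readings. */
    double slots          = 4.0e9; /* perf::slots            */
    double retiring_slots = 1.2e9; /* perf::topdown-retiring */

    printf("retiring: %.1f%%\n", 100.0 * retiring_slots / slots);
    return 0;
}
```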
As far as I understand, perf allows polling both the "slots" and "metrics" (from the documentation you posted). As you mentioned, I also understand that the metrics are four 8-bit values packed into a single read, and that each category can then be obtained through shifts.
I have attempted the following:
I am not fully aware of exactly which values PAPI reads here. If PAPI is reading the registers themselves rather than the raw values that perf reports, I assume the read values will need bit-handling to turn them into valid info. I will attempt rebuilding PAPI with --disable-perfevent-rdpmc.
Indeed, @deater, disabling rdpmc seems to take care of the issue.
(Edit: after having read the mentioned paper further.) If it has been identified before, could you give some insight into what the underlying issue is? I don't yet understand why this code fails to obtain correct values.
Yes, with that disabled there will be higher overhead when doing measurements. How much that affects your analysis is going to vary with your workload.

I haven't had time to fully analyze what's going on here. Using rdpmc is always a bit fragile, because PAPI is one of the few tools that uses the interface and so it is not maintained well by the Linux kernel developers. I'm guessing the problem here might be the sign-extension that is done in the rdpmc code. In theory the Linux kernel provides a "libperf" that is the official code for doing rdpmc reads, but we possibly can't include it in PAPI because it's GPL licensed. You can see if you can find that code and check whether it is doing things differently.

This top-down stuff is complicated, including the fact that you're supposed to do non-rdpmc reads about once a second, otherwise the results will start to drift. PAPI doesn't really have a mechanism for doing that kind of thing. For now you might be better off just disabling rdpmc until we have time to figure out exactly what is going on. Ideally we'd also modify PAPI to automatically disable things if top-down is being used, so that other people don't run into the issues you have, but that's tricky and I can't think of a clean way to do it without hardcoding a lot of special cases in the PAPI code.
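To make the sign-extension guess concrete: rdpmc-style reads are often widened from the hardware counter width (commonly 48 bits) to 64 bits by sign extension, which is harmless for a plain up-counter but scrambles a value whose upper bytes carry independent packed fields. A purely hypothetical illustration (this is not the PAPI code, just a demonstration with made-up numbers):

```c
#include <stdint.h>
#include <stdio.h>

/* Widen an N-bit counter reading to 64 bits by sign extension,
 * the way a plain up-counter would normally be handled. */
static int64_t sign_extend(uint64_t raw, unsigned width)
{
    unsigned shift = 64 - width;
    return (int64_t)(raw << shift) >> shift;
}

int main(void)
{
    /* A made-up value with packed 8-bit fields in the upper bytes,
     * NOT a real PERF_METRICS reading. */
    uint64_t packed = 0xCC30A010ULL << 32;

    printf("raw           = %#018llx\n", (unsigned long long)packed);
    printf("sign-extended = %lld\n", (long long)sign_extend(packed, 48));
    /* The fields above bit 47 are thrown away and bit 47 is treated as a
     * sign bit, so the "count" comes out huge or negative, much like the
     * values reported in this issue. */
    return 0;
}
```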
Fair enough, thank you both for the help and detailed answers!
Regarding this, should you delve into it any further and want me to test anything else on our system, let me know. I will assess the overhead by comparing both modes and decide whether it is worth using, or revert to using lower-level events to build the categories manually.
So I've been taking a look at the level one topdown events on a raptorlake machine (TOPDOWN:SLOTS, TOPDOWN:RETIRING_SLOTS, TOPDOWN:BACKEND_BOUND_SLOTS, and TOPDOWN:BAD_SPEC_SLOTS). The raptorlake architecture appears to only support topdown events on the performance cores, so I have been running all of my tests pinned to a performance core.

When rdpmc is enabled, I am able to extract sane values using the methods described in perf's topdown documentation. Be aware that they forget to convert their values into percentages in the pseudocode. On the other hand, when PAPI is configured with rdpmc disabled, the values have to be interpreted differently. Here's a summary of the code I wrote to figure this out:
With rdpmc enabled I (usually) get sane results with the perf documentation's method - more on the 'usually' later:
And with rdpmc disabled I get sane results with the other method, although they are notably swapped around a bit:
I haven't yet looked deep enough into the PAPI source to understand why it is handled so differently depending on whether rdpmc is enabled or not, but in summary, at least on my machine, I can get good results with rdpmc both enabled and disabled; you just need to apply different methods. Which method is preferred? The way the events are handled with rdpmc disabled seems more intuitive. I plan on looking into this further and maybe trying to find a fix.

On the rdpmc method only usually working: in 8 out of 100 runs of the script with rdpmc enabled, the results were nonsensical anyway. There is clearly something off here.
Alright, I've done more digging and have identified some problems, and some answers/potential solutions.

Problems:
Potential solution for problem 2: After a lot of debugging, I tracked down the reason why PAPI's values disagree with

When

However, right after that in

This means that later in

I guess for small periods of time between

I still need to test if removing this line breaks anything, and that may take a while to verify. I also still need to look into why the counter values always disagree with

Edit: Good news! The disagreement between
I am trying to read topdown counters that are reported to be available in the system above:
Going by the description of these metrics ("must be used in a group with the other topdown- events with slots as leader"), I created an EventSet with perf::slots as the first event in the set. However, the values I obtain through PAPI_read are negative. I assumed this could be due to an overflow, given the granularity of the workload.
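In essence, the setup looks like this (a simplified sketch rather than the exact code from our library, with error checking omitted; the remaining level-1 event names are assumed to follow the same perf:: naming):

```c
#include <stdio.h>
#include <papi.h>

int main(void)
{
    int evset = PAPI_NULL;
    long long values[5];

    PAPI_library_init(PAPI_VER_CURRENT);
    PAPI_create_eventset(&evset);

    /* slots is added first so that it acts as the group leader. */
    PAPI_add_named_event(evset, "perf::slots");
    PAPI_add_named_event(evset, "perf::topdown-retiring");
    PAPI_add_named_event(evset, "perf::topdown-bad-spec");
    PAPI_add_named_event(evset, "perf::topdown-fe-bound");
    PAPI_add_named_event(evset, "perf::topdown-be-bound");

    PAPI_start(evset);
    /* ... workload under measurement ... */
    PAPI_read(evset, values);

    for (int i = 0; i < 5; i++)
        printf("values[%d] = %lld\n", i, values[i]);

    PAPI_stop(evset, values);
    return 0;
}
```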
Investigating further, I found that the perf utility seems to give reasonable results when polling these counters:

However, when attempting to read them through the papi_command_line utility, I get results similar to those I obtain when polling them manually through my library. For instance, trying to read the 5 events at once reports the following:

What's strange is that if I only poll perf::slots, the reported number seems to correspond to the lower part of the number reported when polling all of them at the same time:

This leads me to believe that this counter is perhaps reading extra data. Moreover, the 4 categories consistently give the same, rather large, value through the utility. Are these reported values supposed to be percentages, if PAPI obtains them directly through the processed metrics given by perf? Or are they metrics that have to be combined with masks to obtain the percentages of each category?
Based on the event descriptions (for instance, for the perf::topdown-retiring event: "topdown useful slots retiring uops"), I expected integer values such as the ones reported by the perf stat utility, which I could then correlate to the slots value to obtain the percentages myself.