Off CPU profiling #196
base: main
Conversation
@@ -4,14 +4,6 @@
#include "tracemgmt.h"
#include "stackdeltatypes.h"

#ifndef __USER32_CS
This got moved to tracemgmt.h to make it available to other entry points.
@@ -602,151 +594,6 @@ static ErrorCode unwind_one_frame(u64 pid, u32 frame_idx, struct UnwindState *st
#error unsupported architecture
#endif

// Initialize state from pt_regs
This got moved to tracemgmt.h to make it available to other entry points.
support/ebpf/tracemgmt.h (outdated)
#endif // TESTING_COREDUMP

static inline
int collect_trace(struct pt_regs *ctx, TraceOrigin origin, u32 pid, u32 tid, u64 off_cpu_time) {
With the move of collect_trace() to tracemgmt.h, its arguments changed. origin was added to keep track of what triggered the stack unwinding. pid and tid were also added as arguments, because the entry point for off-CPU profiling does some filtering on these parameters, so it didn't make sense to call the helper for this information multiple times. And finally, off_cpu_time was added as an argument to forward how long a trace was off CPU and store that information in the Trace struct.
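To illustrate the new signature, here is a minimal, hypothetical sketch of a sched-in hook that forwards the off-CPU duration to collect_trace(); the attach point, the off_cpu_start map, and the TRACE_OFF_CPU origin value are assumptions made for the sketch, not the PR's actual names:

// Assumed handoff map: pid_tgid -> sched-out timestamp written by the first hook.
struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 65536);
    __type(key, u64);
    __type(value, u64);
} off_cpu_start SEC(".maps");

// Hypothetical second hook: the task is scheduled back in, so the off-CPU
// duration is known and can be forwarded via the new arguments.
SEC("kprobe/hypothetical_sched_in")
int off_cpu_sched_in(struct pt_regs *ctx) {
    u64 id = bpf_get_current_pid_tgid();
    u32 pid = id >> 32;
    u32 tid = (u32)id;

    u64 *start_ts = bpf_map_lookup_elem(&off_cpu_start, &id);
    if (!start_ts)
        return 0; // this task was not sampled when it was scheduled out

    u64 off_cpu_time = bpf_ktime_get_ns() - *start_ts;
    bpf_map_delete_elem(&off_cpu_start, &id);

    return collect_trace(ctx, TRACE_OFF_CPU, pid, tid, off_cpu_time);
}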
func initializeMapsAndPrograms(includeTracers types.IncludedTracers,
kernelSymbols *libpf.SymbolMap, filterErrorFrames bool, mapScaleFactor int,
kernelVersionCheck bool, debugTracer bool, bpfVerifierLogLevel uint32) (
func initializeMapsAndPrograms(kernelSymbols *libpf.SymbolMap, cfg *Config) (
With another argument from *Config being added to initializeMapsAndPrograms(), it made sense to just pass *Config rather than every single argument by itself.
I'll give my thoughts despite not being a maintainer.
if (bpf_get_prandom_u32()%OFF_CPU_THRESHOLD_MAX > syscfg->off_cpu_threshold) {
    return 0;
}
I'm not sure the numbers will be statistically valid given this check. Shouldn't the probability of sampling be based on the amount of time spent? (Similar to how e.g. the probability of heap allocation samplers recording a stack is determined by the amount of bytes allocated).
Imagine if the off-CPU time of some process is dominated by one call to read that hangs forever (e.g. due to networking misconfiguration). Unless you get lucky enough that you hit this threshold on that call, you will not see it reflected in the profile.
Imagine if the off-CPU time of some process is dominated by one call to read that hangs forever (e.g. due to networking misconfiguration). Unless you get lucky enough that you hit this threshold on that call, you will not see it reflected in the profile.
I think you describe a good example of the general risk of a sampling approach. In #144 a sampling approach is proposed, because handling every scheduling event in both hooks carries too high a risk of overloading the system. If the sampling were done on the off-CPU time, there would also be management overhead to correctly size the eBPF map that carries the start of the off-CPU time over to the end of the off-CPU time. So yes, using a sampling approach that drops some events in the first hook will miss some events.
But similar to regular sampling-based on-CPU profiling, if something is misconfigured, there will not be a single event but many, and so the issue will become visible to the profiler. The same applies to off-CPU profiling, I think. Overall, sampling-based profiling provides valuable insights into the hot paths of the system for (expected or unexpected) things that happen often. Sampling is not an approach that catches every event - otherwise it would qualify as a debugging or security utility, and sampling does not satisfy those requirements.
I completely agree that the feature must use sampling. However the strategy used for sampling makes a difference.
E.g., see here where a similar analysis was done for jemalloc: https://github.com/jemalloc/jemalloc/blob/2a693b83d2d1631b6a856d178125e1c47c12add9/doc_internal/PROFILING_INTERNALS.md#L4
The authors concluded that they needed to sample per-byte, rather than per-allocation, because sampling per-allocation increases the variance significantly in a scenario where the allocations can have significantly different sizes.
The corresponding approach here would be to sample per-nanosecond, rather than per-event.
The downside is that you would then have to call bpf_ktime_get_ns unconditionally at both the beginning and the end (to know how long the task was parked for and then do your sampling logic), whereas now we skip calling bpf_ktime_get_ns when the sampling doesn't hit. I'm not sure what the overhead of that would be.
I will try to put together a test based on your code to see how much this affects profiling variance in the real world.
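For illustration only (not what this PR implements), a duration-proportional decision could look roughly like the sketch below; SAMPLE_PERIOD_NS is a made-up tuning constant, and the trade-off is that bpf_ktime_get_ns then has to run for every sched-out/sched-in pair:

// Hypothetical per-nanosecond sampling: on average one sample per
// SAMPLE_PERIOD_NS of off-CPU time, independent of how that time is split
// across individual scheduling events.
#define SAMPLE_PERIOD_NS (10 * 1000 * 1000ULL) // assumed 10ms sampling period

static __always_inline bool should_sample_off_cpu(u64 duration_ns) {
    if (duration_ns >= SAMPLE_PERIOD_NS)
        return true;
    // P(sample) = duration_ns / SAMPLE_PERIOD_NS, approximated with a
    // 32-bit random number; duration_ns < SAMPLE_PERIOD_NS here, so the
    // shift below cannot overflow.
    u64 threshold = (duration_ns << 32) / SAMPLE_PERIOD_NS;
    return (u64)bpf_get_prandom_u32() < threshold;
}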
Thanks for the link to the research that went into the sampling strategy for jemalloc.
From my perspective, there are major differences in the purpose of sampling between off-CPU profiling and jemalloc that lead to the differences in the sampling strategies.
jemalloc considers small allocations “cheaper”, whereas larger allocations result in more effort. For off-CPU profiling, the effort to take computing resources from a task and later reassign them is always the same, and the time a task is off CPU does not make a difference. So the sampling strategy of jemalloc takes the effort into account, while for off-CPU profiling the effort is constant.
The general event-based sampling approach also comes with the advantage that it lets users predict the direct storage impact that off-CPU profiling will have. The storage requirements are linearly correlated with the configuration of the CLI flag -off-cpu-threshold. Changing the sampling strategy would make storage predictions more complex.
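For example, with the current event-based approach and OFF_CPU_THRESHOLD_MAX at 1000, setting -off-cpu-threshold=10 samples roughly 1% of all scheduler task switches, so the expected number of recorded off-CPU traces scales directly with the flag value and the system's task-switch rate.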
We also cannot make assumptions about the environment in which off-CPU profiling will be used. Will the environment be dominated by short interrupts, with small off-CPU times for tasks, or will there be more long off-CPU times, e.g. if spinning disks are used instead of faster storage solutions? The general event-based sampling approach does not distinguish between these two kinds.
Switching to sampling on the off-CPU time would also introduce management overhead. First, eBPF maps need to be scaled larger, resulting in a larger memory requirement for the eBPF profiling agent, and second, there will be more computing overhead. To make a fair sampling decision on the off-CPU time we need to make sure that every task handled by the scheduler fits into the eBPF maps. The additional computing overhead comes from calling bpf_ktime_get_ns twice to get the off-CPU time for each task before making a sampling decision on it, and from more eBPF map updates and lookups to forward the off-CPU time information from the first to the second hook.
With #144 a general sampling approach was accepted, which is what the current state of this code proposal implements.
Maybe @open-telemetry/profiling-maintainers can provide their views on the different kinds of sampling strategies and provide guidance on how to move forward with this topic.
Thanks for the detailed response. Your points about the differences in constraints between jemalloc and opentelemetry-ebpf-profiler are compelling.
I still think in the future it is worth testing with different sampling strategies to see how much it affects variance in common scenarios and then making a call whether that's worth the downsides (e.g. less predictable storage usage).
But you've convinced me that this is a good enough default that it's worth starting with this.
friendly ping for feedback @felixge - or can this conversation be resolved?
friendly ping for feedback @felixge - or can this conversation be resolved?
Although the sampling issue has been discussed before, I must reiterate that the current head-based sampling method has some drawbacks. These include challenges in ensuring the integrity of the sampling chain and the occasional problem of high latency in user localization. A client-based tail sampling approach might be a better-balanced solution. You can refer to this paper for more details: The Benefit of Hindsight: Tracing Edge-Cases in Distributed Systems.
The Benefit of Hindsight is definitely an interesting read on retroactive sampling in distributed systems. But I still believe we should proceed with the current head-sampling approach in this PR.
The primary focus here is implementing off-CPU profiling as a technique and as functionality that also enables other use cases, like on-demand profiling data using kprobes, and the current implementation offers practical benefits:
- Simpler visualization and understanding of the sampling data
- Lower CPU overhead (personal benchmarks showed retroactive sampling increasing CPU usage by ~8% and memory footprint by ~500MB)
- More straightforward implementation with less complexity
While retroactive sampling has its merits, defining precise conditions for it in advance is challenging. The current head sampling provides a general approach and additional filtering during data ingestion is still possible.
Rather than blocking this PR on sampling strategy discussions, I suggest we move forward with the current head-sampling implementation and consider sampling improvements and experiments as follow-up enhancements. This way we can get the core off-CPU profiling functionality in place while keeping the door open for future sampling strategy optimizations. Thoughts?
I support moving forward with this approach. We can add additional off-CPU strategies in the future. Since every strategy will have its pros and cons depending on the context, it is IMO more important to move on and debate the different strategies asynchronously.
Additional future strategies for collecting off-CPU data would allow users to decide which one is best for their use case.
Rebased proposed changes on current main and resolved merge conflicts.
} TraceOrigin;

// OFF_CPU_THRESHOLD_MAX defines the maximum threshold.
#define OFF_CPU_THRESHOLD_MAX 1000
This should probably be 100 which is intuitive percentage-wise and also consistent with the probabilistic profiling setting:
#define OFF_CPU_THRESHOLD_MAX 1000
#define OFF_CPU_THRESHOLD_MAX 100
1000 is chosen over 100 to provide a more fine-grained way to configure the off-CPU threshold. On a single-core system, using regular percentages makes sense, I think. But the more cores are in use, the more data is generated, so I think it makes sense to offer the option to reduce storage and do off-CPU profiling for less than 1% of all scheduler task switches.
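For example, with the maximum at 1000, -off-cpu-threshold=1 samples roughly 0.1% of all scheduler task switches, whereas a percentage-based maximum of 100 would already make roughly 1% the finest configurable setting.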
Let me know if 1% of all scheduler task switches should be the minimum configurable value.
Need to check, by running some test workloads, whether hitting on less than 1% of all scheduler task switches provides value given the current sampling strategy.
reporter/otlp_reporter.go (outdated)
@@ -226,20 +240,25 @@ func (r *OTLPReporter) ReportTraceEvent(trace *libpf.Trace, meta *TraceEventMeta
containerID: containerID,
}

if events, exists := (*traceEventsMap)[key]; exists {
traceEventsMap := r.traceEvents.WLock()
No need to hold the lock while looking up the Cgroupv2 ID and creating the key, so I moved acquiring the lock to this later point.
Needed to force-push to the branch to resolve merge conflicts. Looking forward to getting feedback.
friendly ping @open-telemetry/ebpf-profiler-approvers for feedback.
friendly ping @open-telemetry/ebpf-profiler-approvers for feedback.
Did another pass
@@ -61,6 +62,11 @@ var (
"If zero, monotonic-realtime clock sync will be performed once, " +
"on agent startup, but not periodically."
sendErrorFramesHelp = "Send error frames (devfiler only, breaks Kibana)"
offCPUThresholdHelp = fmt.Sprintf("If set to a value between 1 and %d will enable "+
"off-cpu profiling: Every time an off-cpu entry point is hit, a random number between "+
Maybe more informative to be more specific:
"off-cpu profiling: Every time an off-cpu entry point is hit, a random number between "+ | |
"off-cpu profiling: Every time sched_switch is called in the kernel, a random number between "+ |
I'm not sure if this implementation detail around sched_switch should be part of a help message. As users cannot change sched_switch to a different entry point, I think this implementation detail should not matter.
return insNos
}

// loadKProbeUnwinders reuses large parts of loadPerfUnwinders. By default all eBPF programs
There's a lot of ugliness here that's not easy to immediately understand. Ideally we'd get rid of it all and have tail calls that lexically (in the source) use perf_progs or kprobe_progs without this sort of dynamic rewriting. This would also get rid of the dummy in eBPF.
This is the code that backs open-telemetry#144. It can be reused to add features like requested in open-telemetry#33 and therefore can be an alternative to open-telemetry#192. The idea that enables off CPU profiling is that perf event and kprobe eBPF programs are quite similar and can be converted. This allows, with the dynamic rewrite of tail call maps, the reuse of existing eBPF programs and concepts. This proposal adds the new flag '-off-cpu-threshold' that enables off CPU profiling and attaches the two additional hooks, as discussed in Option B in open-telemetry#144.
Outstanding work:
- [ ] Handle off CPU traces in the reporter package
- [ ] Handle off CPU traces in the user space side
Signed-off-by: Florian Lehner <dev@der-flo.net>
Needed to force-push to the branch to resolve merge conflicts. Looking forward to getting feedback.
This is the code that backs #144. It can be reused to add features like requested in #33 and therefore can be an alternative to #192.
The idea that enables off CPU profiling is that perf event and kprobe eBPF programs are quite similar and can be converted. This allows, with the dynamic rewrite of tail call maps, the reuse of existing eBPF programs and concepts.
This proposal adds the new flag '-off-cpu-threshold' that enables off CPU profiling and attaches the two additional hooks, as discussed in Option B in #144.
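As a rough, hypothetical sketch of the first of these two hooks (complementing the sched-in sketch earlier in this thread; the attach point and map name are assumptions, not the PR's actual code), the sampling decision from the quoted eBPF snippet is made at sched-out and the timestamp is handed over via a map:

// Hypothetical first hook at task sched-out: drop most scheduling events up
// front and remember the sched-out timestamp only for sampled tasks.
SEC("kprobe/hypothetical_sched_out")
int off_cpu_sched_out(struct pt_regs *ctx) {
    // syscfg is assumed to be looked up from the system-config map before
    // this point, as in the snippet quoted further up.
    if (bpf_get_prandom_u32() % OFF_CPU_THRESHOLD_MAX > syscfg->off_cpu_threshold)
        return 0;

    u64 id = bpf_get_current_pid_tgid();
    u64 ts = bpf_ktime_get_ns();
    // off_cpu_start is the assumed handoff map read by the sched-in hook.
    bpf_map_update_elem(&off_cpu_start, &id, &ts, BPF_ANY);
    return 0;
}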
UPDATE:
To help review this PR, a special version of devfiler was created that can differentiate between on- and off-CPU profiling information.
One can fetch this special devfiler version via: