Off CPU profiling #196

Open · wants to merge 10 commits into main
Conversation

@florianl (Contributor) commented Oct 20, 2024

This is the code that backs #144. It can be reused to add features like those requested in #33 and can therefore serve as an alternative to #192.

The idea that enables off-CPU profiling is that perf event and kprobe eBPF programs are quite similar and can be converted into one another. Combined with the dynamic rewriting of tail call maps, this allows existing eBPF programs and concepts to be reused.
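
As a rough illustration of that idea (a minimal sketch, not the code in this PR — the entry-point names, the kprobe hook point, and the TraceOrigin values TRACE_ON_CPU/TRACE_OFF_CPU are assumptions): both program types end up operating on a struct pt_regs, so a shared collect_trace() helper in tracemgmt.h can serve an on-CPU perf event hook and an off-CPU kprobe hook alike.

#include "tracemgmt.h" // assumed to declare collect_trace() and TraceOrigin

// On-CPU entry point: perf event programs receive pt_regs wrapped in
// struct bpf_perf_event_data.
SEC("perf_event")
int on_cpu_entry(struct bpf_perf_event_data *ctx)
{
  u64 id = bpf_get_current_pid_tgid();
  return collect_trace((struct pt_regs *)&ctx->regs, TRACE_ON_CPU,
                       id >> 32, (u32)id, 0);
}

// Off-CPU entry point: kprobe programs receive pt_regs directly, so the
// same unwinding path can be reused. The hook point and the duration
// lookup are illustrative only.
SEC("kprobe/finish_task_switch")
int off_cpu_entry(struct pt_regs *ctx)
{
  u64 id = bpf_get_current_pid_tgid();
  u64 off_cpu_time = 0; // would come from a timestamp stored at sched-out
  return collect_trace(ctx, TRACE_OFF_CPU, id >> 32, (u32)id, off_cpu_time);
}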

This proposal adds the new flag '-off-cpu-threshold', which enables off-CPU profiling and attaches the two additional hooks, as discussed in Option B in #144.

UPDATE:
To help review this PR, a special version of devfiler was created that can differentiate between on- and off-CPU profiling information.
This special devfiler version can be fetched via:

curl -L -H 'Authorization: 2479ccabdb13dde3' -o 'devfiler-v0.10.0-samplekind.tar.gz' https://upload.elastic.co/d/c5133bf71cac1146d4569bee3f4cc3442266956185b0be70ec9a27568bead7c9

@@ -4,14 +4,6 @@
#include "tracemgmt.h"
#include "stackdeltatypes.h"

#ifndef __USER32_CS
Contributor Author

This got moved to tracemgmt.h to make it available to other entry points.

@@ -602,151 +594,6 @@ static ErrorCode unwind_one_frame(u64 pid, u32 frame_idx, struct UnwindState *st
#error unsupported architecture
#endif

// Initialize state from pt_regs
Contributor Author

This got moved to tracemgmt.h to make it available to other entry points.

#endif // TESTING_COREDUMP

static inline
int collect_trace(struct pt_regs *ctx, TraceOrigin origin, u32 pid, u32 tid, u64 off_cpu_time) {
Contributor Author

With collect_trace() moving to tracemgmt.h, its arguments changed. origin was added to keep track of what triggered the stack unwinding. pid and tid were added because the off-CPU entry point filters on these parameters, so it didn't make sense to call the helper for this information multiple times. Finally, off_cpu_time was added to forward how long a trace was off CPU and store that information in the Trace struct.

func initializeMapsAndPrograms(includeTracers types.IncludedTracers,
kernelSymbols *libpf.SymbolMap, filterErrorFrames bool, mapScaleFactor int,
kernelVersionCheck bool, debugTracer bool, bpfVerifierLogLevel uint32) (
func initializeMapsAndPrograms(kernelSymbols *libpf.SymbolMap, cfg *Config) (
Contributor Author

Since one more value from *Config would have become an argument to initializeMapsAndPrograms(), it made sense to just pass *Config rather than every single field by itself.

tracer/tracer.go (outdated review thread, resolved)
@umanwizard (Contributor) left a comment

I'll give my thoughts despite not being a maintainer.

Comment on lines +39 to +41
if (bpf_get_prandom_u32()%OFF_CPU_THRESHOLD_MAX > syscfg->off_cpu_threshold) {
return 0;
}
Contributor

I'm not sure the numbers will be statistically valid given this check. Shouldn't the probability of sampling be based on the amount of time spent off CPU? (Similar to how, e.g., the probability of a heap-allocation sampler recording a stack is determined by the number of bytes allocated.)

Imagine if the off-CPU time of some process is dominated by one call to read that hangs forever (e.g. due to networking misconfiguration). Unless you get lucky enough that you hit this threshold on that call, you will not see it reflected in the profile.

Contributor Author

Imagine if the off-CPU time of some process is dominated by one call to read that hangs forever (e.g. due to networking misconfiguration). Unless you get lucky enough that you hit this threshold on that call, you will not see it reflected in the profile.

I think you describe a good example of the general risk of a sampling approach. In #144 a sampling approach is proposed, as handling every scheduling event in both hooks poses too high a risk of overloading the system. If the sampling were based on the off-CPU time, there would also be management overhead to correctly size the eBPF map that communicates the start of the off-CPU period to its end. So yes, using a sampling approach to drop some events in the first hook will miss some events.
But similar to regular sampling-based on-CPU profiling, if something is misconfigured, there will not be a single event but many, and so the issue will become visible to the profiler. The same applies to off-CPU profiling, I think. Overall, sampling-based profiling provides valuable insights into the hot paths of the system for (expected or unexpected) things that happen often. Sampling is not an approach that catches every event; a tool that did would qualify as a debugging or security utility, and sampling does not satisfy those requirements.

@umanwizard (Contributor) Oct 24, 2024

I completely agree that the feature must use sampling. However, the strategy used for sampling makes a difference.

E.g., see here where a similar analysis was done for jemalloc: https://github.com/jemalloc/jemalloc/blob/2a693b83d2d1631b6a856d178125e1c47c12add9/doc_internal/PROFILING_INTERNALS.md#L4

The authors concluded that they needed to sample per-byte, rather than per-allocation, because sampling per-allocation increases the variance significantly in a scenario where the allocations can have significantly different sizes.

The corresponding approach here would be to sample per-nanosecond, rather than per-event.

The downside is that you would then have to call bpf_ktime_get_ns unconditionally at both the begin and end hooks (to know how long the task was parked before applying the sampling logic), whereas now the bpf_ktime_get_ns call is skipped when the sampling doesn't hit. I'm not sure what the overhead of that would be.
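
For concreteness, a minimal sketch of what duration-weighted sampling could look like (hook points, map sizing, and SAMPLE_PERIOD_NS are made-up values, not part of this PR; the usual vmlinux.h/bpf_helpers.h includes are assumed):

struct {
  __uint(type, BPF_MAP_TYPE_LRU_HASH);
  __uint(max_entries, 65536); // made-up size; must fit all parked tasks
  __type(key, u64);           // pid_tgid
  __type(value, u64);         // timestamp at sched-out, in ns
} off_cpu_start SEC(".maps");

// First hook: the task is scheduled out; store the timestamp unconditionally.
static inline void record_sched_out(void)
{
  u64 id = bpf_get_current_pid_tgid();
  u64 ts = bpf_ktime_get_ns();
  bpf_map_update_elem(&off_cpu_start, &id, &ts, BPF_ANY);
}

// Second hook: the task is scheduled back in. Sample with probability
// proportional to the off-CPU duration: one expected sample per
// SAMPLE_PERIOD_NS nanoseconds spent off CPU.
#define SAMPLE_PERIOD_NS (100 * 1000 * 1000) // illustrative: 100ms

static inline bool should_sample(u64 off_cpu_ns)
{
  // P(sample) = min(1, off_cpu_ns / SAMPLE_PERIOD_NS)
  if (off_cpu_ns >= SAMPLE_PERIOD_NS)
    return true;
  return (bpf_get_prandom_u32() % SAMPLE_PERIOD_NS) < off_cpu_ns;
}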

I will try to put together a test based on your code to see how much this affects profiling variance in the real world.

Contributor Author

Thanks for the link to the research that went into the sampling strategy for jemalloc.

From my perspective, there are major differences between the purposes of sampling for off-CPU profiling and for jemalloc that explain the differences in the sampling strategies.
jemalloc considers small allocations "cheaper", whereas larger allocations require more effort. For off-CPU profiling, the effort to take computing resources from a task and later reassign them is always the same; the time a task is off CPU makes no difference. So jemalloc's sampling strategy takes the effort into account, while for off-CPU profiling the effort is constant.

The general event-based sampling approach also comes with the advantage that it lets users predict the direct storage impact of off-CPU profiling: the storage requirements are linearly correlated with the configured value of the CLI flag -off-cpu-threshold. Changing the sampling strategy would make storage predictions more complex and harder.

We also cannot make assumptions about the environment where off-CPU profiling will be used. Will the environment be dominated by short interrupts, with small off-CPU times for tasks, or will there be longer off-CPU times, e.g. if spinning disks are used instead of faster storage? The general event-based sampling approach does not distinguish between these two kinds.

Switching to sampling on the off-CPU time would also introduce management overhead. First, eBPF maps would need to be scaled up, resulting in a larger memory requirement for the eBPF profiling agent; second, there would be more computing overhead. To make a fair sampling decision on the off-CPU time, we need to make sure that every task handled by the scheduler fits into the eBPF maps. The additional computing overhead comes from calling bpf_ktime_get_ns twice to get the off-CPU time for each task before making a sampling decision on it, plus more eBPF map updates and lookups to forward the off-CPU time from the first to the second hook.

With #144 a general sampling approach was accepted, which is what the current state of this code proposal implements.
Maybe @open-telemetry/profiling-maintainers can provide their views on the different sampling strategies and provide guidance for moving forward on this topic.

Contributor

Thanks for the detailed response. Your points about the differences in constraints between jemalloc and opentelemetry-ebpf-profiler are compelling.

I still think it is worth testing different sampling strategies in the future to see how much they affect variance in common scenarios, and then deciding whether the benefits are worth the downsides (e.g. less predictable storage usage).

But you've convinced me that this is a good enough default that it's worth starting with this.

Contributor Author

friendly ping for feedback @felixge - or can this conversation be resolved?

Contributor Author

friendly ping for feedback @felixge - or can this conversation be resolved?

Contributor

Although the sampling issue has been discussed before, I must reiterate that the current head-based sampling method has some drawbacks. These include challenges in ensuring the integrity of the sampling chain and occasional problems with high latency when localizing issues for users. A client-based tail sampling approach might be a better-balanced solution. You can refer to this paper for more details: The Benefit of Hindsight: Tracing Edge-Cases in Distributed Systems.

Contributor Author

The Benefit of Hindsight is definitely an interesting read on retroactive sampling in distributed systems. Still, I believe we should proceed with the current head sampling approach in this PR.

The primary focus here is implementing off-CPU profiling as a technique, with functionality that also enables other use cases such as on-demand profiling data via kprobes. The current implementation offers practical benefits:

  • Simpler visualization and understanding of the sampling data
  • Lower CPU overhead (personal benchmarks showed retroactive sampling increasing CPU usage by ~8% and memory footprint by ~500MB)
  • More straightforward implementation with less complexity

While retroactive sampling has its merits, defining precise conditions for it in advance is challenging. The current head sampling provides a general approach, and additional filtering during data ingestion is still possible.

Rather than blocking this PR on sampling strategy discussions, I suggest we move forward with the current head sampling implementation and treat sampling improvements and experiments as follow-up enhancements. This way we can get the core off-CPU profiling functionality in place while keeping the door open for future sampling strategy optimizations. Thoughts?

Contributor

I support moving forward with this approach. We can add additional off-CPU strategies in the future. Since every strategy will have its pros and cons depending on the context, it is IMO more important to move on and debate different strategies asynchronously.
Additional future strategies for collecting off-CPU data would allow users to decide which one is best for their use case.

tracer/tracer.go, cli_flags.go (outdated review threads, resolved)
@florianl (Contributor Author)

Rebased proposed changes on current main and resolved merge conflicts.

cli_flags.go, libpf/symbol.go, support/ebpf/off_cpu.ebpf.c, support/ebpf/tracemgmt.h (outdated review threads, resolved)
} TraceOrigin;

// OFF_CPU_THRESHOLD_MAX defines the maximum threshold.
#define OFF_CPU_THRESHOLD_MAX 1000
Member

This should probably be 100, which is intuitive percentage-wise and also consistent with the probabilistic profiling setting:

Suggested change
#define OFF_CPU_THRESHOLD_MAX 1000
#define OFF_CPU_THRESHOLD_MAX 100

Contributor Author

1000 was chosen over 100 to provide more fine-grained control over the off-CPU threshold. On a single-core system, using regular percentages makes sense, I think. But the more cores are in use, the more data will be generated, and so I think it makes sense to provide the option to reduce storage and do off-CPU profiling for less than 1% of all scheduler task switches.
Let me know if 1% of all scheduler task switches is the minimum that should be configurable.
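
For a rough feel of the granularity at stake (based on the sampling check quoted earlier in this review): an event is processed when bpf_get_prandom_u32() % OFF_CPU_THRESHOLD_MAX does not exceed the configured threshold t, i.e. with a probability of roughly t / OFF_CPU_THRESHOLD_MAX:

  OFF_CPU_THRESHOLD_MAX = 1000, t = 1   ->  ~0.1% of scheduler task switches
  OFF_CPU_THRESHOLD_MAX = 1000, t = 10  ->  ~1%
  OFF_CPU_THRESHOLD_MAX = 100,  t = 1   ->  ~1% (the smallest non-zero rate)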

Member

Need to check, by running some test workloads, whether hitting on less than 1% of all scheduler task switches provides value given the current sampling strategy.

support/ebpf/types.h (outdated review thread, resolved)

@florianl florianl marked this pull request as ready for review November 1, 2024 12:51
@florianl florianl requested review from a team as code owners November 1, 2024 12:51
@@ -226,20 +240,25 @@ func (r *OTLPReporter) ReportTraceEvent(trace *libpf.Trace, meta *TraceEventMeta
containerID: containerID,
}

if events, exists := (*traceEventsMap)[key]; exists {
traceEventsMap := r.traceEvents.WLock()
Contributor Author

There is no need to hold the lock while looking up the Cgroupv2 ID and creating the key, so I moved acquiring the lock to this later point.

reporter/otlp_reporter.go (outdated review thread, resolved)
@florianl (Contributor Author)

Needed to force-push to the branch to resolve merge conflicts. Looking forward to getting feedback.

@florianl (Contributor Author)

Needed to force-push to the branch to resolve merge conflicts. Looking forward to getting feedback.

@florianl (Contributor Author) commented Dec 4, 2024

friendly ping @open-telemetry/ebpf-profiler-approvers for feedback.


@florianl florianl marked this pull request as draft December 10, 2024 13:05
@florianl florianl marked this pull request as ready for review December 11, 2024 11:56
@florianl (Contributor Author)

friendly ping @open-telemetry/ebpf-profiler-approvers for feedback.

@christos68k (Member) left a comment

Did another pass

cli_flags.go (outdated review threads, resolved)
@@ -61,6 +62,11 @@ var (
"If zero, monotonic-realtime clock sync will be performed once, " +
"on agent startup, but not periodically."
sendErrorFramesHelp = "Send error frames (devfiler only, breaks Kibana)"
offCPUThresholdHelp = fmt.Sprintf("If set to a value between 1 and %d will enable "+
"off-cpu profiling: Every time an off-cpu entry point is hit, a random number between "+
Member

Maybe more informative to be more specific:

Suggested change
"off-cpu profiling: Every time an off-cpu entry point is hit, a random number between "+
"off-cpu profiling: Every time sched_switch is called in the kernel, a random number between "+

Contributor Author

I'm not sure this implementation detail around sched_switch should be part of a help message. As users cannot change the entry point away from sched_switch, I think this implementation detail should not matter.

host/host.go, tracer/tracer.go (outdated review threads, resolved)
return insNos
}

// loadKProbeUnwinders reuses large parts of loadPerfUnwinders. By default all eBPF programs
@christos68k (Member) Dec 14, 2024

There's a lot of ugliness here that's not easy to immediately understand. Ideally we'd get rid of it all and have tail calls that lexically (in the source) use perf_progs or kprobe_progs without this sort of dynamic rewriting. This would also get rid of the dummy in eBPF.

florianl and others added 9 commits December 16, 2024 08:47

Outstanding work:
- [ ] Handle off CPU traces in the reporter package
- [ ] Handle off CPU traces in the user space side

Signed-off-by: Florian Lehner <dev@der-flo.net>
@florianl (Contributor Author)

Needed to force-push to the branch to resolve merge conflicts. Looking forward to getting feedback.

Signed-off-by: Florian Lehner <florian.lehner@elastic.co>