
[EBPF-357] Add GenericMap supporting batch lookup #21738

Merged
merged 50 commits into from
Jan 17, 2024

Conversation

gjulianm
Contributor

@gjulianm gjulianm commented Dec 22, 2023

What does this PR do?

Adds a new GenericMap type that provides more type safety for eBPF maps and supports batch lookups during iteration.

Motivation

  • Creating a generic interface that uses batch operations transparently when the kernel supports them; users must still explicitly opt into batch iteration.
  • Bumping the ebpf library version to include the new BatchCursor API (map: Introduce BatchCursor abstraction cilium/ebpf#1223).
  • Implementing the API calls properly to avoid extra memory allocations.
  • Defining a standard usage of unsafe.Pointer to avoid marshaling in calls to eBPF maps.

Once this PR is approved, I'll create a new one applying these changes to the rest of the code that uses eBPF maps (that work was previously part of this PR but has now been split out).

Additional Notes

  • This PR comes from a previous branch by Bryce (https://github.com/DataDog/datadog-agent/tree/bryce.kahle/generic-map) that I rebased and updated.
  • There are some functions that I'm still not sure where to place, such as
    func (g *GenericMap[K, V]) isPerCPU() bool {
    (the same goes for GenericMap itself); I would appreciate input/advice on that.
  • MapCleaner was using BatchLookup directly, so I replaced its usage and compared the benchmarks (the changes to MapCleaner and other parts of the code will be done in a separate PR):
  • To use the new batch iteration, call the IterateWithBatchSize function with a batch size of either zero (for a default value) or greater than zero.
goos: linux
goarch: arm64
pkg: github.com/DataDog/datadog-agent/pkg/ebpf
                                        │ 1-old-aea4a23309 │       4-allocfixes-9f005c06eb       │
                                        │      sec/op      │   sec/op     vs base                │
BatchCleaner1000Entries10PerBatch-4           1.463m ± 17%   1.314m ± 6%  -10.21% (p=0.004 n=10)
BatchCleaner1000Entries100PerBatch-4          1.306m ± 22%   1.279m ± 1%   -2.04% (p=0.000 n=10)
BatchCleaner10000Entries100PerBatch-4         13.12m ±  2%   13.07m ± 1%        ~ (p=0.063 n=10)
BatchCleaner10000Entries1000PerBatch-4        13.02m ±  1%   13.13m ± 1%   +0.84% (p=0.015 n=10)
BatchCleaner100000Entries100PerBatch-4        134.9m ± 28%   137.3m ± 2%        ~ (p=0.165 n=10)
BatchCleaner100000Entries1000PerBatch-4       133.2m ±  1%   136.3m ± 2%   +2.33% (p=0.000 n=10)
Cleaner1000Entries-4                          2.097m ±  1%   2.085m ± 0%   -0.59% (p=0.000 n=10)
Cleaner10000Entries-4                         21.20m ±  1%   20.98m ± 0%   -1.03% (p=0.000 n=10)
Cleaner100000Entries-4                        214.6m ±  2%   213.5m ± 0%   -0.53% (p=0.035 n=10)
geomean                                       15.64m         15.46m        -1.15%

                                        │ 1-old-aea4a23309 │       4-allocfixes-9f005c06eb        │
                                        │       B/op       │     B/op      vs base                │
BatchCleaner1000Entries10PerBatch-4           152.6Ki ± 0%   130.1Ki ± 0%  -14.74% (p=0.000 n=10)
BatchCleaner1000Entries100PerBatch-4          149.7Ki ± 0%   131.7Ki ± 0%  -11.99% (p=0.000 n=10)
BatchCleaner10000Entries100PerBatch-4         1.502Mi ± 0%   1.325Mi ± 0%  -11.78% (p=0.000 n=10)
BatchCleaner10000Entries1000PerBatch-4        1.496Mi ± 0%   1.339Mi ± 0%  -10.48% (p=0.000 n=10)
BatchCleaner100000Entries100PerBatch-4        15.66Mi ± 0%   13.88Mi ± 0%  -11.34% (p=0.000 n=10)
BatchCleaner100000Entries1000PerBatch-4       15.47Mi ± 0%   13.90Mi ± 0%  -10.14% (p=0.000 n=10)
Cleaner1000Entries-4                          184.5Ki ± 0%   129.9Ki ± 0%  -29.59% (p=0.000 n=10)
Cleaner10000Entries-4                         1.857Mi ± 0%   1.323Mi ± 0%  -28.76% (p=0.000 n=10)
Cleaner100000Entries-4                        19.22Mi ± 0%   13.88Mi ± 0%  -27.78% (p=0.000 n=10)
geomean                                       1.618Mi        1.330Mi       -17.82%

                                        │ 1-old-aea4a23309 │       4-allocfixes-9f005c06eb       │
                                        │    allocs/op     │  allocs/op   vs base                │
BatchCleaner1000Entries10PerBatch-4            5.739k ± 0%   5.139k ± 0%  -10.46% (p=0.000 n=10)
BatchCleaner1000Entries100PerBatch-4           5.196k ± 0%   5.139k ± 0%   -1.10% (p=0.000 n=10)
BatchCleaner10000Entries100PerBatch-4          55.24k ± 0%   54.65k ± 0%   -1.08% (p=0.000 n=10)
BatchCleaner10000Entries1000PerBatch-4         54.71k ± 0%   54.65k ± 0%   -0.11% (p=0.000 n=10)
BatchCleaner100000Entries100PerBatch-4         555.7k ± 0%   549.7k ± 0%   -1.08% (p=0.000 n=10)
BatchCleaner100000Entries1000PerBatch-4        550.3k ± 0%   549.7k ± 0%   -0.11% (p=0.000 n=10)
Cleaner1000Entries-4                           8.136k ± 0%   5.140k ± 0%  -36.82% (p=0.000 n=10)
Cleaner10000Entries-4                          84.64k ± 0%   54.65k ± 0%  -35.44% (p=0.000 n=10)
Cleaner100000Entries-4                         849.7k ± 0%   549.7k ± 0%  -35.31% (p=0.000 n=10)
geomean                                        63.22k        53.64k       -15.14%

Several things to note here: first, speed is maintained, as expected. Memory usage in single-item iterations is reduced thanks to an update in the cilium/ebpf library (cilium/ebpf@f0d238d) that removed allocations in MapIterator.Next. Memory usage in batch operations is reduced too, due to fewer copies being made in BatchLookup.

While in the map_cleaner tests the improvement doesn't look as dramatic (mainly because the bulk of the allocations in the benchmark comes from other functions, see this profiling output), I did some tests and found that in my initial implementation of the iterator (which was practically identical to the one in map_cleaner), the number of allocations grew with each call to BatchLookup. After some profiling, I saw that Go makes a copy of the keys/values slice headers (allocated once per iterator) to convert them from type []K to type any. I am not exactly sure why this behavior happens, but I checked the tests in cilium/ebpf that ensure there is no allocation per call to BatchLookup, and the main difference is that they convert the output buffers to type any up front. After making that change (i.e., creating a "view" of the key/value buffers) I managed to bring the allocations down to 8, independent of the batch size and number of batches (controlled by this test).

Possible Drawbacks / Trade-offs

Users might not know that they're iterating with batches and therefore run into problems caused by that. I've tried to make it as transparent as possible and to handle the different cases without user interaction, but something could still be wrong.

In any case, there's an option to force item-by-item iteration without batches. This is useful in certain cases where the batch lookup calls don't work properly (for example, with the OOMKiller probe, tests were failing in batch mode, although it seemed to be an error in the underlying cilium/ebpf code) and one needs to force iteration without batches.

I've also reviewed the usage of unsafe.Pointer to avoid marshaling and copies as much as possible. I think it's correct based on the usage I've seen in the code. I would have liked a unit test ensuring that no marshaling is done, but didn't find an immediate way to write one.

Describe how to test/QA your changes

  • Unit tests and benchmarks included

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change has a major impact on the code base, impacts multiple teams, or changes important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a releasenote.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided, except when the qa/skip-qa label is applied together with the required qa/done or qa/no-code-change label.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

@gjulianm gjulianm added changelog/no-changelog component/system-probe [deprecated] qa/skip-qa - use other qa/ labels [DEPRECATED] Please use qa/done or qa/no-code-change to skip creating a QA card team/ebpf-platform qa/done Skip QA week as QA was done before merge and regressions are covered by tests labels Dec 22, 2023
@gjulianm gjulianm added this to the 7.51.0 milestone Dec 22, 2023
pr-commenter bot commented Dec 22, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: ff46f4a9-1aae-408f-b2ad-d20a4ca0b264
Baseline: aa0ba32
Comparison: ec4621f
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

perf  experiment         goal                Δ mean %  Δ mean % CI
➖    file_to_blackhole  % cpu utilization   +1.66     [-4.97, +8.29]
➖    file_tree          memory utilization  +0.74     [+0.64, +0.85]
➖    idle               memory utilization  +0.23     [+0.20, +0.26]

Fine details of change detection per experiment

perf  experiment                               goal                Δ mean %  Δ mean % CI
➖    file_to_blackhole                        % cpu utilization   +1.66     [-4.97, +8.29]
➖    file_tree                                memory utilization  +0.74     [+0.64, +0.85]
➖    process_agent_real_time_mode             memory utilization  +0.25     [+0.22, +0.28]
➖    idle                                     memory utilization  +0.23     [+0.20, +0.26]
➖    trace_agent_msgpack                      ingress throughput  +0.05     [+0.03, +0.07]
➖    uds_dogstatsd_to_api                     ingress throughput  +0.00     [-0.03, +0.03]
➖    tcp_dd_logs_filter_exclude               ingress throughput  -0.00     [-0.06, +0.06]
➖    trace_agent_json                         ingress throughput  -0.03     [-0.07, -0.00]
➖    process_agent_standard_check             memory utilization  -0.05     [-0.10, +0.01]
➖    tcp_syslog_to_blackhole                  ingress throughput  -0.25     [-0.32, -0.19]
➖    process_agent_standard_check_with_stats  memory utilization  -0.47     [-0.52, -0.42]
➖    otel_to_otel_logs                        ingress throughput  -1.66     [-2.39, -0.93]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@gjulianm gjulianm marked this pull request as ready for review December 27, 2023 19:39
@gjulianm gjulianm requested review from a team as code owners December 27, 2023 19:39
Contributor

@val06 val06 left a comment
initial review

pkg/ebpf/generic_map.go (6 resolved review comments on outdated code)
Contributor

@val06 val06 left a comment

cred

pkg/ebpf/generic_map.go (resolved review comment on outdated code)

// Important! If we pass here g.keys/g.values, Go will create a copy of the slice instance
// and will generate extra allocations. I am not entirely sure why it is doing that.
g.currentBatchSize, g.err = g.m.BatchLookup(&g.cursor, g.keysCopy, g.valuesCopy, nil)
Contributor

I think it is fine to provide the original keys and values; a slice is a very slim struct itself, just holding a pointer to the underlying buffer. The reason Go allocates when it is passed by value is that slices are immutable in Go.

Moreover, have you seen this comment in cilium/ebpf for the BatchLookup function?

Contributor

Also, I am a bit concerned about the next lines in that comment.

Contributor Author

Yes, the reason I did the copy first was mostly to have 0 allocations per iteration; Bryce told me to be careful with that. It's true that it's not much, but I think this approach gives better performance and memory usage (only one copy is done instead of one per batch). Regarding the first comment, I think the tests cover all the cases; it's true that if you pass a pointer (e.g., &g.keysCopy) it complains about it.

For the second comment, it is indeed a little weird. For example, a batch might return fewer items than requested even if it's not the last one. It's also unintuitive that the return code for the last batch is ErrKeyNotExist, although in my tests I've seen that this consistently happens only for hash maps; array maps don't always return ErrKeyNotExist for the last batch, sometimes they just return 0 elements.

Reviewing both the code of the library and the usage in code, I think the tests cover the use cases we have in the agent (I just uploaded a test for the Array case).

Contributor

I was more referring to:

// Warning: This API is not very safe to use as the kernel implementation for
// batching relies on the user to be aware of subtle details with regarding to
// different map type implementations.

rather than return code

Contributor Author

@gjulianm gjulianm Jan 2, 2024

I assumed the return code is what they were referring to, as that was the only difference I noticed while working with it and looking at the kernel code. However, we could change the implementation so that single-item iteration is the default; batch iteration would then have to be selected explicitly, and the people using it could test that it works for their use case.

Member

@brycekahle brycekahle Jan 3, 2024

It looks like you can do this and avoid the allocations. You can also get rid of keysCopy and valuesCopy.

Suggested change
g.currentBatchSize, g.err = g.m.BatchLookup(&g.cursor, g.keysCopy, g.valuesCopy, nil)
g.currentBatchSize, g.err = g.m.BatchLookup(&g.cursor, any(g.keys), any(g.values), nil)

Contributor Author

Changing that breaks this test, which checks that there are no extra allocations per iteration. We could skip that test until cilium/ebpf fixes this behavior in cilium/ebpf#1290? I can also document this in the code, keep an eye on that issue, and send a PR when they fix it.

pkg/ebpf/generic_map.go (5 resolved review comments on outdated code)
Comment on lines 211 to 224
if itops.BatchSize > int(g.m.MaxEntries()) {
itops.BatchSize = int(g.m.MaxEntries())
}
Contributor

we should have a warn here

Contributor

or even better - fail the operation

Contributor Author

Personally I wouldn't fail, right now the default is to iterate with batches (although that could change) and depending on the batch and map sizes we could be failing a lot of iterations by default.

Contributor

So a warning log (once) should be sufficient.
My concern is that a naive user will try to increase the BatchSize field in IteratorOptions, and once they pass a value larger than the max they won't know, but also won't see any impact.

Contributor

Or at least document the boundaries in the function's documentation (for BatchSize == 0 and BatchSize > MaxEntries)

pkg/ebpf/generic_map.go (6 resolved review comments on outdated code)
@L3n41c L3n41c removed the request for review from a team January 16, 2024 12:12
@gjulianm
Contributor Author

/merge

dd-devflow bot commented Jan 17, 2024

🚂 MergeQueue

This merge request is not mergeable yet because of pending checks or missing approvals. It will be added to the queue as soon as the checks pass and/or it gets the approvals.
Note: if you pushed new commits since the last approval, you may need additional approval.
You can remove it from the waiting list with /remove command.

you can cancel this operation by commenting your pull request with /merge -c!

dd-devflow bot commented Jan 17, 2024

🚂 MergeQueue

Added to the queue.

There are 2 builds ahead of this PR! (estimated merge in less than 44m)

you can cancel this operation by commenting your pull request with /merge -c!

@dd-mergequeue dd-mergequeue bot merged commit 6761c2f into main Jan 17, 2024
214 of 228 checks passed
@dd-mergequeue dd-mergequeue bot deleted the guillermo.julian/generic-map branch January 17, 2024 19:37
Labels
7.51.0-drop changelog/no-changelog component/system-probe [deprecated] qa/skip-qa - use other qa/ labels [DEPRECATED] Please use qa/done or qa/no-code-change to skip creating a QA card mergequeue-status: done qa/done Skip QA week as QA was done before merge and regressions are covered by tests team/ebpf-platform
Projects
None yet

5 participants