[v24.2.x] audit: clamp audit client max parallelism #24148

Conversation

@pgellert (Contributor) commented Nov 15, 2024

Backport of PR #24137

Fixes #24144

Fixes: CORE-8260

The conflict was in a sanctioning-related test that was present on dev but not on v24.2.x.

This fixes a bug in the audit client where, if the cluster config value
`kafka_batch_max_bytes` is greater than `audit_client_max_buffer_size`,
the audit client ends up producing no messages at all and the audit log
buffers fill up.

The problem is that the division here could yield `max_concurrency=0`. In
debug mode this is caught by an assert inside
`ss::max_concurrent_for_each`, but in release mode standard-library
`assert`s are disabled, so the background fibre simply blocks waiting for
a semaphore unit that can never become available: the semaphore was
initialized with 0 units, and no tasks will ever release any units to it.

(cherry picked from commit add24d9)
@pgellert pgellert added this to the v24.2.x-next milestone Nov 15, 2024
@pgellert pgellert added the kind/backport PRs targeting a stable branch label Nov 15, 2024
@pgellert pgellert self-assigned this Nov 15, 2024
@pgellert pgellert marked this pull request as ready for review November 15, 2024 18:18
@michael-redpanda michael-redpanda merged commit 02513ee into redpanda-data:v24.2.x Nov 16, 2024
20 checks passed
@piyushredpanda piyushredpanda modified the milestones: v24.2.x-next, v24.2.12 Nov 26, 2024
Labels
area/redpanda kind/backport PRs targeting a stable branch