Conversation

@kkdwvd kkdwvd commented Nov 24, 2025

No description provided.

@kkdwvd kkdwvd requested a review from Copilot November 24, 2025 23:02
Copilot finished reviewing on behalf of kkdwvd November 24, 2025 23:07

Copilot AI left a comment

Pull request overview

This pull request adds trylock functionality for resilient spinlocks (rqspinlock) to handle NMI (non-maskable interrupt) contexts where blocking is not allowed. The implementation includes a new res_spin_trylock function and associated macros, with usage in the BPF ringbuffer code to safely handle lock acquisition in NMI contexts.

Key changes:

  • Added res_spin_trylock() function for non-blocking lock acquisition
  • Added raw_res_spin_trylock() and raw_res_spin_trylock_irqsave() macros
  • Modified __bpf_ringbuf_reserve() to use trylock in NMI context instead of the regular lock (a rough sketch of the new API follows below)
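
Based on the summary above, the new API would look roughly like the sketch below. This is an illustration only: the error-code convention (0 on success, -EBUSY on contention), the use of grab_held_lock_entry() for deadlock-detection bookkeeping, and the exact macro layering are assumptions modeled on the existing rqspinlock code rather than the literal patch contents.

```c
/* Sketch of possible additions to include/asm-generic/rqspinlock.h. */

/* Non-blocking acquisition of a resilient spinlock. */
static __always_inline __must_check int res_spin_trylock(rqspinlock_t *lock)
{
	int val = atomic_read(&lock->val);

	/*
	 * Fail fast if the lock is held or the cmpxchg race is lost;
	 * 1 stands in for the locked-value constant here.
	 */
	if (val || !atomic_try_cmpxchg_acquire(&lock->val, &val, 1))
		return -EBUSY;

	/* Record the acquisition for AA/ABBA deadlock detection. */
	grab_held_lock_entry(lock);
	return 0;
}

/* Wrappers mirroring the existing raw_res_spin_lock*() layering. */
#define raw_res_spin_trylock(lock)			\
	({						\
		int __ret;				\
		preempt_disable();			\
		__ret = res_spin_trylock(lock);		\
		if (__ret)				\
			preempt_enable();		\
		__ret;					\
	})

#define raw_res_spin_trylock_irqsave(lock, flags)	\
	({						\
		int __ret;				\
		local_irq_save(flags);			\
		__ret = raw_res_spin_trylock(lock);	\
		if (__ret)				\
			local_irq_restore(flags);	\
		__ret;					\
	})
```

Callers check the return value (hence __must_check) and back off immediately on failure instead of spinning.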

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File descriptions:
  • include/asm-generic/rqspinlock.h: implements the new trylock function and macros for resilient spinlocks
  • kernel/bpf/ringbuf.c: uses trylock for NMI contexts where blocking would cause issues

@kkdwvd kkdwvd force-pushed the rqspinlock/trylock branch 2 times, most recently from 65be478 to 5730b89 on November 24, 2025 23:21

A trylock variant for rqspinlock was missing owing to the lack of users
in the tree thus far; add one now, as it will be needed in subsequent
patches. Mark it as __must_check and __always_inline.

This essentially copies queued_spin_trylock, but doesn't depend on it,
as rqspinlock compiles down to a test-and-set (TAS) lock when
CONFIG_QUEUED_SPINLOCKS=n.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
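
For reference, queued_spin_trylock() in include/asm-generic/qspinlock.h has roughly the shape below (reproduced from memory, so treat the exact form as approximate); the rqspinlock variant described in this commit additionally needs to return an error code and record the held-lock entry used for deadlock detection.

```c
/* Approximate shape of queued_spin_trylock(), shown for comparison only. */
static __always_inline int queued_spin_trylock(struct qspinlock *lock)
{
	int val = atomic_read(&lock->val);

	/* Lock word non-zero: locked or contended, so give up immediately. */
	if (unlikely(val))
		return 0;

	/* Attempt the 0 -> locked transition with acquire ordering. */
	return likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL));
}
```
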
Ritesh reported in [0] occurrences of timeouts in cases where the AA
heuristic should have reliably triggered. There are a few known cases
where an NMI can interrupt lower contexts in situations where AA
detection will not trigger, leading to timeouts. One of these is an NMI
landing between the cmpxchg in the fast path and the creation of the
lock entry. The other is a chain of NMI waiters which do not cause AA
on their CPUs, but keep losing to other waiters which can queue and
skip ahead of them in the contention chain. More details are available
in the link below.

For the short term, restore the trylock fallback for the ring buffer in
the NMI case. This can be lifted once a more reasonable NMI fallback is
agreed upon that does not cause long timeouts for the common case.

  [0]: https://lore.kernel.org/bpf/CAH6OuBTjG+N=+GGwcpOUbeDN563oz4iVcU3rbse68egp9wj9_A@mail.gmail.com

Reported-by: Ritesh Oedayrajsingh Varma <ritesh@superluminal.eu>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
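
The ring buffer change described here amounts to something like the sketch below. The helper name ringbuf_lock_reserve() is invented purely for illustration and the rest of the reserve logic is omitted; only the in_nmi() branching follows what the commit message describes, and it assumes both lock calls return 0 on success and a negative error on failure, as in the rest of the rqspinlock API.

```c
/* Sketch only; not the literal kernel/bpf/ringbuf.c hunk. */
static int ringbuf_lock_reserve(struct bpf_ringbuf *rb, unsigned long *flags)
{
	if (in_nmi()) {
		/*
		 * An NMI may have interrupted a CPU that already holds
		 * rb->spinlock in a window the AA heuristic cannot see,
		 * so never block here: give up immediately on contention.
		 */
		return raw_res_spin_trylock_irqsave(&rb->spinlock, *flags);
	}
	return raw_res_spin_lock_irqsave(&rb->spinlock, *flags);
}
```

__bpf_ringbuf_reserve() would then return NULL whenever this fails, so an NMI that loses the race simply drops the reservation instead of spinning until the timeout fires.
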
@kkdwvd kkdwvd force-pushed the rqspinlock/trylock branch from 5730b89 to d37ab81 on November 24, 2025 23:22
@kkdwvd kkdwvd requested a review from Copilot November 24, 2025 23:24
Copilot finished reviewing on behalf of kkdwvd November 24, 2025 23:28

Copilot AI left a comment

Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.


Comment on lines +221 to +222
* res_spin_trylock - try to acquire a queued spinlock
* @lock: Pointer to queued spinlock structure

Copilot AI Nov 24, 2025

The comment says "try to acquire a queued spinlock" but this function is defined outside the #ifdef CONFIG_QUEUED_SPINLOCKS block and is used for both queued and non-queued spinlock implementations. Consider updating to "try to acquire a resilient spinlock" for accuracy.

Suggested change:
- * res_spin_trylock - try to acquire a queued spinlock
- * @lock: Pointer to queued spinlock structure
+ * res_spin_trylock - try to acquire a resilient spinlock
+ * @lock: Pointer to resilient spinlock structure

@kernel-patches-daemon-bpf kernel-patches-daemon-bpf bot force-pushed the bpf-next_base branch 9 times, most recently from 2751ec7 to 886a6a6 on November 29, 2025 17:53
@kernel-patches-daemon-bpf

Automatically cleaning up stale PR; feel free to reopen if needed
