rqspinlock/trylock #10358
Conversation
Pull request overview
This pull request adds trylock functionality for resilient spinlocks (rqspinlock) to handle NMI (non-maskable interrupt) contexts where blocking is not allowed. The implementation includes a new res_spin_trylock function and associated macros, with usage in the BPF ringbuffer code to safely handle lock acquisition in NMI contexts.
Key changes:
- Added `res_spin_trylock()` for non-blocking lock acquisition
- Added `raw_res_spin_trylock()` and `raw_res_spin_trylock_irqsave()` macros
- Modified `__bpf_ringbuf_reserve()` to use trylock in NMI context instead of the regular lock
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| include/asm-generic/rqspinlock.h | Implements new trylock function and macros for resilient spinlocks |
| kernel/bpf/ringbuf.c | Uses trylock for NMI contexts where blocking would cause issues |
Force-pushed from 65be478 to 5730b89
A trylock variant for rqspinlock was missing owing to the lack of users in the tree thus far; add one now, as it will be needed in subsequent patches. Mark it as __must_check and __always_inline. This essentially copies queued_spin_trylock, but doesn't depend on it, as rqspinlock compiles down to a TAS lock when CONFIG_QUEUED_SPINLOCKS=n.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Ritesh reported in [0] occurrences of timeouts in cases where the AA heuristic should have reliably triggered. There are a few known cases where an NMI can interrupt lower contexts in situations where AA detection will not trigger, leading to timeouts. One of these is an NMI landing between the cmpxchg in the fast path and the creation of the lock entry. The other is a chain of NMI waiters which do not cause AA on their CPUs, but keep losing to other waiters which can queue and skip ahead of them in the contention chain. More details are available in the link below.

For the short term, restore the trylock fallback in case of NMIs for the ring buffer. This can be lifted once a more reasonable fallback in case of NMIs is agreed upon that does not cause long timeouts for the common case.

[0]: https://lore.kernel.org/bpf/CAH6OuBTjG+N=+GGwcpOUbeDN563oz4iVcU3rbse68egp9wj9_A@mail.gmail.com

Reported-by: Ritesh Oedayrajsingh Varma <ritesh@superluminal.eu>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Force-pushed from 5730b89 to d37ab81
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
* res_spin_trylock - try to acquire a queued spinlock
* @lock: Pointer to queued spinlock structure
Copilot (AI) · Nov 24, 2025
The comment says "try to acquire a queued spinlock" but this function is defined outside the #ifdef CONFIG_QUEUED_SPINLOCKS block and is used for both queued and non-queued spinlock implementations. Consider updating to "try to acquire a resilient spinlock" for accuracy.
Suggested change:
- * res_spin_trylock - try to acquire a queued spinlock
- * @lock: Pointer to queued spinlock structure
+ * res_spin_trylock - try to acquire a resilient spinlock
+ * @lock: Pointer to resilient spinlock structure
Force-pushed from 2751ec7 to 886a6a6
Automatically cleaning up stale PR; feel free to reopen if needed
No description provided.