Add tnum_scast helper #10365
Conversation
This patch introduces a new helper function - tnum_scast(), which sign-extends a tnum from a smaller integer size to the full 64-bit bpf register range. This is achieved by utilizing the native sign-extension behavior of signed 64-bit integers. By casting the value and mask to s64, shifting left to align the target sign bit with the 64-bit MSB, and then performing an arithmetic right shift, the sign bit is automatically propagated to the upper bits.

For the mask, this works because if the sign bit is unknown (1), the arithmetic shift propagates 1s (making the upper bits unknown); if it is known (0), it propagates 0s (making the upper bits known).

a) When the sign bit is known:

Assume a tnum with value = 0xFF, mask = 0x00, size = 1, which corresponds to an 8-bit subregister of value 0xFF (-1 in 8 bits).

  s = 64 - 8 = 56
  value = ((s64)0xFF << 56) >> 56; // 0xFF...FF (-1)
  mask  = ((s64)0x00 << 56) >> 56; // 0x00...00

Because the sign bit is known to be 1, we sign-extend with 1s. The resulting tnum is (0xFFFFFFFFFFFFFFFF, 0x0000000000000000).

b) When the sign bit is unknown:

Assume a tnum with value = 0x7F, mask = 0x80, size = 1.

  s = 56
  value = ((s64)0x7F << 56) >> 56; // 0x00...7F
  mask  = ((s64)0x80 << 56) >> 56; // 0xFF...80

The lower 8 bits can be 0x7F or 0xFF. The mask sign bit was 1 (unknown), so the arithmetic shift propagated 1s, making all higher 56 bits unknown. In 64-bit form, this tnum correctly represents the range from 0x000000000000007F (+127) to 0xFFFFFFFFFFFFFFFF (-1).

Signed-off-by: Dimitar Kanaliev <dimitar.kanaliev@siteground.com>
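For reference, a minimal sketch of a helper with this behavior might look as follows. This mirrors the description above rather than the literal patch code; TNUM() is the struct tnum constructor macro from kernel/bpf/tnum.c, and size is the source width in bytes, as in the existing tnum_cast().

/* Sketch only: sign-extend a tnum of `size` bytes to 64 bits. */
struct tnum tnum_scast(struct tnum a, u8 size)
{
        /* Shift amount that puts the source sign bit on bit 63. */
        u8 shift = 64 - 8 * size;

        /*
         * Left-shift as unsigned, then arithmetic right-shift as signed:
         * a known sign bit (mask bit is 0) extends value with copies of
         * that bit and mask with 0s; an unknown sign bit (mask bit is 1)
         * extends mask with 1s, marking all upper bits unknown.
         */
        return TNUM((u64)((s64)(a.value << shift) >> shift),
                    (u64)((s64)(a.mask << shift) >> shift));
}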
This patch refactors the verifier's sign-extension logic for narrow register values to use the new tnum_scast helper. Previously, coerce_reg_to_size_sx and coerce_subreg_to_size_sx employed manual logic to determine bounds, sometimes falling back to loose ranges when sign bits were uncertain.

We simplify said logic by delegating the bounds calculation to tnum_scast plus the existing bounds synchronization logic:

1. The register's tnum is updated via tnum_scast().
2. The signed bounds (smin/smax) are reset to the maximum theoretical range for the target size.
3. The unsigned bounds are reset to the full register width.
4. __update_reg_bounds() is called.

By invoking __update_reg_bounds(), the verifier automatically calculates the intersection between the theoretical signed range and the bitwise info in reg->var_off. This ensures bounds are as tight as possible without requiring custom logic in the coercion functions.

This commit also removes set_sext64_default_val() and set_sext32_default_val(), as they are no longer used.

Signed-off-by: Dimitar Kanaliev <dimitar.kanaliev@siteground.com>
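To make the four steps concrete, here is a rough sketch of how the 64-bit coercion path could look under this scheme. It is illustrative only: it assumes the tnum_scast() shape sketched above, and it omits the 32-bit subregister variant (coerce_subreg_to_size_sx) and the surrounding verifier context.

static void coerce_reg_to_size_sx(struct bpf_reg_state *reg, int size)
{
        s64 smax = (1LL << (8 * size - 1)) - 1; /* e.g. 127 for size == 1 */
        s64 smin = -smax - 1;                   /* e.g. -128 for size == 1 */

        /* Sign-extend the known-bits information. */
        reg->var_off = tnum_scast(reg->var_off, size);

        /* Widest theoretical signed range for the target size... */
        reg->smin_value = smin;
        reg->smax_value = smax;
        /* ...and the full unsigned range of the 64-bit register. */
        reg->umin_value = 0;
        reg->umax_value = U64_MAX;

        /* Intersect the ranges above with the bit-level info in var_off. */
        __update_reg_bounds(reg);
}

The design point is that no case analysis on the sign bit is needed here: tnum_scast() encodes it in var_off, and __update_reg_bounds() turns it into tight smin/smax values.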
This patch adds new test cases to validate the improved register bounds
tracking logic.
We perform the sequence:
call bpf_get_prandom_u32;
r1 &= 0x100;
r1 = (s8)r1;
After the bitwise AND, `r1` is either 0 or 256 (0x100).
If 0: The lower 8 bits are 0.
If 256: The bit at index 8 is set, but the lower 8 bits are 0.
Since the cast to s8 only considers bits 0-7, the set bit at index 8 is
truncated. In both cases, the sign bit (bit 7) is 0, so the
result is exactly 0.
With the coercion logic before this series:
1: (bf) r1 = r0
; R0=scalar(id=1) R1=scalar(id=1)
2: (57) r1 &= 256
; R1=scalar(...,var_off=(0x0; 0x100))
3: (bf) r1 = (s8)r1
; R1=scalar(smin=smin32=-128,smax=smax32=127)
With our changes:
1: (bf) r1 = r0
; R0=scalar(id=1) R1=scalar(id=1)
2: (57) r1 &= 256
; R1=scalar(...,var_off=(0x0; 0x100))
3: (bf) r1 = (s8)r1
; R1=0
Signed-off-by: Dimitar Kanaliev <dimitar.kanaliev@siteground.com>
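A selftest exercising this sequence could be shaped roughly as follows. This is illustrative only: the section, description string, function name, and expected return value are placeholders in the style of the tools/testing/selftests/bpf/progs verifier tests, not the literal test added by this patch.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"

SEC("socket")
__description("(s8) cast of a value with var_off=(0x0; 0x100) is exactly 0")
__success __retval(0)
__naked void scast_s8_known_zero(void)
{
        asm volatile (
        "call %[bpf_get_prandom_u32];"
        "r1 &= 0x100;"          /* r1 is 0 or 256; bits 0-7 are known 0 */
        "r1 = (s8)r1;"          /* sign bit (bit 7) is 0, so r1 must be 0 */
        "r0 = r1;"
        "exit;"
        :
        : __imm(bpf_get_prandom_u32)
        : __clobber_all);
}

char _license[] SEC("license") = "GPL";

Note that the (s8) move is a cpu v4 instruction, so a real test would likely carry the usual architecture/feature guards used elsewhere in the suite.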
Pull request for series with
subject: Add tnum_scast helper
version: 1
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=1027352