Peilin Ye reported an issue ([1]) where, for a __sync_fetch_and_add(...)
whose return value is unused, like
  __sync_fetch_and_add(&foo, 1);
the llvm BPF backend generates a locked insn, e.g.
  lock *(u32 *)(r1 + 0) += r2
If the return value of __sync_fetch_and_add(...) is used, like
  res = __sync_fetch_and_add(&foo, 1);
the llvm BPF backend instead generates an atomic_fetch_add insn, e.g.
  r2 = atomic_fetch_add((u32 *)(r1 + 0), r2)
Generating 'lock *(u32 *)(r1 + 0) += r2' causes a problem in the JIT,
since the proper barrier is not inserted. The discrepancy goes back to
commit [2], which tried to maintain backward compatibility: before
commit [2], __sync_fetch_and_add(...) always generated a lock insn in
the BPF backend. Based on the discussion in [1], it is now time to fix
this discrepancy so the JIT can provide proper barrier support. This
patch ensures that __sync_fetch_and_add(...) always generates
atomic_fetch_add(...) insns.
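
For illustration, here is a minimal C sketch of the two call patterns
above (the file, function and variable names are only illustrative);
compiling it for the BPF target, e.g. with
  clang -O2 --target=bpf -c atomics_example.c
and disassembling with llvm-objdump -d shows the generated insns. With
this patch, both functions are expected to lower to atomic_fetch_add(...):

  /* Sketch only: names are illustrative, not taken from the report. */
  unsigned int foo;

  void add_no_ret(void)
  {
      /* return value unused: older LLVM emitted the locked insn here */
      __sync_fetch_and_add(&foo, 1);
  }

  unsigned int add_with_ret(void)
  {
      /* return value used: LLVM emits atomic_fetch_add(...) */
      return __sync_fetch_and_add(&foo, 1);
  }
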
Now 'lock *(u32 *)(r1 + 0) += r2' can only be generated from inline
asm. I also removed the BPFMIChecking.cpp file, whose original purpose
was to detect and report an error if an XADD{W,D,W32} insn may return
a value that is used subsequently. Since the XADD{W,D,W32} insns can
now only come from inline asm, such error detection is no longer
needed.
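
As a rough sketch of what "inline asm only" means here (the operand
constraint syntax below is my assumption, not something introduced by
this patch), the locked insn can still be requested explicitly from C:

  /* Sketch only: emit the locked add via BPF inline asm; the "r"
   * constraints and operand syntax are assumptions and may need
   * adjusting for a given clang version.
   */
  void locked_add(unsigned int *p, unsigned int v)
  {
      asm volatile("lock *(u32 *)(%0 + 0) += %1"
                   : /* no outputs */
                   : "r"(p), "r"(v)
                   : "memory");
  }
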
[1] https://lore.kernel.org/bpf/ZqqiQQWRnz7H93Hc@google.com/T/#mb68d67bc8f39e35a0c3db52468b9de59b79f021f
[2] 286daaf
Co-authored-by: Yonghong Song <yonghong.song@linux.dev>