bpf: Refuse to mount bpffs on the same mount point multiple times #69
Closed
Conversation
We observed unexpected behavior where bpffs is mounted on the same mount point many times in some of our production environments. For example:

  $ mount -t bpf
  bpffs /sys/fs/bpf bpf rw,relatime 0 0
  bpffs /sys/fs/bpf bpf rw,relatime 0 0
  bpffs /sys/fs/bpf bpf rw,relatime 0 0
  bpffs /sys/fs/bpf bpf rw,relatime 0 0
  ...

That was caused by a buggy user script which didn't check the mount point correctly before mounting bpffs. But it also drives us to think more about whether it is okay to allow mounting bpffs on the same mount point multiple times. After investigation we reached the conclusion that it is bad to allow that behavior, because it can cause unexpected issues; for example, it can break bpftool, which depends on the mount point to get the pinned files.

Below is the test case with bpftool. First, let's mount bpffs on /var/run/ltcp/bpf multiple times:

  $ mount -t bpf
  bpffs on /run/ltcp/bpf type bpf (rw,relatime)
  bpffs on /run/ltcp/bpf type bpf (rw,relatime)
  bpffs on /run/ltcp/bpf type bpf (rw,relatime)

After pinning some bpf progs on this mount point, let's check the pinned files with bpftool:

  $ bpftool prog list -f
  87: sock_ops  name bpf_sockmap  tag a04f5eef06a7f555  gpl
          loaded_at 2022-02-23T16:27:38+0800  uid 0
          xlated 16B  jited 15B  memlock 4096B
          pinned /run/ltcp/bpf/bpf_sockmap
          pinned /run/ltcp/bpf/bpf_sockmap
          pinned /run/ltcp/bpf/bpf_sockmap
          btf_id 243
  89: sk_msg  name bpf_redir_proxy  tag 57cd311f2e27366b  gpl
          loaded_at 2022-02-23T16:27:38+0800  uid 0
          xlated 16B  jited 18B  memlock 4096B
          pinned /run/ltcp/bpf/bpf_redir_proxy
          pinned /run/ltcp/bpf/bpf_redir_proxy
          pinned /run/ltcp/bpf/bpf_redir_proxy
          btf_id 244

The same pinned file is shown multiple times. Finally, after mounting bpffs on the same mount point once more, we can't get the pinned files via bpftool at all:

  $ bpftool prog list -f
  87: sock_ops  name bpf_sockmap  tag a04f5eef06a7f555  gpl
          loaded_at 2022-02-23T16:27:38+0800  uid 0
          xlated 16B  jited 15B  memlock 4096B
          btf_id 243
  89: sk_msg  name bpf_redir_proxy  tag 57cd311f2e27366b  gpl
          loaded_at 2022-02-23T16:27:38+0800  uid 0
          xlated 16B  jited 18B  memlock 4096B
          btf_id 244

We had better refuse to mount bpffs on the same mount point. Before making this change, I also checked why it is allowed in the first place. The related commits are commit e27f4a9 ("bpf: Use mount_nodev not mount_ns to mount the bpf filesystem") and commit b219775 ("bpf: add support for persistent maps/progs"). Unfortunately they don't explain why it is allowed. But there should be no use case which requires mounting bpffs on the same mount point multiple times, so let's just refuse it.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
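For readers wondering why the VFS allows these mounts to stack at all: do_add_mount() only refuses stacking with -EBUSY when the new mount shares a superblock with the mount already covering that mount point, and bpffs allocates a fresh superblock on every mount (it uses mount_nodev, per the commit referenced above), so that check never fires. Until the kernel rejects this, a script can guard itself; below is a minimal user-space sketch of such a pre-mount check (an illustrative program, not part of the patch):

  #include <mntent.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mount.h>

  /* Return 1 if `target` already carries a filesystem of type `fstype`. */
  static int already_mounted(const char *target, const char *fstype)
  {
      FILE *f = setmntent("/proc/self/mounts", "r");
      struct mntent *m;
      int found = 0;

      if (!f)
          return 0;
      while ((m = getmntent(f)) != NULL) {
          if (!strcmp(m->mnt_dir, target) && !strcmp(m->mnt_type, fstype)) {
              found = 1;
              break;
          }
      }
      endmntent(f);
      return found;
  }

  int main(void)
  {
      /* Mount bpffs only if /sys/fs/bpf is not already a bpf mount. */
      if (!already_mounted("/sys/fs/bpf", "bpf") &&
          mount("bpffs", "/sys/fs/bpf", "bpf", 0, NULL))
          perror("mount");
      return 0;
  }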
Master branch: b4f7278
At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=617159 is irrelevant now. Closing PR.
kernel-patches-bot pushed a commit that referenced this pull request on Jan 28, 2023:
Commit fb541ca ("md: remove lock_bdev / unlock_bdev") removes wrappers for blkdev_get/blkdev_put. However, the uninitialized local static variable of pointer type 'claim_rdev' in md_import_device() is NULL, which leads to the following warning call trace:

  WARNING: CPU: 22 PID: 1037 at block/bdev.c:577 bd_prepare_to_claim+0x131/0x150
  CPU: 22 PID: 1037 Comm: mdadm Not tainted 6.2.0-rc3+ #69
  ..
  RIP: 0010:bd_prepare_to_claim+0x131/0x150
  ..
  Call Trace:
   <TASK>
   ? _raw_spin_unlock+0x15/0x30
   ? iput+0x6a/0x220
   blkdev_get_by_dev.part.0+0x4b/0x300
   md_import_device+0x126/0x1d0
   new_dev_store+0x184/0x240
   md_attr_store+0x80/0xf0
   kernfs_fop_write_iter+0x128/0x1c0
   vfs_write+0x2be/0x3c0
   ksys_write+0x5f/0xe0
   do_syscall_64+0x38/0x90
   entry_SYSCALL_64_after_hwframe+0x72/0xdc

It turns out the md device cannot be used:

  md: could not open device unknown-block(259,0).
  md: md127 stopped.

Fix the issue by declaring the local static variable of struct type and passing the pointer of the variable to blkdev_get_by_dev().

Fixes: fb541ca ("md: remove lock_bdev / unlock_bdev")
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Adrian Huang <ahuang12@lenovo.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Song Liu <song@kernel.org>
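The shape of the fix, as described above, is roughly the following before/after in md_import_device() (a hedged sketch; the surrounding code is elided and the names follow the commit message rather than the exact patch):

  /* Before: a static pointer is zero-initialized, so blkdev_get_by_dev()
   * received NULL as the exclusive-open holder, tripping the WARN in
   * bd_prepare_to_claim(). */
  static struct md_rdev *claim_rdev;              /* == NULL */
  bdev = blkdev_get_by_dev(dev, mode, claim_rdev);

  /* After: a static object provides a stable, non-NULL holder cookie. */
  static struct md_rdev claim_rdev;
  bdev = blkdev_get_by_dev(dev, mode, &claim_rdev);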
kernel-patches-daemon-bpf-rc bot pushed a commit that referenced this pull request on Mar 13, 2024:
Commit 3fcb9d1 ("io_uring/sqpoll: statistics of the true utilization of sq threads"), currently in Jens' for-next branch, peeks at io_sq_data->thread to report utilization statistics. But if io_uring_show_fdinfo races with sqpoll terminating, even though we hold the ctx lock, sqd->thread might be NULL and we hit the Oops below.

Note that we could technically just protect the getrusage() call and the sq total/work time calculations. But showing some sq information (pid/cpu) and not other information (utilization) is more confusing than not reporting anything, IMO. So let's hide it all if we happen to race with a dying sqpoll.

This can be triggered consistently in my vm setup running sqpoll-cancel-hang.t in a loop.

  BUG: kernel NULL pointer dereference, address: 00000000000007b0
  PGD 0 P4D 0
  Oops: 0000 [#1] PREEMPT SMP NOPTI
  CPU: 0 PID: 16587 Comm: systemd-coredum Not tainted 6.8.0-rc3-g3fcb9d17206e-dirty #69
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022
  RIP: 0010:getrusage+0x21/0x3e0
  Code: 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 55 48 89 d1 48 89 e5 41 57 41 56 41 55 41 54 49 89 fe 41 52 53 48 89 d3 48 83 ec 30 <4c> 8b a7 b0 07 00 00 48 8d 7a 08 65 48 8b 04 25 28 00 00 00 48 89
  RSP: 0018:ffffa166c671bb80 EFLAGS: 00010282
  RAX: 00000000000040ca RBX: ffffa166c671bc60 RCX: ffffa166c671bc60
  RDX: ffffa166c671bc60 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: ffffa166c671bbe0 R08: ffff9448cc3930c0 R09: 0000000000000000
  R10: ffffa166c671bd50 R11: ffffffff9ee89260 R12: 0000000000000000
  R13: ffff9448ce099480 R14: 0000000000000000 R15: ffff9448cff5b000
  FS:  00007f786e225900(0000) GS:ffff94493bc00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00000000000007b0 CR3: 000000010d39c000 CR4: 0000000000750ef0
  PKRU: 55555554
  Call Trace:
   <TASK>
   ? __die_body+0x1a/0x60
   ? page_fault_oops+0x154/0x440
   ? srso_alias_return_thunk+0x5/0xfbef5
   ? do_user_addr_fault+0x174/0x7c0
   ? srso_alias_return_thunk+0x5/0xfbef5
   ? exc_page_fault+0x63/0x140
   ? asm_exc_page_fault+0x22/0x30
   ? getrusage+0x21/0x3e0
   ? seq_printf+0x4e/0x70
   io_uring_show_fdinfo+0x9db/0xa10
   ? srso_alias_return_thunk+0x5/0xfbef5
   ? vsnprintf+0x101/0x4d0
   ? srso_alias_return_thunk+0x5/0xfbef5
   ? seq_vprintf+0x34/0x50
   ? srso_alias_return_thunk+0x5/0xfbef5
   ? seq_printf+0x4e/0x70
   ? seq_show+0x16b/0x1d0
   ? __pfx_io_uring_show_fdinfo+0x10/0x10
   seq_show+0x16b/0x1d0
   seq_read_iter+0xd7/0x440
   seq_read+0x102/0x140
   vfs_read+0xae/0x320
   ? srso_alias_return_thunk+0x5/0xfbef5
   ? __do_sys_newfstat+0x35/0x60
   ksys_read+0xa5/0xe0
   do_syscall_64+0x50/0x110
   entry_SYSCALL_64_after_hwframe+0x6e/0x76
  RIP: 0033:0x7f786ec1db4d
  Code: e8 46 e3 01 00 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 80 3d d9 ce 0e 00 00 74 17 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 5b c3 66 2e 0f 1f 84 00 00 00 00 00 48 83 ec
  RSP: 002b:00007ffcb361a4b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
  RAX: ffffffffffffffda RBX: 000055a4c8fe42f0 RCX: 00007f786ec1db4d
  RDX: 0000000000000400 RSI: 000055a4c8fe48a0 RDI: 0000000000000006
  RBP: 00007f786ecfb0b0 R08: 00007f786ecfb2a8 R09: 0000000000000001
  R10: 0000000000000000 R11: 0000000000000246 R12: 00007f786ecfaf60
  R13: 000055a4c8fe42f0 R14: 0000000000000000 R15: 00007ffcb361a628
   </TASK>
  Modules linked in:
  CR2: 00000000000007b0
  ---[ end trace 0000000000000000 ]---
  RIP: 0010:getrusage+0x21/0x3e0
  Code: 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 55 48 89 d1 48 89 e5 41 57 41 56 41 55 41 54 49 89 fe 41 52 53 48 89 d3 48 83 ec 30 <4c> 8b a7 b0 07 00 00 48 8d 7a 08 65 48 8b 04 25 28 00 00 00 48 89
  RSP: 0018:ffffa166c671bb80 EFLAGS: 00010282
  RAX: 00000000000040ca RBX: ffffa166c671bc60 RCX: ffffa166c671bc60
  RDX: ffffa166c671bc60 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: ffffa166c671bbe0 R08: ffff9448cc3930c0 R09: 0000000000000000
  R10: ffffa166c671bd50 R11: ffffffff9ee89260 R12: 0000000000000000
  R13: ffff9448ce099480 R14: 0000000000000000 R15: ffff9448cff5b000
  FS:  00007f786e225900(0000) GS:ffff94493bc00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00000000000007b0 CR3: 000000010d39c000 CR4: 0000000000750ef0
  PKRU: 55555554
  Kernel panic - not syncing: Fatal exception
  Kernel Offset: 0x1ce00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)

Fixes: 3fcb9d1 ("io_uring/sqpoll: statistics of the true utilization of sq threads")
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Link: https://lore.kernel.org/r/20240309003256.358-1-krisman@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
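The fix described above amounts to bailing out of the sqpoll section of the fdinfo output when the thread is already gone. A rough sketch of that guard (field and variable names are assumed from the report, not copied from the patch):

  /* In io_uring_show_fdinfo(), with the ctx lock held: */
  if (ctx->flags & IORING_SETUP_SQPOLL) {
      struct io_sq_data *sqd = ctx->sq_data;
      struct task_struct *tsk = sqd ? sqd->thread : NULL;

      if (tsk) {
          struct rusage sq_usage;

          getrusage(tsk, RUSAGE_SELF, &sq_usage);
          seq_printf(m, "SqThread:\t%d\n", task_pid_nr(tsk));
          /* ...cpu and sq total/work time lines elided... */
      }
      /* else: racing with a dying sqpoll; print nothing rather than
       * partial, confusing statistics. */
  }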
kernel-patches-daemon-bpf-rc bot pushed a commit that referenced this pull request on Jun 5, 2024:
Alexei reported that the send_signal test may fail with nested CONFIG_PARAVIRT configs. In this particular case, the base vm is AMD with 166 cpus, and I run selftests with regular qemu on top of that, and indeed the send_signal test failed. I also tried with an intel box with 80 cpus and there is no issue. The main qemu command line includes:

  -enable-kvm -smp 16 -cpu host

The failure log looks like:

  $ ./test_progs -t send_signal
  [ 48.501588] watchdog: BUG: soft lockup - CPU#9 stuck for 26s! [test_progs:2225]
  [ 48.503622] Modules linked in: bpf_testmod(O)
  [ 48.503622] CPU: 9 PID: 2225 Comm: test_progs Tainted: G O 6.9.0-08561-g2c1713a8f1c9-dirty #69
  [ 48.507629] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
  [ 48.511635] RIP: 0010:handle_softirqs+0x71/0x290
  [ 48.511635] Code: 0f b7 25 f2 f4 fa 7e 65 81 05 cf f4 fa 7e 00 01 00 00 c7 44 24 10 0a 00 00 00 31 c0 65 66 89 05 d5 f4 fa 7e fb bb ff ff ff ff <49> c7 c2 cb
  [ 48.518527] RSP: 0018:ffffc90000310fa0 EFLAGS: 00000246
  [ 48.519579] RAX: 0000000000000000 RBX: 00000000ffffffff RCX: 00000000000006e0
  [ 48.522526] RDX: 0000000000000006 RSI: ffff88810791ae80 RDI: 0000000000000000
  [ 48.523587] RBP: ffffc90000fabc88 R08: 00000005a0af4f7f R09: 0000000000000000
  [ 48.525525] R10: 0000000561d2f29c R11: 0000000000006534 R12: 0000000000000280
  [ 48.528525] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
  [ 48.528525] FS: 00007f2f2885cd00(0000) GS:ffff888237c40000(0000) knlGS:0000000000000000
  [ 48.531600] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  [ 48.535520] CR2: 00007f2f287059f0 CR3: 0000000106a28002 CR4: 00000000003706f0
  [ 48.537538] Call Trace:
  [ 48.537538] <IRQ>
  [ 48.537538] ? watchdog_timer_fn+0x1cd/0x250
  [ 48.539590] ? lockup_detector_update_enable+0x50/0x50
  [ 48.539590] ? __hrtimer_run_queues+0xff/0x280
  [ 48.542520] ? hrtimer_interrupt+0x103/0x230
  [ 48.544524] ? __sysvec_apic_timer_interrupt+0x4f/0x140
  [ 48.545522] ? sysvec_apic_timer_interrupt+0x3a/0x90
  [ 48.547612] ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
  [ 48.547612] ? handle_softirqs+0x71/0x290
  [ 48.547612] irq_exit_rcu+0x63/0x80
  [ 48.551585] sysvec_apic_timer_interrupt+0x75/0x90
  [ 48.552521] </IRQ>
  [ 48.553529] <TASK>
  [ 48.553529] asm_sysvec_apic_timer_interrupt+0x1a/0x20
  [ 48.555609] RIP: 0010:finish_task_switch.isra.0+0x90/0x260
  [ 48.556526] Code: 00 0f 1f 44 00 00 41 c7 44 24 34 00 00 00 00 49 8b 9f 58 0a 00 00 48 85 db 0f 85 89 01 00 00 4c 89 ff e8 53 d9 bd 00 fb 66 90 <4d> 85 ed 74
  [ 48.562524] RSP: 0018:ffffc90000fabd38 EFLAGS: 00000282
  [ 48.563589] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff83385620
  [ 48.563589] RDX: ffff888237c73ae4 RSI: 0000000000000000 RDI: ffff888237c6fd00
  [ 48.568521] RBP: ffffc90000fabd68 R08: 0000000000000000 R09: 0000000000000000
  [ 48.569528] R10: 0000000000000001 R11: 0000000000000000 R12: ffff8881009d0000
  [ 48.573525] R13: ffff8881024e5400 R14: ffff88810791ae80 R15: ffff888237c6fd00
  [ 48.575614] ? finish_task_switch.isra.0+0x8d/0x260
  [ 48.576523] __schedule+0x364/0xac0
  [ 48.577535] schedule+0x2e/0x110
  [ 48.578555] pipe_read+0x301/0x400
  [ 48.579589] ? destroy_sched_domains_rcu+0x30/0x30
  [ 48.579589] vfs_read+0x2b3/0x2f0
  [ 48.579589] ksys_read+0x8b/0xc0
  [ 48.583590] do_syscall_64+0x3d/0xc0
  [ 48.583590] entry_SYSCALL_64_after_hwframe+0x4b/0x53
  [ 48.586525] RIP: 0033:0x7f2f28703fa1
  [ 48.587592] Code: ff ff eb c3 67 e8 2f c9 01 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 f3 0f 1e fa 80 3d c5 23 14 00 00 74 13 31 c0 0f 05 <48> 3d 00 f0
  [ 48.593534] RSP: 002b:00007ffd90f8cf88 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
  [ 48.595589] RAX: ffffffffffffffda RBX: 00007ffd90f8d5e8 RCX: 00007f2f28703fa1
  [ 48.595589] RDX: 0000000000000001 RSI: 00007ffd90f8cfb0 RDI: 0000000000000006
  [ 48.599592] RBP: 00007ffd90f8d2f0 R08: 0000000000000064 R09: 0000000000000000
  [ 48.602527] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
  [ 48.603589] R13: 00007ffd90f8d608 R14: 00007f2f288d8000 R15: 0000000000f6bdb0
  [ 48.605527] </TASK>

In the test, two processes communicate through a pipe. Further debugging with strace found that the above splat is triggered as the read() syscall could not receive the data even though the corresponding write() syscall in the other process successfully wrote data into the pipe.

The failed subtest is "send_signal_perf". The corresponding perf event has sample_period 1 and config PERF_COUNT_SW_CPU_CLOCK. sample_period 1 means every overflow event will trigger a call to the BPF program, so I suspect this may overwhelm the system. I increased the sample_period to 100000 and the test passed, while with sample_period 10000 the test still failed. In other parts of the selftests, e.g., [1], sample_freq is used instead, so I decided to use sample_freq = 1000 since the test passes with that as well.

[1] https://lore.kernel.org/bpf/20240604070700.3032142-1-song@kernel.org/

Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
kernel-patches-daemon-bpf-rc bot pushed a commit that referenced this pull request on Jun 6, 2024:
Alexei reported that the send_signal test may fail with nested CONFIG_PARAVIRT configs. In this particular case, the base VM is AMD with 166 cpus, and I run selftests with regular qemu on top of that, and indeed the send_signal test failed. I also tried with an Intel box with 80 cpus and there is no issue. The main qemu command line includes:

  -enable-kvm -smp 16 -cpu host

The failure log looks like:

  $ ./test_progs -t send_signal
  [ 48.501588] watchdog: BUG: soft lockup - CPU#9 stuck for 26s! [test_progs:2225]
  [ 48.503622] Modules linked in: bpf_testmod(O)
  [ 48.503622] CPU: 9 PID: 2225 Comm: test_progs Tainted: G O 6.9.0-08561-g2c1713a8f1c9-dirty #69
  [ 48.507629] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
  [ 48.511635] RIP: 0010:handle_softirqs+0x71/0x290
  [ 48.511635] Code: [...] 10 0a 00 00 00 31 c0 65 66 89 05 d5 f4 fa 7e fb bb ff ff ff ff <49> c7 c2 cb
  [ 48.518527] RSP: 0018:ffffc90000310fa0 EFLAGS: 00000246
  [ 48.519579] RAX: 0000000000000000 RBX: 00000000ffffffff RCX: 00000000000006e0
  [ 48.522526] RDX: 0000000000000006 RSI: ffff88810791ae80 RDI: 0000000000000000
  [ 48.523587] RBP: ffffc90000fabc88 R08: 00000005a0af4f7f R09: 0000000000000000
  [ 48.525525] R10: 0000000561d2f29c R11: 0000000000006534 R12: 0000000000000280
  [ 48.528525] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
  [ 48.528525] FS: 00007f2f2885cd00(0000) GS:ffff888237c40000(0000) knlGS:0000000000000000
  [ 48.531600] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  [ 48.535520] CR2: 00007f2f287059f0 CR3: 0000000106a28002 CR4: 00000000003706f0
  [ 48.537538] Call Trace:
  [ 48.537538] <IRQ>
  [ 48.537538] ? watchdog_timer_fn+0x1cd/0x250
  [ 48.539590] ? lockup_detector_update_enable+0x50/0x50
  [ 48.539590] ? __hrtimer_run_queues+0xff/0x280
  [ 48.542520] ? hrtimer_interrupt+0x103/0x230
  [ 48.544524] ? __sysvec_apic_timer_interrupt+0x4f/0x140
  [ 48.545522] ? sysvec_apic_timer_interrupt+0x3a/0x90
  [ 48.547612] ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
  [ 48.547612] ? handle_softirqs+0x71/0x290
  [ 48.547612] irq_exit_rcu+0x63/0x80
  [ 48.551585] sysvec_apic_timer_interrupt+0x75/0x90
  [ 48.552521] </IRQ>
  [ 48.553529] <TASK>
  [ 48.553529] asm_sysvec_apic_timer_interrupt+0x1a/0x20
  [ 48.555609] RIP: 0010:finish_task_switch.isra.0+0x90/0x260
  [ 48.556526] Code: [...] 9f 58 0a 00 00 48 85 db 0f 85 89 01 00 00 4c 89 ff e8 53 d9 bd 00 fb 66 90 <4d> 85 ed 74
  [ 48.562524] RSP: 0018:ffffc90000fabd38 EFLAGS: 00000282
  [ 48.563589] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff83385620
  [ 48.563589] RDX: ffff888237c73ae4 RSI: 0000000000000000 RDI: ffff888237c6fd00
  [ 48.568521] RBP: ffffc90000fabd68 R08: 0000000000000000 R09: 0000000000000000
  [ 48.569528] R10: 0000000000000001 R11: 0000000000000000 R12: ffff8881009d0000
  [ 48.573525] R13: ffff8881024e5400 R14: ffff88810791ae80 R15: ffff888237c6fd00
  [ 48.575614] ? finish_task_switch.isra.0+0x8d/0x260
  [ 48.576523] __schedule+0x364/0xac0
  [ 48.577535] schedule+0x2e/0x110
  [ 48.578555] pipe_read+0x301/0x400
  [ 48.579589] ? destroy_sched_domains_rcu+0x30/0x30
  [ 48.579589] vfs_read+0x2b3/0x2f0
  [ 48.579589] ksys_read+0x8b/0xc0
  [ 48.583590] do_syscall_64+0x3d/0xc0
  [ 48.583590] entry_SYSCALL_64_after_hwframe+0x4b/0x53
  [ 48.586525] RIP: 0033:0x7f2f28703fa1
  [ 48.587592] Code: [...] 00 00 00 0f 1f 44 00 00 f3 0f 1e fa 80 3d c5 23 14 00 00 74 13 31 c0 0f 05 <48> 3d 00 f0
  [ 48.593534] RSP: 002b:00007ffd90f8cf88 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
  [ 48.595589] RAX: ffffffffffffffda RBX: 00007ffd90f8d5e8 RCX: 00007f2f28703fa1
  [ 48.595589] RDX: 0000000000000001 RSI: 00007ffd90f8cfb0 RDI: 0000000000000006
  [ 48.599592] RBP: 00007ffd90f8d2f0 R08: 0000000000000064 R09: 0000000000000000
  [ 48.602527] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
  [ 48.603589] R13: 00007ffd90f8d608 R14: 00007f2f288d8000 R15: 0000000000f6bdb0
  [ 48.605527] </TASK>

In the test, two processes are communicating through a pipe. Further debugging with strace found that the above splat is triggered as the read() syscall could not receive the data even though the corresponding write() syscall in another process successfully wrote data into the pipe.

The failed subtest is "send_signal_perf". The corresponding perf event has sample_period 1 and config PERF_COUNT_SW_CPU_CLOCK. sample_period 1 means every overflow event will trigger a call to the BPF program, so I suspect this may overwhelm the system. I increased the sample_period to 100000 and the test passed, while with sample_period 10000 the test still failed. In other parts of the selftests, e.g., [1], sample_freq is used instead, so I decided to use sample_freq = 1000 since the test passes with that as well.

[1] https://lore.kernel.org/bpf/20240604070700.3032142-1-song@kernel.org/

Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240605201203.2603846-1-yonghong.song@linux.dev
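In perf_event_attr terms, the change described above moves from a raw period of 1 to a 1000 Hz sampling frequency. A minimal sketch of the attribute setup (the selftest's surrounding helpers are elided):

  #include <string.h>
  #include <linux/perf_event.h>

  static void setup_sw_clock_attr(struct perf_event_attr *attr)
  {
      memset(attr, 0, sizeof(*attr));
      attr->type = PERF_TYPE_SOFTWARE;
      attr->config = PERF_COUNT_SW_CPU_CLOCK;
      /* Previously: attr->sample_period = 1; every counter overflow
       * invoked the BPF program, which can overwhelm a nested-virt
       * guest. */
      attr->freq = 1;            /* interpret sample_freq, not sample_period */
      attr->sample_freq = 1000;  /* aim for ~1000 samples per second */
  }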
kernel-patches-daemon-bpf-rc bot pushed a commit that referenced this pull request on Nov 5, 2024:
The flexspi on imx8ulp only has 16 LUTs, while the flexspi on imx8mm has 32 LUTs, so correct the compatible string here; otherwise we hit the error below:

  [ 1.119072] ------------[ cut here ]------------
  [ 1.123926] WARNING: CPU: 0 PID: 1 at drivers/spi/spi-nxp-fspi.c:855 nxp_fspi_exec_op+0xb04/0xb64
  [ 1.133239] Modules linked in:
  [ 1.136448] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-rc6-next-20240902-00001-g131bf9439dd9 #69
  [ 1.146821] Hardware name: NXP i.MX8ULP EVK (DT)
  [ 1.151647] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  [ 1.158931] pc : nxp_fspi_exec_op+0xb04/0xb64
  [ 1.163496] lr : nxp_fspi_exec_op+0xa34/0xb64
  [ 1.168060] sp : ffff80008002b2a0
  [ 1.171526] x29: ffff80008002b2d0 x28: 0000000000000000 x27: 0000000000000000
  [ 1.179002] x26: ffff2eb645542580 x25: ffff800080610014 x24: ffff800080610000
  [ 1.186480] x23: ffff2eb645548080 x22: 0000000000000006 x21: ffff2eb6455425e0
  [ 1.193956] x20: 0000000000000000 x19: ffff80008002b5e0 x18: ffffffffffffffff
  [ 1.201432] x17: ffff2eb644467508 x16: 0000000000000138 x15: 0000000000000002
  [ 1.208907] x14: 0000000000000000 x13: ffff2eb6400d8080 x12: 00000000ffffff00
  [ 1.216378] x11: 0000000000000000 x10: ffff2eb6400d8080 x9 : ffff2eb697adca80
  [ 1.223850] x8 : ffff2eb697ad3cc0 x7 : 0000000100000000 x6 : 0000000000000001
  [ 1.231324] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 00000000000007a6
  [ 1.238795] x2 : 0000000000000000 x1 : 00000000000001ce x0 : 00000000ffffff92
  [ 1.246267] Call trace:
  [ 1.248824] nxp_fspi_exec_op+0xb04/0xb64
  [ 1.253031] spi_mem_exec_op+0x3a0/0x430
  [ 1.257139] spi_nor_read_id+0x80/0xcc
  [ 1.261065] spi_nor_scan+0x1ec/0xf10
  [ 1.264901] spi_nor_probe+0x108/0x2fc
  [ 1.268828] spi_mem_probe+0x6c/0xbc
  [ 1.272574] spi_probe+0x84/0xe4
  [ 1.275958] really_probe+0xbc/0x29c
  [ 1.279713] __driver_probe_device+0x78/0x12c
  [ 1.284277] driver_probe_device+0xd8/0x15c
  [ 1.288660] __device_attach_driver+0xb8/0x134
  [ 1.293316] bus_for_each_drv+0x88/0xe8
  [ 1.297337] __device_attach+0xa0/0x190
  [ 1.301353] device_initial_probe+0x14/0x20
  [ 1.305734] bus_probe_device+0xac/0xb0
  [ 1.309752] device_add+0x5d0/0x790
  [ 1.313408] __spi_add_device+0x134/0x204
  [ 1.317606] of_register_spi_device+0x3b4/0x590
  [ 1.322348] spi_register_controller+0x47c/0x754
  [ 1.327181] devm_spi_register_controller+0x4c/0xa4
  [ 1.332289] nxp_fspi_probe+0x1cc/0x2b0
  [ 1.336307] platform_probe+0x68/0xc4
  [ 1.340145] really_probe+0xbc/0x29c
  [ 1.343893] __driver_probe_device+0x78/0x12c
  [ 1.348457] driver_probe_device+0xd8/0x15c
  [ 1.352838] __driver_attach+0x90/0x19c
  [ 1.356857] bus_for_each_dev+0x7c/0xdc
  [ 1.360877] driver_attach+0x24/0x30
  [ 1.364624] bus_add_driver+0xe4/0x208
  [ 1.368552] driver_register+0x5c/0x124
  [ 1.372573] __platform_driver_register+0x28/0x34
  [ 1.377497] nxp_fspi_driver_init+0x1c/0x28
  [ 1.381888] do_one_initcall+0x80/0x1c8
  [ 1.385908] kernel_init_freeable+0x1c4/0x28c
  [ 1.390472] kernel_init+0x20/0x1d8
  [ 1.394138] ret_from_fork+0x10/0x20
  [ 1.397885] ---[ end trace 0000000000000000 ]---
  [ 1.407908] ------------[ cut here ]------------

Fixes: ef89fd5 ("arm64: dts: imx8ulp: add flexspi node")
Cc: stable@kernel.org
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
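For context on why a wrong compatible string causes this: the driver picks per-SoC parameters, including the number of LUTs, based on the compatible it matches, so a node claiming "nxp,imx8mm-fspi" on an i.MX8ULP tells the driver it has twice the LUTs the hardware really provides, and programming an out-of-range LUT index trips the WARN in nxp_fspi_exec_op(). A sketch of that standard pattern (the struct and data values here are illustrative, not the actual spi-nxp-fspi source):

  #include <linux/mod_devicetable.h>
  #include <linux/of_device.h>

  /* Illustrative per-SoC data keyed by the compatible string. */
  struct fspi_devtype_data {
      unsigned int lut_num;
  };

  static const struct fspi_devtype_data imx8mm_data  = { .lut_num = 32 };
  static const struct fspi_devtype_data imx8ulp_data = { .lut_num = 16 };

  static const struct of_device_id fspi_dt_ids[] = {
      { .compatible = "nxp,imx8mm-fspi",  .data = &imx8mm_data  },
      { .compatible = "nxp,imx8ulp-fspi", .data = &imx8ulp_data },
      { /* sentinel */ }
  };

  /* In probe, the matched entry's data is retrieved with
   * of_device_get_match_data(dev), so the DT compatible directly decides
   * how many LUTs the driver believes it may program. */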
kernel-patches-daemon-bpf-rc bot pushed a commit that referenced this pull request on Dec 21, 2024:
…uctions

Add the following ./test_progs tests:

  * atomics/load_acquire
  * atomics/store_release
  * arena_atomics/load_acquire
  * arena_atomics/store_release

They depend on the pre-defined __BPF_FEATURE_LOAD_ACQ_STORE_REL feature macro, which implies -mcpu>=v4.

  $ ALLOWLIST=atomics/load_acquire,atomics/store_release,
  $ ALLOWLIST+=arena_atomics/load_acquire,arena_atomics/store_release
  $ ./test_progs-cpuv4 -a $ALLOWLIST
  #3/9 arena_atomics/load_acquire:OK
  #3/10 arena_atomics/store_release:OK
  ...
  #10/8 atomics/load_acquire:OK
  #10/9 atomics/store_release:OK

  $ ./test_progs -v -a $ALLOWLIST
  test_load_acquire:SKIP:Clang does not support BPF load-acquire or addr_space_cast
  #3/9 arena_atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release or addr_space_cast
  #3/10 arena_atomics/store_release:SKIP
  ...
  test_load_acquire:SKIP:Clang does not support BPF load-acquire
  #10/8 atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release
  #10/9 atomics/store_release:SKIP

Additionally, add several ./test_verifier tests:

  #65/u atomic BPF_LOAD_ACQ access through non-pointer OK
  #65/p atomic BPF_LOAD_ACQ access through non-pointer OK
  #66/u atomic BPF_STORE_REL access through non-pointer OK
  #66/p atomic BPF_STORE_REL access through non-pointer OK
  #67/u BPF_ATOMIC load-acquire, 8-bit OK
  #67/p BPF_ATOMIC load-acquire, 8-bit OK
  #68/u BPF_ATOMIC load-acquire, 16-bit OK
  #68/p BPF_ATOMIC load-acquire, 16-bit OK
  #69/u BPF_ATOMIC load-acquire, 32-bit OK
  #69/p BPF_ATOMIC load-acquire, 32-bit OK
  #70/u BPF_ATOMIC load-acquire, 64-bit OK
  #70/p BPF_ATOMIC load-acquire, 64-bit OK
  #71/u Cannot load-acquire from uninitialized src_reg OK
  #71/p Cannot load-acquire from uninitialized src_reg OK
  #76/u BPF_ATOMIC store-release, 8-bit OK
  #76/p BPF_ATOMIC store-release, 8-bit OK
  #77/u BPF_ATOMIC store-release, 16-bit OK
  #77/p BPF_ATOMIC store-release, 16-bit OK
  #78/u BPF_ATOMIC store-release, 32-bit OK
  #78/p BPF_ATOMIC store-release, 32-bit OK
  #79/u BPF_ATOMIC store-release, 64-bit OK
  #79/p BPF_ATOMIC store-release, 64-bit OK
  #80/u Cannot store-release from uninitialized src_reg OK
  #80/p Cannot store-release from uninitialized src_reg OK

Reviewed-by: Josh Don <joshdon@google.com>
Signed-off-by: Peilin Ye <yepeilin@google.com>
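For reference, such selftests gate on the feature macro and can emit the new instructions through Clang's __atomic builtins; a minimal sketch of the BPF-side pattern (variable and function names are illustrative):

  /* Build with: clang --target=bpf -mcpu=v4 -O2 -c ... */
  unsigned long long shared;

  static inline unsigned long long load_then_store(void)
  {
  #ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
      /* Compiles to BPF_LOAD_ACQ / BPF_STORE_REL on -mcpu>=v4. */
      unsigned long long v = __atomic_load_n(&shared, __ATOMIC_ACQUIRE);
      __atomic_store_n(&shared, v + 1, __ATOMIC_RELEASE);
      return v;
  #else
      return 0;  /* older Clang: the subtests are skipped, as shown above */
  #endif
  }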
Pull request for series with
subject: bpf: Refuse to mount bpffs on the same mount point multiple times
version: 1
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=617159