
bpftool: bpf skeletons assert type sizes #70

Closed
wants to merge 2 commits

Conversation

kernel-patches-bot

Pull request for series with
subject: bpftool: bpf skeletons assert type sizes
version: 4
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=617326

@kernel-patches-bot

Master branch: a19df71
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=617326
version: 4

@kernel-patches-bot

Master branch: 08894d9
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=617326
version: 4

Nobody and others added 2 commits February 23, 2022 15:27
When emitting type declarations in skeletons, bpftool will now also emit
static assertions on the size of the data/bss/rodata/etc. fields. This
ensures that in situations where userspace and kernel types have the same
name but differ in size, we do not silently produce incorrect results but
instead break the build.

This was reported in [1], and as expected, the repro in [2] fails to build
on the new size assert after this change.

  [1]: Closes: libbpf/libbpf#433
  [2]: https://github.com/fuweid/iovisor-bcc-pr-3777

Signed-off-by: Delyan Kratunov <delyank@fb.com>
Acked-by: Hengqi Chen <hengqi.chen@gmail.com>
Tested-by: Hengqi Chen <hengqi.chen@gmail.com>
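
For illustration, the generated skeleton header gains an assert helper along
these lines (a minimal sketch based on the printf format visible in the
conflict diff further down; the skeleton name "example" and the .bss variable
"counter" are hypothetical, and the exact function name and placement of the
C++ guards differ between the two conflicting variants):

  __attribute__((unused)) static void
  example__assert(struct example *s)
  {
  #ifdef __cplusplus
  #define _Static_assert static_assert
  #endif
          /* one assert per non-static variable in each memory-mapped section */
          _Static_assert(sizeof(s->bss->counter) == 4, "unexpected size of 'counter'");
  #ifdef __cplusplus
  #undef _Static_assert
  #endif
  }
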
@kernel-patches-bot

Master branch: c561d11
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=617326
version: 4

@kernel-patches-bot

Master branch: 08d4dba
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=617326
version: 4

Pull request is NOT updated. Failed to apply https://patchwork.kernel.org/project/netdevbpf/list/?series=617326
error message:

Cmd('git') failed due to: exit code(128)
  cmdline: git am -3
  stdout: 'Applying: bpftool: bpf skeletons assert type sizes
Using index info to reconstruct a base tree...
M	tools/bpf/bpftool/gen.c
Falling back to patching base and 3-way merge...
Auto-merging tools/bpf/bpftool/gen.c
CONFLICT (content): Merge conflict in tools/bpf/bpftool/gen.c
Patch failed at 0001 bpftool: bpf skeletons assert type sizes
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".'
  stderr: 'error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch'

conflict:

diff --cc tools/bpf/bpftool/gen.c
index 145734b4fe41,a42545bcb12d..000000000000
--- a/tools/bpf/bpftool/gen.c
+++ b/tools/bpf/bpftool/gen.c
@@@ -209,8 -209,10 +209,15 @@@ static int codegen_datasec_def(struct b
  	return 0;
  }
  
++<<<<<<< HEAD
 +static const struct btf_type *find_type_for_map(struct btf *btf, const char *map_ident)
 +{
++=======
+ static const struct btf_type *find_type_for_map(struct bpf_object *obj,
+ 						const char *map_ident)
+ {
+ 	struct btf *btf = bpf_object__btf(obj);
++>>>>>>> bpftool: bpf skeletons assert type sizes
  	int n = btf__type_cnt(btf), i;
  	char sec_ident[256];
  
@@@ -255,7 -257,7 +262,11 @@@ static int codegen_datasecs(struct bpf_
  		if (!get_map_ident(map, map_ident, sizeof(map_ident)))
  			continue;
  
++<<<<<<< HEAD
 +		sec = find_type_for_map(btf, map_ident);
++=======
+ 		sec = find_type_for_map(obj, map_ident);
++>>>>>>> bpftool: bpf skeletons assert type sizes
  
  		/* In some cases (e.g., sections like .rodata.cst16 containing
  		 * compiler allocated string constants only) there will be
@@@ -368,7 -370,8 +379,12 @@@ static size_t bpf_map_mmap_sz(const str
  	return map_sz;
  }
  
++<<<<<<< HEAD
 +/* Emit type size asserts for all top-level fields in memory-mapped internal maps. */
++=======
+ /* Emit type size asserts for all top-level fields in memory-mapped internal maps.
+  */
++>>>>>>> bpftool: bpf skeletons assert type sizes
  static void codegen_asserts(struct bpf_object *obj, const char *obj_name)
  {
  	struct btf *btf = bpf_object__btf(obj);
@@@ -380,12 -383,14 +396,23 @@@
  
  	codegen("\
  		\n\
++<<<<<<< HEAD
 +		__attribute__((unused)) static void			    \n\
 +		%1$s__assert(struct %1$s *s)				    \n\
 +		{							    \n\
 +		#ifdef __cplusplus					    \n\
 +		#define _Static_assert static_assert			    \n\
 +		#endif							    \n\
++=======
+ 									    \n\
+ 		#ifdef __cplusplus					    \n\
+ 		#define _Static_assert static_assert			    \n\
+ 		#endif							    \n\
+ 									    \n\
+ 		__attribute__((unused)) static void			    \n\
+ 		%1$s__type_asserts(struct %1$s *s)			    \n\
+ 		{							    \n\
++>>>>>>> bpftool: bpf skeletons assert type sizes
  		", obj_name);
  
  	bpf_object__for_each_map(map, obj) {
@@@ -396,7 -401,7 +423,11 @@@
  		if (!get_map_ident(map, map_ident, sizeof(map_ident)))
  			continue;
  
++<<<<<<< HEAD
 +		sec = find_type_for_map(btf, map_ident);
++=======
+ 		sec = find_type_for_map(obj, map_ident);
++>>>>>>> bpftool: bpf skeletons assert type sizes
  		if (!sec) {
  			/* best effort, couldn't find the type for this map */
  			continue;
@@@ -408,33 -413,35 +439,61 @@@
  		for (i = 0; i < vlen; i++, sec_var++) {
  			const struct btf_type *var = btf__type_by_id(btf, sec_var->type);
  			const char *var_name = btf__name_by_offset(btf, var->name_off);
++<<<<<<< HEAD
 +			long var_size;
 +
 +			/* static variables are not exposed through BPF skeleton */
 +			if (btf_var(var)->linkage == BTF_VAR_STATIC)
 +				continue;
 +
 +			var_size = btf__resolve_size(btf, var->type);
 +			if (var_size < 0)
++=======
+ 			__u32 var_type_id = var->type;
+ 			__s64 var_size = btf__resolve_size(btf, var_type_id);
+ 
+ 			if (var_size < 0)
+ 				continue;
+ 
+ 			/* static variables are not exposed through BPF skeleton */
+ 			if (btf_var(var)->linkage == BTF_VAR_STATIC)
++>>>>>>> bpftool: bpf skeletons assert type sizes
  				continue;
  
  			var_ident[0] = '\0';
  			strncat(var_ident, var_name, sizeof(var_ident) - 1);
  			sanitize_identifier(var_ident);
  
++<<<<<<< HEAD
 +			printf("\t_Static_assert(sizeof(s->%s->%s) == %ld, \"unexpected size of '%s'\");\n",
 +			       map_ident, var_ident, var_size, var_ident);
++=======
+ 			printf("\t_Static_assert(sizeof(s->%1$s->%2$s) == %3$lld, \"unexpected size of '%2$s'\");\n",
+ 			       map_ident, var_ident, var_size);
++>>>>>>> bpftool: bpf skeletons assert type sizes
  		}
  	}
  	codegen("\
  		\n\
++<<<<<<< HEAD
 +		#ifdef __cplusplus					    \n\
 +		#undef _Static_assert					    \n\
 +		#endif							    \n\
 +		}							    \n\
 +		");
 +}
 +
++=======
+ 		}							    \n\
+ 									    \n\
+ 		#ifdef __cplusplus					    \n\
+ 		#undef _Static_assert					    \n\
+ 		#endif							    \n\
+ 		");
+ }
+ 
+ 
++>>>>>>> bpftool: bpf skeletons assert type sizes
  static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name)
  {
  	struct bpf_program *prog;

@kernel-patches-bot

At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=617326 irrelevant now. Closing PR.

kernel-patches-bot deleted the series/617326=>bpf-next branch February 24, 2022 01:45
kernel-patches-bot pushed a commit that referenced this pull request Jun 9, 2022
The srv_mutex is used during writeback, so cifs should ensure that
allocations done while that mutex is held are done with GFP_NOFS, to
avoid having direct reclaim end up waiting for the same mutex and
causing a deadlock.  This is detected by lockdep with the splat below:

 ======================================================
 WARNING: possible circular locking dependency detected
 5.18.0 #70 Not tainted
 ------------------------------------------------------
 kswapd0/49 is trying to acquire lock:
 ffff8880195782e0 (&tcp_ses->srv_mutex){+.+.}-{3:3}, at: compound_send_recv

 but task is already holding lock:
 ffffffffa98e66c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat

 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #1 (fs_reclaim){+.+.}-{0:0}:
        fs_reclaim_acquire
        kmem_cache_alloc_trace
        __request_module
        crypto_alg_mod_lookup
        crypto_alloc_tfm_node
        crypto_alloc_shash
        cifs_alloc_hash
        smb311_crypto_shash_allocate
        smb311_update_preauth_hash
        compound_send_recv
        cifs_send_recv
        SMB2_negotiate
        smb2_negotiate
        cifs_negotiate_protocol
        cifs_get_smb_ses
        cifs_mount
        cifs_smb3_do_mount
        smb3_get_tree
        vfs_get_tree
        path_mount
        __x64_sys_mount
        do_syscall_64
        entry_SYSCALL_64_after_hwframe

 -> #0 (&tcp_ses->srv_mutex){+.+.}-{3:3}:
        __lock_acquire
        lock_acquire
        __mutex_lock
        mutex_lock_nested
        compound_send_recv
        cifs_send_recv
        SMB2_write
        smb2_sync_write
        cifs_write
        cifs_writepage_locked
        cifs_writepage
        shrink_page_list
        shrink_lruvec
        shrink_node
        balance_pgdat
        kswapd
        kthread
        ret_from_fork

 other info that might help us debug this:

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(fs_reclaim);
                                lock(&tcp_ses->srv_mutex);
                                lock(fs_reclaim);
   lock(&tcp_ses->srv_mutex);

  *** DEADLOCK ***

 1 lock held by kswapd0/49:
  #0: ffffffffa98e66c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat

 stack backtrace:
 CPU: 2 PID: 49 Comm: kswapd0 Not tainted 5.18.0 #70
 Call Trace:
  <TASK>
  dump_stack_lvl
  dump_stack
  print_circular_bug.cold
  check_noncircular
  __lock_acquire
  lock_acquire
  __mutex_lock
  mutex_lock_nested
  compound_send_recv
  cifs_send_recv
  SMB2_write
  smb2_sync_write
  cifs_write
  cifs_writepage_locked
  cifs_writepage
  shrink_page_list
  shrink_lruvec
  shrink_node
  balance_pgdat
  kswapd
  kthread
  ret_from_fork
  </TASK>

Fix this by using the memalloc_nofs_save/restore APIs around the places
where the srv_mutex is held.  Do this in a wrapper function for the
lock/unlock of the srv_mutex, and rename the srv_mutex to avoid missing
call sites in the conversion.
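
A minimal sketch of what such a wrapper can look like (the helper names, the
renamed _srv_mutex field, and the saved nofs_flag member are assumptions for
illustration; memalloc_nofs_save/restore are the real APIs from
linux/sched/mm.h):

  /* Take the server mutex with direct reclaim restricted to GFP_NOFS. */
  static inline void cifs_server_lock(struct TCP_Server_Info *server)
  {
          unsigned int nofs_flag = memalloc_nofs_save();

          mutex_lock(&server->_srv_mutex);
          server->nofs_flag = nofs_flag;
  }

  static inline void cifs_server_unlock(struct TCP_Server_Info *server)
  {
          unsigned int nofs_flag = server->nofs_flag;

          mutex_unlock(&server->_srv_mutex);
          /* only re-enable FS reclaim once the mutex is dropped */
          memalloc_nofs_restore(nofs_flag);
  }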

Note that there is another lockdep warning involving internal crypto
locks, which was masked by this problem and is visible after this fix,
see the discussion in this thread:

 https://lore.kernel.org/all/20220523123755.GA13668@axis.com/

Link: https://lore.kernel.org/r/CANT5p=rqcYfYMVHirqvdnnca4Mo+JQSw5Qu12v=kPfpk5yhhmg@mail.gmail.com/
Reported-by: Shyam Prasad N <nspmangalore@gmail.com>
Suggested-by: Lars Persson <larper@axis.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
Reviewed-by: Enzo Matsumiya <ematsumiya@suse.de>
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
kernel-patches-daemon-bpf-rc bot pushed a commit that referenced this pull request Dec 21, 2023
… when the system is sleeping

Some channels may be masked. When the system is suspended, if these
masked channels are not filtered out, this leads to a null pointer
dereference and a system crash:

Unable to handle kernel NULL pointer dereference at virtual address
Mem abort info:
ESR = 0x0000000096000004
EC = 0x25: DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
FSC = 0x04: level 0 translation fault
Data abort info:
ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
CM = 0, WnR = 0, TnD = 0, TagAccess = 0
GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=0000000894300000
[00000000000002a0] pgd=0000000000000000, p4d=0000000000000000
Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 989 Comm: sh Tainted: G B 6.6.0-16203-g557fb7a3ec4c-dirty #70
Hardware name: Freescale i.MX8QM MEK (DT)
pstate: 400000c5 (nZcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  pc: fsl_edma_disable_request+0x3c/0x78
  lr: fsl_edma_disable_request+0x3c/0x78
  sp : ffff800089ae7690
  x29: ffff800089ae7690 x28: ffff000807ab5440 x27: ffff000807ab5830
  x26: 0000000000000008 x25: 0000000000000278 x24: 0000000000000001
  x23: ffff000807ab4328 x22: 0000000000000000 x21: 0000000000000009
  x20: ffff800082616940 x19: 0000000000000000 x18: 0000000000000000
  x17: 3d3d3d3d3d3d3d3d x16: 3d3d3d3d3d3d3d3d x15: 3d3d3d3d3d3d3d3d
  x14: 3d3d3d3d3d3d3d3d x13: 3d3d3d3d3d3d3d3d x12: 1ffff00010d45724
  x11: ffff700010d45724 x10: dfff800000000000 x9: dfff800000000000
  x8: 00008fffef2ba8dc x7: 0000000000000001 x6: ffff800086a2b927
  x5: ffff800086a2b920 x4: ffff700010d45725 x3: ffff8000800d5bbc
  x2 : 0000000000000000 x1 : ffff000800c1d880 x0 : 0000000000000001
  Call trace:
   fsl_edma_disable_request+0x3c/0x78
   fsl_edma_suspend_late+0x128/0x12c
  dpm_run_callback+0xd4/0x304
   __device_suspend_late+0xd0/0x240
  dpm_suspend_late+0x174/0x59c
  suspend_devices_and_enter+0x194/0xd00
  pm_suspend+0x3c4/0x910

Fixes: 72f5801 ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Xiaolei Wang <xiaolei.wang@windriver.com>
Link: https://lore.kernel.org/r/20231113225713.1892643-2-xiaolei.wang@windriver.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
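
A rough sketch of the fix idea (struct and field names such as
fsl_edma_engine, n_chans and chan_masked are assumptions for illustration,
not necessarily the exact ones in the driver): the suspend/resume callbacks
skip any channel that was masked off at probe time, so its uninitialized
state is never dereferenced.

  static int fsl_edma_suspend_late(struct device *dev)
  {
          struct fsl_edma_engine *fsl_edma = dev_get_drvdata(dev);
          int i;

          for (i = 0; i < fsl_edma->n_chans; i++) {
                  /* masked channels were never set up; touching them oopses */
                  if (fsl_edma->chan_masked & BIT(i))
                          continue;

                  /* ... existing per-channel save/disable logic ... */
          }

          return 0;
  }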
kernel-patches-daemon-bpf-rc bot pushed a commit that referenced this pull request Dec 21, 2024
…uctions

Add the following ./test_progs tests:

  * atomics/load_acquire
  * atomics/store_release
  * arena_atomics/load_acquire
  * arena_atomics/store_release

They depend on the pre-defined __BPF_FEATURE_LOAD_ACQ_STORE_REL feature
macro, which implies -mcpu>=v4.

  $ ALLOWLIST=atomics/load_acquire,atomics/store_release,
  $ ALLOWLIST+=arena_atomics/load_acquire,arena_atomics/store_release

  $ ./test_progs-cpuv4 -a $ALLOWLIST

  #3/9     arena_atomics/load_acquire:OK
  #3/10    arena_atomics/store_release:OK
...
  #10/8    atomics/load_acquire:OK
  #10/9    atomics/store_release:OK

  $ ./test_progs -v -a $ALLOWLIST

  test_load_acquire:SKIP:Clang does not support BPF load-acquire or addr_space_cast
  #3/9     arena_atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release or addr_space_cast
  #3/10    arena_atomics/store_release:SKIP
...
  test_load_acquire:SKIP:Clang does not support BPF load-acquire
  #10/8    atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release
  #10/9    atomics/store_release:SKIP

Additionally, add several ./test_verifier tests:

  #65/u atomic BPF_LOAD_ACQ access through non-pointer  OK
  #65/p atomic BPF_LOAD_ACQ access through non-pointer  OK
  #66/u atomic BPF_STORE_REL access through non-pointer  OK
  #66/p atomic BPF_STORE_REL access through non-pointer  OK

  #67/u BPF_ATOMIC load-acquire, 8-bit OK
  #67/p BPF_ATOMIC load-acquire, 8-bit OK
  #68/u BPF_ATOMIC load-acquire, 16-bit OK
  #68/p BPF_ATOMIC load-acquire, 16-bit OK
  #69/u BPF_ATOMIC load-acquire, 32-bit OK
  #69/p BPF_ATOMIC load-acquire, 32-bit OK
  #70/u BPF_ATOMIC load-acquire, 64-bit OK
  #70/p BPF_ATOMIC load-acquire, 64-bit OK
  #71/u Cannot load-acquire from uninitialized src_reg OK
  #71/p Cannot load-acquire from uninitialized src_reg OK

  #76/u BPF_ATOMIC store-release, 8-bit OK
  #76/p BPF_ATOMIC store-release, 8-bit OK
  #77/u BPF_ATOMIC store-release, 16-bit OK
  #77/p BPF_ATOMIC store-release, 16-bit OK
  #78/u BPF_ATOMIC store-release, 32-bit OK
  #78/p BPF_ATOMIC store-release, 32-bit OK
  #79/u BPF_ATOMIC store-release, 64-bit OK
  #79/p BPF_ATOMIC store-release, 64-bit OK
  #80/u Cannot store-release from uninitialized src_reg OK
  #80/p Cannot store-release from uninitialized src_reg OK

Reviewed-by: Josh Don <joshdon@google.com>
Signed-off-by: Peilin Ye <yepeilin@google.com>
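
A sketch of how a BPF-side test can gate on that feature macro (illustrative
only, not the actual selftest source; the section name and globals are made
up):

  // SPDX-License-Identifier: GPL-2.0
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  long src = 42;
  long result;

  SEC("raw_tp/sys_enter")
  int test_load_acquire(const void *ctx)
  {
  #ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
          /* with -mcpu=v4 or newer, Clang can emit a BPF_LOAD_ACQ instruction here */
          result = __atomic_load_n(&src, __ATOMIC_ACQUIRE);
  #else
          /* older compilers lack the instruction; userspace marks the test SKIP */
  #endif
          return 0;
  }

  char _license[] SEC("license") = "GPL";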