Commit 9de3e81

Alexei Starovoitov authored and Daniel Borkmann committed
bpf: Let free_all() return the number of freed elements.
Let the free_all() helper return the number of freed elements. It's not used in this patch, but helps in debug/development of bpf_mem_alloc.

For example, this diff for __free_rcu():

-       free_all(llist_del_all(&c->waiting_for_gp_ttrace), !!c->percpu_size);
+       printk("cpu %d freed %d objs after tasks trace\n", raw_smp_processor_id(),
+              free_all(llist_del_all(&c->waiting_for_gp_ttrace), !!c->percpu_size));

would show how busy RCU tasks trace is. In an artificial benchmark where one cpu is allocating and a different cpu is freeing, the RCU tasks trace won't be able to keep up and the list of objects would keep growing from thousands to millions, eventually OOMing.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/bpf/20230706033447.54696-4-alexei.starovoitov@gmail.com
1 parent: a80672d

1 file changed: +6 additions, -2 deletions

kernel/bpf/memalloc.c

@@ -223,12 +223,16 @@ static void free_one(void *obj, bool percpu)
 	kfree(obj);
 }
 
-static void free_all(struct llist_node *llnode, bool percpu)
+static int free_all(struct llist_node *llnode, bool percpu)
 {
 	struct llist_node *pos, *t;
+	int cnt = 0;
 
-	llist_for_each_safe(pos, t, llnode)
+	llist_for_each_safe(pos, t, llnode) {
 		free_one(pos, percpu);
+		cnt++;
+	}
+	return cnt;
 }
 
 static void __free_rcu(struct rcu_head *head)
