Commit a9921ce

Author: Alexei Starovoitov <ast@kernel.org>

Merge branch 'mvneta: introduce XDP multi-buffer support'
Lorenzo Bianconi says:

====================

This series introduces XDP frags support. The mvneta driver is the first to
support these new "non-linear" xdp_{buff,frame}. Reviewers please focus on how
these new types of xdp_{buff,frame} packets traverse the different layers and
on the layout design. It is on purpose that BPF helpers are kept simple, as we
don't want to expose the internal layout to allow later changes.

The main idea for the new XDP frags layout is to reuse the same structure used
for non-linear SKBs. This relies on the skb_shared_info struct at the end of
the first buffer to link together subsequent buffers. Keeping the layout
compatible with SKBs is also done to ease and speed up creating an SKB from an
xdp_{buff,frame}. Converting an xdp_frame to an SKB and delivering it to the
network stack is shown in patch 05/18 (e.g. cpumaps).

A frags bit (XDP_FLAGS_HAS_FRAGS) has been introduced in the flags field of the
xdp_{buff,frame} structure to notify the bpf/network layer whether this is a
non-linear xdp frame (XDP_FLAGS_HAS_FRAGS set) or not. The frags bit will be
set by an xdp-frags-capable driver only for non-linear frames, maintaining the
capability to receive linear frames without any extra cost, since the
skb_shared_info structure at the end of the first buffer is initialized only if
the XDP_FLAGS_HAS_FRAGS bit is set. Moreover, the flags field in
xdp_{buff,frame} will be reused for xdp rx csum offloading in a future series.

Typical use cases for this series are:
- Jumbo frames
- Packet header split (please see Google's use case @ NetDevConf 0x14 [0])
- TSO/GRO for XDP_REDIRECT

The three following eBPF helpers (and related selftests) have been introduced:
- bpf_xdp_load_bytes: provided as an easy way to load data from an xdp buffer;
  it can be used to load len bytes from offset from the frame associated to
  xdp_md into the buffer pointed to by buf.
- bpf_xdp_store_bytes: store len bytes from buffer buf into the frame
  associated to xdp_md, at offset.
- bpf_xdp_get_buff_len: return the total frame size (linear + paged parts).

The bpf_xdp_adjust_tail and bpf_xdp_copy helpers have been modified to take
non-linear xdp frames into account. Moreover, similar to skb_header_pointer,
we introduced the bpf_xdp_pointer utility routine to return a pointer to a
given position in the xdp_buff if the requested area (offset + len) is
contained in a contiguous memory area; otherwise it must be copied into a
bounce buffer provided by the caller running bpf_xdp_copy_buf().

The BPF_F_XDP_HAS_FRAGS flag has been introduced to notify the kernel that the
eBPF program fully supports xdp frags. SEC("xdp.frags"),
SEC_DEF("xdp.frags/devmap") and SEC_DEF("xdp.frags/cpumap") have been
introduced to declare xdp frags support. The NIC driver is expected to reject
an eBPF program if it is running in XDP frags mode and the program does not
support XDP frags. In the same way, it is not possible to mix XDP frags and
XDP legacy programs in a CPUMAP/DEVMAP, or to tail-call an XDP frags/legacy
program from a legacy/frags one.

More info about the main idea behind this approach can be found here [1][2].
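As a rough illustration of how these helpers are meant to be used from a
frags-aware program, the sketch below (not part of this series; the buffer
size and program name are made up) reads the first bytes of a possibly
non-linear frame via the bounce-buffer helper:

```
// Hypothetical frags-aware XDP program sketch (requires a libbpf with
// SEC("xdp.frags") support and kernel headers exporting the new helpers).
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp.frags")		/* declares full xdp frags support */
int xdp_frags_prog(struct xdp_md *ctx)
{
	__u8 buf[16];		/* bounce buffer on the program stack */

	/* total frame size: linear + paged parts */
	__u64 len = bpf_xdp_get_buff_len(ctx);

	if (len < sizeof(buf))
		return XDP_DROP;

	/* safe even if [0, sizeof(buf)) crosses into a fragment */
	if (bpf_xdp_load_bytes(ctx, 0, buf, sizeof(buf)) < 0)
		return XDP_DROP;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```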
Changes since v22:
- remove leftover CHECK macro usage
- reintroduce SEC_XDP_FRAGS flag in sec_def_flags
- rename xdp multi_frags to xdp frags
- do not report xdp_frags support in fdinfo

Changes since v21:
- rename *_mb to *_frags: e.g. s/xdp_buff_is_mb/xdp_buff_has_frags
- rely on ASSERT_* and not on CHECK in bpf_xdp_load_bytes/bpf_xdp_store_bytes
  self-tests
- change new multi.frags SEC definitions to use the following schema:
  prog_type.prog_flags/attach_place
- get rid of unnecessary properties in new multi.frags SEC definitions
- rebase on top of bpf-next

Changes since v20:
- rebase to current bpf-next

Changes since v19:
- do not run deprecated bpf_prog_load()
- rely on skb_frag_size_add/skb_frag_size_sub in
  bpf_xdp_mb_increase_tail/bpf_xdp_mb_shrink_tail
- rely on sinfo->nr_frags in bpf_xdp_mb_shrink_tail to check if the frame has
  been shrunk to a single-buffer one
- allow XDP_REDIRECT of an xdp-mb frame into a CPUMAP

Changes since v18:
- fix bpf_xdp_copy_buf utility routine when we want to load/store data
  contained in frag<n>
- add a selftest for bpf_xdp_load_bytes/bpf_xdp_store_bytes when the caller
  accesses data contained in frag<n> and frag<n+1>

Changes since v17:
- rework bpf_xdp_copy to squash base and frag management
- remove unused variable in bpf_xdp_mb_shrink_tail()
- move bpf_xdp_copy_buf() out of bpf_xdp_pointer()
- add sanity check for len in bpf_xdp_pointer()
- remove EXPORT_SYMBOL for __xdp_return()
- introduce frag_size field in xdp_rxq_info to let the driver specify the max
  value for xdp fragments; frag_size set to 0 means the tail increase of the
  last fragment is not supported

Changes since v16:
- do not allow tail-calling an xdp multi-buffer/legacy program from a
  legacy/multi-buffer one
- do not allow mixing xdp multi-buffer and xdp legacy programs in a
  CPUMAP/DEVMAP
- add selftests for CPUMAP/DEVMAP xdp mb compatibility
- disable XDP_REDIRECT for xdp multi-buff for the moment
- set max offset value to 0xffff in bpf_xdp_pointer
- use ARG_PTR_TO_UNINIT_MEM and ARG_CONST_SIZE for arg3_type and arg4_type of
  bpf_xdp_store_bytes/bpf_xdp_load_bytes

Changes since v15:
- let the verifier check that buf is not NULL in the
  bpf_xdp_load_bytes/bpf_xdp_store_bytes helpers
- return an error if offset + length is over frame boundaries in the
  bpf_xdp_pointer routine
- introduce BPF_F_XDP_MB flag for bpf_attr to notify the kernel that the eBPF
  program fully supports xdp multi-buffer
- reject a non XDP multi-buffer program if the driver is running in XDP
  multi-buffer mode

Changes since v14:
- introduce bpf_xdp_pointer utility routine and
  bpf_xdp_load_bytes/bpf_xdp_store_bytes helpers
- drop bpf_xdp_adjust_data helper
- drop xdp_frags_truesize in skb_shared_info
- split bpf_xdp_mb_adjust_tail into bpf_xdp_mb_increase_tail and
  bpf_xdp_mb_shrink_tail

Changes since v13:
- use u32 for xdp_buff/xdp_frame flags field
- rename xdp_frags_tsize to xdp_frags_truesize
- fixed comments

Changes since v12:
- fix bpf_xdp_adjust_data helper for the single-buffer use case
- return -EFAULT in bpf_xdp_adjust_{head,tail} in case the data pointers are
  not properly reset
- collect ACKs from John

Changes since v11:
- add missing static to bpf_xdp_get_buff_len_proto structure
- fix bpf_xdp_adjust_data helper when offset is smaller than the linear area
  length

Changes since v10:
- move xdp->data to the requested payload offset instead of to the beginning
  of the fragment in bpf_xdp_adjust_data()

Changes since v9:
- introduce bpf_xdp_adjust_data helper and related selftest
- add xdp_frags_size and xdp_frags_tsize fields in skb_shared_info
- introduce xdp_update_skb_shared_info utility routine in order to not reset
  the frags array in skb_shared_info when converting from an
  xdp_buff/xdp_frame to an skb
- simplify bpf_xdp_copy routine

Changes since v8:
- add proper dma unmapping if XDP_TX fails on mvneta for an xdp multi-buff
- switch back to the skb_shared_info implementation from the previous
  xdp_shared_info one
- avoid using a bitfield in xdp_buff/xdp_frame since it introduces performance
  regressions; tested now on a 10G NIC (ixgbe) to verify there are no
  performance penalties for the regular codebase
- add bpf_xdp_get_buff_len helper and remove frame_length field in xdp ctx
- add data_len field in skb_shared_info struct
- introduce XDP_FLAGS_FRAGS_PF_MEMALLOC flag

Changes since v7:
- rebase on top of bpf-next
- fix sparse warnings
- improve comments for frame_length in include/net/xdp.h

Changes since v6:
- the main difference with respect to previous versions is the new approach
  proposed by Eelco to pass the full length of the packet to the eBPF layer in
  the XDP context
- reintroduce multi-buff support to eBPF self-tests
- reintroduce multi-buff support to bpf_xdp_adjust_tail helper
- introduce multi-buffer support to bpf_xdp_copy helper
- rebase on top of bpf-next

Changes since v5:
- rebase on top of bpf-next
- initialize mb bit in xdp_init_buff() and drop per-driver initialization
- drop xdp->mb initialization in xdp_convert_zc_to_xdp_frame()
- postpone introduction of frame_length field in XDP ctx to another series
- minor changes

Changes since v4:
- rebase on top of bpf-next
- introduce xdp_shared_info to build xdp multi-buff instead of using the
  skb_shared_info struct
- introduce frame_length in xdp ctx
- drop previous bpf helpers
- fix bpf_xdp_adjust_tail for xdp multi-buff
- introduce xdp multi-buff self-tests for bpf_xdp_adjust_tail
- fix xdp_return_frame_bulk for xdp multi-buff

Changes since v3:
- rebase on top of bpf-next
- add patch 10/13 to copy back paged data from an xdp multi-buff frame to a
  userspace buffer for xdp multi-buff selftests

Changes since v2:
- add throughput measurements
- drop bpf_xdp_adjust_mb_header bpf helper
- introduce selftest for xdp multibuffer
- addressed comments on bpf_xdp_get_frags_count
- introduce xdp multi-buff support to cpumaps

Changes since v1:
- fix use-after-free in xdp_return_{buff/frame}
- introduce bpf helpers
- introduce xdp_mb sample program
- access skb_shared_info->nr_frags only on the last fragment

Changes since RFC:
- squash multi-buffer bit initialization into a single patch
- add mvneta non-linear XDP buff support for the tx side

[0] https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-rx-zerocopy
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[2] https://netdevconf.info/0x14/session.html?tutorial-add-XDP-support-to-a-NIC-driver
    (XDP multi-buffers section)

Eelco Chaudron (3):
  bpf: add frags support to the bpf_xdp_adjust_tail() API
  bpf: add frags support to xdp copy helpers
  bpf: selftests: update xdp_adjust_tail selftest to include xdp frags

Lorenzo Bianconi (19):
  net: skbuff: add size metadata to skb_shared_info for xdp
  xdp: introduce flags field in xdp_buff/xdp_frame
  net: mvneta: update frags bit before passing the xdp buffer to eBPF layer
  net: mvneta: simplify mvneta_swbm_add_rx_fragment management
  net: xdp: add xdp_update_skb_shared_info utility routine
  net: marvell: rely on xdp_update_skb_shared_info utility routine
  xdp: add frags support to xdp_return_{buff/frame}
  net: mvneta: add frags support to XDP_TX
  bpf: introduce BPF_F_XDP_HAS_FRAGS flag in prog_flags loading the ebpf program
  net: mvneta: enable jumbo frames if the loaded XDP program support frags
  bpf: introduce bpf_xdp_get_buff_len helper
  bpf: move user_size out of bpf_test_init
  bpf: introduce frags support to bpf_prog_test_run_xdp()
  bpf: test_run: add xdp_shared_info pointer in bpf_test_finish signature
  libbpf: Add SEC name for xdp frags programs
  net: xdp: introduce bpf_xdp_pointer utility routine
  bpf: selftests: introduce bpf_xdp_{load,store}_bytes selftest
  bpf: selftests: add CPUMAP/DEVMAP selftests for xdp frags
  xdp: disable XDP_REDIRECT for xdp frags

====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 parents 820e6e2 + ab0db46 commit a9921ce

29 files changed: 1326 additions, 201 deletions

drivers/net/ethernet/marvell/mvneta.c

Lines changed: 124 additions & 80 deletions (large diff not rendered)

include/linux/bpf.h

Lines changed: 20 additions & 11 deletions
@@ -194,6 +194,17 @@ struct bpf_map {
 	struct work_struct work;
 	struct mutex freeze_mutex;
 	atomic64_t writecnt;
+	/* 'Ownership' of program-containing map is claimed by the first program
+	 * that is going to use this map or by the first program which FD is
+	 * stored in the map to make sure that all callers and callees have the
+	 * same prog type, JITed flag and xdp_has_frags flag.
+	 */
+	struct {
+		spinlock_t lock;
+		enum bpf_prog_type type;
+		bool jited;
+		bool xdp_has_frags;
+	} owner;
 };
 
 static inline bool map_value_has_spin_lock(const struct bpf_map *map)
@@ -933,6 +944,7 @@ struct bpf_prog_aux {
 	bool func_proto_unreliable;
 	bool sleepable;
 	bool tail_call_reachable;
+	bool xdp_has_frags;
 	struct hlist_node tramp_hlist;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
@@ -993,16 +1005,6 @@ struct bpf_prog_aux {
 };
 
 struct bpf_array_aux {
-	/* 'Ownership' of prog array is claimed by the first program that
-	 * is going to use this map or by the first program which FD is
-	 * stored in the map to make sure that all callers and callees have
-	 * the same prog type and JITed flag.
-	 */
-	struct {
-		spinlock_t lock;
-		enum bpf_prog_type type;
-		bool jited;
-	} owner;
 	/* Programs with direct jumps into programs part of this array. */
 	struct list_head poke_progs;
 	struct bpf_map *map;
@@ -1177,7 +1179,14 @@ struct bpf_event_entry {
 	struct rcu_head rcu;
 };
 
-bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *fp);
+static inline bool map_type_contains_progs(struct bpf_map *map)
+{
+	return map->map_type == BPF_MAP_TYPE_PROG_ARRAY ||
+	       map->map_type == BPF_MAP_TYPE_DEVMAP ||
+	       map->map_type == BPF_MAP_TYPE_CPUMAP;
+}
+
+bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp);
 int bpf_prog_calc_tag(struct bpf_prog *fp);
 
 const struct bpf_func_proto *bpf_get_trace_printk_proto(void);

include/linux/skbuff.h

Lines changed: 1 addition & 0 deletions
@@ -557,6 +557,7 @@ struct skb_shared_info {
 	 * Warning : all fields before dataref are cleared in __alloc_skb()
 	 */
 	atomic_t dataref;
+	unsigned int xdp_frags_size;
 
 	/* Intermediate layers must ensure that destructor_arg
 	 * remains valid until skb destructor */

include/net/xdp.h

Lines changed: 104 additions & 4 deletions
@@ -60,12 +60,20 @@ struct xdp_rxq_info {
 	u32 reg_state;
 	struct xdp_mem_info mem;
 	unsigned int napi_id;
+	u32 frag_size;
 } ____cacheline_aligned; /* perf critical, avoid false-sharing */
 
 struct xdp_txq_info {
 	struct net_device *dev;
 };
 
+enum xdp_buff_flags {
+	XDP_FLAGS_HAS_FRAGS		= BIT(0), /* non-linear xdp buff */
+	XDP_FLAGS_FRAGS_PF_MEMALLOC	= BIT(1), /* xdp paged memory is under
+						   * pressure
+						   */
+};
+
 struct xdp_buff {
 	void *data;
 	void *data_end;
@@ -74,13 +82,40 @@ struct xdp_buff {
 	struct xdp_rxq_info *rxq;
 	struct xdp_txq_info *txq;
 	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 flags; /* supported values defined in xdp_buff_flags */
 };
 
+static __always_inline bool xdp_buff_has_frags(struct xdp_buff *xdp)
+{
+	return !!(xdp->flags & XDP_FLAGS_HAS_FRAGS);
+}
+
+static __always_inline void xdp_buff_set_frags_flag(struct xdp_buff *xdp)
+{
+	xdp->flags |= XDP_FLAGS_HAS_FRAGS;
+}
+
+static __always_inline void xdp_buff_clear_frags_flag(struct xdp_buff *xdp)
+{
+	xdp->flags &= ~XDP_FLAGS_HAS_FRAGS;
+}
+
+static __always_inline bool xdp_buff_is_frag_pfmemalloc(struct xdp_buff *xdp)
+{
+	return !!(xdp->flags & XDP_FLAGS_FRAGS_PF_MEMALLOC);
+}
+
+static __always_inline void xdp_buff_set_frag_pfmemalloc(struct xdp_buff *xdp)
+{
+	xdp->flags |= XDP_FLAGS_FRAGS_PF_MEMALLOC;
+}
+
 static __always_inline void
 xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 {
 	xdp->frame_sz = frame_sz;
 	xdp->rxq = rxq;
+	xdp->flags = 0;
 }
 
 static __always_inline void
@@ -111,6 +146,20 @@ xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
 	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
 }
 
+static __always_inline unsigned int xdp_get_buff_len(struct xdp_buff *xdp)
+{
+	unsigned int len = xdp->data_end - xdp->data;
+	struct skb_shared_info *sinfo;
+
+	if (likely(!xdp_buff_has_frags(xdp)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	len += sinfo->xdp_frags_size;
+out:
+	return len;
+}
+
 struct xdp_frame {
 	void *data;
 	u16 len;
@@ -122,8 +171,19 @@ struct xdp_frame {
 	 */
 	struct xdp_mem_info mem;
 	struct net_device *dev_rx; /* used by cpumap */
+	u32 flags; /* supported values defined in xdp_buff_flags */
 };
 
+static __always_inline bool xdp_frame_has_frags(struct xdp_frame *frame)
+{
+	return !!(frame->flags & XDP_FLAGS_HAS_FRAGS);
+}
+
+static __always_inline bool xdp_frame_is_frag_pfmemalloc(struct xdp_frame *frame)
+{
+	return !!(frame->flags & XDP_FLAGS_FRAGS_PF_MEMALLOC);
+}
+
 #define XDP_BULK_QUEUE_SIZE	16
 struct xdp_frame_bulk {
 	int count;
@@ -159,6 +219,19 @@ static inline void xdp_scrub_frame(struct xdp_frame *frame)
 	frame->dev_rx = NULL;
 }
 
+static inline void
+xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags,
+			   unsigned int size, unsigned int truesize,
+			   bool pfmemalloc)
+{
+	skb_shinfo(skb)->nr_frags = nr_frags;
+
+	skb->len += size;
+	skb->data_len += size;
+	skb->truesize += truesize;
+	skb->pfmemalloc |= pfmemalloc;
+}
+
 /* Avoids inlining WARN macro in fast-path */
 void xdp_warn(const char *msg, const char *func, const int line);
 #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__)
@@ -180,6 +253,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
 	xdp->data_end = frame->data + frame->len;
 	xdp->data_meta = frame->data - frame->metasize;
 	xdp->frame_sz = frame->frame_sz;
+	xdp->flags = frame->flags;
 }
 
 static inline
@@ -206,6 +280,7 @@ int xdp_update_frame_from_buff(struct xdp_buff *xdp,
 	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
 	xdp_frame->metasize = metasize;
 	xdp_frame->frame_sz = xdp->frame_sz;
+	xdp_frame->flags = xdp->flags;
 
 	return 0;
 }
@@ -230,6 +305,8 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
 	return xdp_frame;
 }
 
+void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
+		  struct xdp_buff *xdp);
 void xdp_return_frame(struct xdp_frame *xdpf);
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf);
 void xdp_return_buff(struct xdp_buff *xdp);
@@ -246,14 +323,37 @@ void __xdp_release_frame(void *data, struct xdp_mem_info *mem);
 static inline void xdp_release_frame(struct xdp_frame *xdpf)
 {
 	struct xdp_mem_info *mem = &xdpf->mem;
+	struct skb_shared_info *sinfo;
+	int i;
 
 	/* Curr only page_pool needs this */
-	if (mem->type == MEM_TYPE_PAGE_POOL)
-		__xdp_release_frame(xdpf->data, mem);
+	if (mem->type != MEM_TYPE_PAGE_POOL)
+		return;
+
+	if (likely(!xdp_frame_has_frags(xdpf)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_release_frame(page_address(page), mem);
+	}
+out:
+	__xdp_release_frame(xdpf->data, mem);
+}
+
+int __xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
+		       struct net_device *dev, u32 queue_index,
+		       unsigned int napi_id, u32 frag_size);
+static inline int
+xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
+		 struct net_device *dev, u32 queue_index,
+		 unsigned int napi_id)
+{
+	return __xdp_rxq_info_reg(xdp_rxq, dev, queue_index, napi_id, 0);
 }
 
-int xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
-		     struct net_device *dev, u32 queue_index, unsigned int napi_id);
 void xdp_rxq_info_unreg(struct xdp_rxq_info *xdp_rxq);
 void xdp_rxq_info_unused(struct xdp_rxq_info *xdp_rxq);
 bool xdp_rxq_info_is_reg(struct xdp_rxq_info *xdp_rxq);

include/uapi/linux/bpf.h

Lines changed: 30 additions & 0 deletions
@@ -1113,6 +1113,11 @@ enum bpf_link_type {
  */
 #define BPF_F_SLEEPABLE		(1U << 4)
 
+/* If BPF_F_XDP_HAS_FRAGS is used in BPF_PROG_LOAD command, the loaded program
+ * fully support xdp frags.
+ */
+#define BPF_F_XDP_HAS_FRAGS	(1U << 5)
+
 /* When BPF ldimm64's insn[0].src_reg != 0 then this can have
  * the following extensions:
  *
@@ -5049,6 +5054,28 @@ union bpf_attr {
  *		This helper is currently supported by cgroup programs only.
  *	Return
  *		0 on success, or a negative error in case of failure.
+ *
+ * u64 bpf_xdp_get_buff_len(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of a given xdp buff (linear and paged area)
+ *	Return
+ *		The total size of a given xdp buffer.
+ *
+ * long bpf_xdp_load_bytes(struct xdp_buff *xdp_md, u32 offset, void *buf, u32 len)
+ *	Description
+ *		This helper is provided as an easy way to load data from a
+ *		xdp buffer. It can be used to load *len* bytes from *offset* from
+ *		the frame associated to *xdp_md*, into the buffer pointed by
+ *		*buf*.
+ *	Return
+ *		0 on success, or a negative error in case of failure.
+ *
+ * long bpf_xdp_store_bytes(struct xdp_buff *xdp_md, u32 offset, void *buf, u32 len)
+ *	Description
+ *		Store *len* bytes from buffer *buf* into the frame
+ *		associated to *xdp_md*, at *offset*.
+ *	Return
+ *		0 on success, or a negative error in case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN) \
 	FN(unspec),		\
@@ -5239,6 +5266,9 @@ union bpf_attr {
 	FN(get_func_arg_cnt),	\
 	FN(get_retval),		\
 	FN(set_retval),		\
+	FN(xdp_get_buff_len),	\
+	FN(xdp_load_bytes),	\
+	FN(xdp_store_bytes),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper

kernel/bpf/arraymap.c

Lines changed: 1 addition & 3 deletions
@@ -837,13 +837,12 @@ static int fd_array_map_delete_elem(struct bpf_map *map, void *key)
 static void *prog_fd_array_get_ptr(struct bpf_map *map,
 				   struct file *map_file, int fd)
 {
-	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	struct bpf_prog *prog = bpf_prog_get(fd);
 
 	if (IS_ERR(prog))
 		return prog;
 
-	if (!bpf_prog_array_compatible(array, prog)) {
+	if (!bpf_prog_map_compatible(map, prog)) {
 		bpf_prog_put(prog);
 		return ERR_PTR(-EINVAL);
 	}
@@ -1071,7 +1070,6 @@ static struct bpf_map *prog_array_map_alloc(union bpf_attr *attr)
 	INIT_WORK(&aux->work, prog_array_map_clear_deferred);
 	INIT_LIST_HEAD(&aux->poke_progs);
 	mutex_init(&aux->poke_mutex);
-	spin_lock_init(&aux->owner.lock);
 
 	map = array_map_alloc(attr);
 	if (IS_ERR(map)) {

kernel/bpf/core.c

Lines changed: 14 additions & 14 deletions
@@ -1829,28 +1829,30 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
 }
 #endif
 
-bool bpf_prog_array_compatible(struct bpf_array *array,
-			       const struct bpf_prog *fp)
+bool bpf_prog_map_compatible(struct bpf_map *map,
+			     const struct bpf_prog *fp)
 {
 	bool ret;
 
 	if (fp->kprobe_override)
 		return false;
 
-	spin_lock(&array->aux->owner.lock);
-
-	if (!array->aux->owner.type) {
+	spin_lock(&map->owner.lock);
+	if (!map->owner.type) {
 		/* There's no owner yet where we could check for
 		 * compatibility.
 		 */
-		array->aux->owner.type  = fp->type;
-		array->aux->owner.jited = fp->jited;
+		map->owner.type  = fp->type;
+		map->owner.jited = fp->jited;
+		map->owner.xdp_has_frags = fp->aux->xdp_has_frags;
 		ret = true;
 	} else {
-		ret = array->aux->owner.type  == fp->type &&
-		      array->aux->owner.jited == fp->jited;
+		ret = map->owner.type  == fp->type &&
+		      map->owner.jited == fp->jited &&
+		      map->owner.xdp_has_frags == fp->aux->xdp_has_frags;
 	}
-	spin_unlock(&array->aux->owner.lock);
+	spin_unlock(&map->owner.lock);
+
 	return ret;
 }
 
@@ -1862,13 +1864,11 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
 	mutex_lock(&aux->used_maps_mutex);
 	for (i = 0; i < aux->used_map_cnt; i++) {
 		struct bpf_map *map = aux->used_maps[i];
-		struct bpf_array *array;
 
-		if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY)
+		if (!map_type_contains_progs(map))
 			continue;
 
-		array = container_of(map, struct bpf_array, map);
-		if (!bpf_prog_array_compatible(array, fp)) {
+		if (!bpf_prog_map_compatible(map, fp)) {
 			ret = -EINVAL;
 			goto out;
 		}
