This is an attempt to make BTF maps work in Aya eBPF.
The work on BTF map support is tracked and discussed in:
To make sure that we can build and compare the .debug_info and BTF info of eBPF programs built in Rust and C, the ebpf/ subdirectory has two programs:
- eBPF program written in Rust with Aya.
- eBPF program written in C with libbpf, where we take its .debug_info and BTF info as a reference point.
Then we have four userspace projects which are meant to test every combination of Aya and libbpf, both in userspace and eBPF:
- userspace-libbpf-ebpf-aya - the most important one for us, which we need to make work. It loads the eBPF program written in Aya with libbpf. NOT WORKING CURRENTLY
- userspace-libbpf-ebpf-libbpf - a reference point, using libbpf on both sides, which always works.
- userspace-aya-ebpf-aya - Aya used on both sides.
- userspace-aya-ebpf-libbpf - Aya used in userspace to load a correct libbpf program. It works, but there is room for improvement, like handling section / program names.
- Install a stable Rust toolchain:
rustup install stable
- Install a nightly Rust toolchain:
rustup install nightly
You need to use this fork and branch of LLVM.
After you clone it somewhere and enter its directory, build LLVM with the following commands:
WARNING! This debug build requires at least 32 GB of RAM to finish in a reasonable time.
mkdir build
cd build
CC=clang CXX=clang++ cmake -DCMAKE_BUILD_TYPE=Debug -DLLVM_PARALLEL_LINK_JOBS=1 -DLLVM_ENABLE_LLD=1 -DLLVM_BUILD_LLVM_DYLIB=1 -GNinja ../llvm/
ninja
LLVM_PARALLEL_LINK_JOBS=1 ensures that only one link job runs at a time. Using lld and clang(++) makes the build faster.
If you encounter any problems with the OOM killer or your machine becoming unusable, you can trim down the number of ninja threads:
ninja -j[number_of_threads]
It's also helpful to resize your swap to match your RAM size and to add -l 1 to the command above to limit the load caused by expensive link jobs. That way the build stays parallel while linking is effectively sequential.
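For example, with an arbitrary thread count:
ninja -j4 -l 1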
If you still have problems or have less than 64 GB of RAM, try a release build:
CC=clang CXX=clang++ cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_PARALLEL_LINK_JOBS=1 -DLLVM_ENABLE_LLD=1 -GNinja ../llvm/
ninja
You need to use this fork and branch of bpf-linker.
After cloning and entering the directory, install bpf-linker with the system-llvm feature and point it at the patched LLVM build with the LLVM_SYS_160_PREFIX variable:
LLVM_SYS_160_PREFIX=[path_to_your_llvm_repo]/build cargo install --path . --no-default-features --features system-llvm bpf-linker
For example:
LLVM_SYS_160_PREFIX=/home/vadorovsky/repos/llvm-project/build cargo install --path . --no-default-features --features system-llvm bpf-linker
The main difference between this project and all the current Aya examples is that it generates full debug info for the eBPF crate in all profiles, which is necessary for generating BTF. So please note the debug = 2 option in Cargo.toml in all profiles (sketched below).
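A minimal sketch of how this might look in the eBPF crate's Cargo.toml (the real file may set additional options, e.g. opt-level):

[profile.dev]
debug = 2

[profile.release]
debug = 2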
To build both eBPF programs (Aya and libbpf), use:
cargo xtask build-ebpf
The Aya eBPF object will be available as ./target/bpfel-unknown-none/debug/fork.
The libbpf eBPF object will be available as ./ebpf/libbpf/fork.bpf.o.
To perform a release build you can use the --release flag.
You may also change the target architecture with the --target flag.
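For example (the alternative target triple below is only an illustration):
cargo xtask build-ebpf --release
cargo xtask build-ebpf --target bpfeb-unknown-none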
You can build only the libbpf eBPF program with:
cd ebpf/libbpf
make
$ readelf -S ./target/bpfel-unknown-none/debug/fork
There are 26 section headers, starting at offset 0x22710:
Section Headers:
[Nr] Name Type Address Offset
Size EntSize Flags Link Info Align
[...]
[ 5] .maps PROGBITS 0000000000000000 000001c8
[...]
[ 9] .debug_info PROGBITS 0000000000000000 0000092b
0000000000004e99 0000000000000000 0 0 1
[...]
[17] .BTF PROGBITS 0000000000000000 000174c0
0000000000000697 0000000000000000 0 0 4
[18] .rel.BTF REL 0000000000000000 00022338
0000000000000010 0000000000000010 I 25 17 8
[19] .BTF.ext PROGBITS 0000000000000000 00017b58
0000000000000220 0000000000000000 0 0 4
[20] .rel.BTF.ext REL 0000000000000000 00022348
00000000000001f0 0000000000000010 I 25 19 8
[21] .debug_frame PROGBITS 0000000000000000 00017d78
0000000000000058 0000000000000000 0 0 8
[...]
If those sections aren't there, it means that something went wrong when building LLVM and/or bpf-linker.
You can also dump BTF info with:
$ bpftool btf dump file ./target/bpfel-unknown-none/debug/fork
[1] PTR '(anon)' type_id=3
[2] INT 'i32' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
[3] ARRAY '(anon)' type_id=2 index_type_id=4 nr_elems=1
[4] INT '__ARRAY_SIZE_TYPE__' size=4 bits_offset=0 nr_bits=32 encoding=(none)
[5] PTR '(anon)' type_id=2
[6] PTR '(anon)' type_id=7
[7] ARRAY '(anon)' type_id=2 index_type_id=4 nr_elems=1024
[8] PTR '(anon)' type_id=9
[9] ARRAY '(anon)' type_id=2 index_type_id=4 nr_elems=0
[10] STRUCT '(anon)' size=40 vlen=5
'type' type_id=1 bits_offset=0
'key' type_id=5 bits_offset=64
'value' type_id=5 bits_offset=128
'max_entries' type_id=6 bits_offset=192
[12] PTR '(anon)' type_id=13
[13] ENUM 'c_void' encoding=UNSIGNED size=1 vlen=2
'__variant1' val=0
'__variant2' val=1
[14] FUNC_PROTO '(anon)' ret_type_id=15 vlen=1
'ctx' type_id=12
[15] INT 'u32' size=4 bits_offset=0 nr_bits=32 encoding=(none)
[16] FUNC 'fork' type_id=14 linkage=global
[17] PTR '(anon)' type_id=18
[18] INT 'u8' size=1 bits_offset=0 nr_bits=8 encoding=(none)
[19] INT 'usize' size=8 bits_offset=0 nr_bits=64 encoding=(none)
[20] FUNC_PROTO '(anon)' ret_type_id=0 vlen=3
's' type_id=17
'c' type_id=2
'n' type_id=19
[21] FUNC 'memset' type_id=20 linkage=global
[22] FUNC_PROTO '(anon)' ret_type_id=0 vlen=3
'dest' type_id=17
'src' type_id=17
'n' type_id=19
[23] FUNC 'memcpy' type_id=22 linkage=global
[24] DATASEC '.maps' size=0 vlen=1
type_id=11 offset=0 size=40 (VAR 'PID_MAP')
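For comparison, the anonymous STRUCT with type/key/value/max_entries members and the .maps DATASEC above follow the libbpf convention for BTF-defined maps. A minimal sketch of such a definition in C; the map type, key/value types and max_entries here are placeholders, not necessarily what the programs in this repo use:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* BTF-defined map: this field layout is what produces the anonymous
 * STRUCT and the VAR entry in the .maps DATASEC of the BTF dump. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH); /* placeholder map type */
    __type(key, __u32);              /* placeholder key type */
    __type(value, __u32);            /* placeholder value type */
    __uint(max_entries, 1024);
} PID_MAP SEC(".maps");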
Part of the work is to also run similar checks on the libbpf eBPF program, like:
readelf -S ./ebpf/fork-ebpf-libbpf/fork.bpf.o
bpftool btf dump file ./ebpf/fork-ebpf-libbpf/fork.bpf.o
You can build all the userspace crates with:
cargo build
Note that the eBPF programs need to be compiled first.
For convenience, we also have an xtask command, run, which builds and runs a requested combination of userspace and eBPF libraries in one step.
By default, without additional arguments, it runs with libbpf as the userspace lib and Aya as the eBPF lib:
RUST_LOG=info cargo xtask run
That command is equivalent to:
RUST_LOG=info cargo xtask run --ebpf-lib aya --userspace-lib libbpf
But you can request any other combination! For example:
RUST_LOG=info cargo xtask run --ebpf-lib aya --userspace-lib aya
RUST_LOG=info cargo xtask run --ebpf-lib libbpf --userspace-lib aya
RUST_LOG=info cargo xtask run --ebpf-lib libbpf --userspace-lib libbpf
Both eBPF programs (Aya and libbpf) use bpf_printk, so you can check the debug messages with:
sudo bpftool prog tracelog
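For reference, here is a minimal sketch of how a libbpf program can emit such a message with bpf_printk; the attach point and program body are assumptions for illustration and differ in detail from the programs in ebpf/:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

/* Attach point assumed for illustration. */
SEC("tracepoint/sched/sched_process_fork")
int fork(void *ctx)
{
    /* Writes to the kernel trace pipe; visible via bpftool prog tracelog. */
    bpf_printk("fork");
    return 0;
}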