Load Testing, CPU Cores And Sample ULCL Config #487
-
Hi, I just came across your UPF implementation and would like to clarify the following:
Thanks
-
Hi @infinitydon,
The test results are a little bit outdated, actually. The main reason for the poor performance was that the eBPF XDP program was attached in generic mode; in that mode none of the eBPF/XDP benefits are available. In a pure K8s deployment with veth interfaces, generic mode is the only option. To achieve better performance it is necessary to use at least the native attach mode. We are going to share new performance test results from a bare-metal environment where native mode was used.
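For reference, here is a minimal sketch of what the two attach modes look like with the github.com/cilium/ebpf Go library. The interface name, object file path and program name below are placeholders for illustration, not the actual eUPF loader code:

```go
package main

import (
	"log"
	"net"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Load a compiled XDP object file (placeholder path).
	spec, err := ebpf.LoadCollectionSpec("upf_xdp.o")
	if err != nil {
		log.Fatal(err)
	}
	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		log.Fatal(err)
	}
	defer coll.Close()

	iface, err := net.InterfaceByName("eth0") // placeholder interface
	if err != nil {
		log.Fatal(err)
	}

	// Generic mode (link.XDPGenericMode): the program runs inside the regular
	// kernel stack after the skb is allocated, so the XDP fast path is lost.
	// This is what plain veth interfaces in a vanilla K8s deployment give you.
	//
	// Native/driver mode (link.XDPDriverMode): the program runs in the NIC
	// driver before skb allocation, which is where the performance gain is.
	l, err := link.AttachXDP(link.XDPOptions{
		Program:   coll.Programs["upf_ip_entrypoint"], // placeholder program name
		Interface: iface.Index,
		Flags:     link.XDPDriverMode,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()
}
```

Whether driver mode is accepted depends on the NIC driver; generic mode is the fallback the kernel accepts everywhere, which is why it is the only option on plain veth interfaces.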
Yes. It corresponds to the RX queue count configured on the network interface (port).
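If it helps, a quick way to confirm the RX queue count of an interface is to count the rx-* entries the kernel exposes under sysfs (the interface name below is a placeholder); `ethtool -l <iface>` reports the same numbers:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Count the RX queues the kernel exposes for a given interface.
	// The core/worker count would normally be sized to match this number.
	iface := "eth0" // placeholder
	entries, err := os.ReadDir("/sys/class/net/" + iface + "/queues")
	if err != nil {
		log.Fatal(err)
	}
	rx := 0
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "rx-") {
			rx++
		}
	}
	fmt.Printf("%s has %d RX queues\n", iface, rx)
}
```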
Please check the deployment examples in the repository. --BR Alex
-
Of course, using PCI passthrough or SR-IOV will definitely improve performance, but cloud compatibility/nativeness is reduced in that case. I think using the eBPF native datapath from Cilium/Calico could improve performance as well. With this option it is possible to use XDP in native mode with the veth driver in the container (eBPF XDP programs have to be loaded on both interfaces of the veth pair to use native mode). --BR Alex
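To illustrate that veth requirement, here is a minimal sketch (again using github.com/cilium/ebpf, with placeholder interface names) that builds a trivial XDP_PASS program and attaches it in driver mode to the peer end of a veth pair, so the other end can run the real program in native mode:

```go
package main

import (
	"log"
	"net"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Trivial XDP program that just returns XDP_PASS (2). Native mode on a
	// veth pair needs an XDP program on both ends; this dummy is enough for
	// the peer side.
	pass, err := ebpf.NewProgram(&ebpf.ProgramSpec{
		Type: ebpf.XDP,
		Instructions: asm.Instructions{
			asm.Mov.Imm(asm.R0, 2), // R0 = XDP_PASS
			asm.Return(),
		},
		License: "GPL",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer pass.Close()

	// Placeholder name for the host-side peer of the container's veth interface.
	peer, err := net.InterfaceByName("veth-peer0")
	if err != nil {
		log.Fatal(err)
	}

	l, err := link.AttachXDP(link.XDPOptions{
		Program:   pass,
		Interface: peer.Index,
		Flags:     link.XDPDriverMode,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()

	// The real UPF XDP program would then be attached in driver mode to the
	// container-side interface of the same veth pair.
}
```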
-
BTW, I'll add an issue to our backlog to support the mentioned scenario.