
Why does using the -P parameter in a container give different results? #1588

Open
tanberBro opened this issue Oct 27, 2023 · 5 comments

@tanberBro

Context

  • Version of iperf3: 3.0.7

  • Operating system (and distribution, if any): CentOS 7, kernel 3.10, containerd

The Phenomenon

Between containers on different nodes (connected via VXLAN), running the test with and without the -P parameter gives different results. The test results are as follows:

  • Without the -P parameter:

server:

iperf3 -s -p 5001 -i1
-----------------------------------------------------------
Server listening on 5001
-----------------------------------------------------------
Accepted connection from 10.244.4.173, port 34416
[  5] local 10.244.3.53 port 5001 connected to 10.244.4.173 port 34418
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   605 MBytes  5.07 Gbits/sec
[  5]   1.00-2.00   sec   624 MBytes  5.24 Gbits/sec
[  5]   2.00-3.00   sec   713 MBytes  5.98 Gbits/sec
[  5]   3.00-4.00   sec   798 MBytes  6.70 Gbits/sec
[  5]   4.00-5.00   sec   756 MBytes  6.34 Gbits/sec
[  5]   5.00-5.04   sec  27.7 MBytes  6.60 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-5.04   sec  3.44 GBytes  5.87 Gbits/sec  3155             sender
[  5]   0.00-5.04   sec  3.44 GBytes  5.87 Gbits/sec                  receiver

client:

iperf3 -c 10.244.3.53 -p5001 -t5
Connecting to host 10.244.3.53, port 5001
[  4] local 10.244.4.173 port 34418 connected to 10.244.3.53 port 5001
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   626 MBytes  5.25 Gbits/sec  634    343 KBytes
[  4]   1.00-2.00   sec   627 MBytes  5.26 Gbits/sec  725    442 KBytes
[  4]   2.00-3.00   sec   727 MBytes  6.10 Gbits/sec  371    419 KBytes
[  4]   3.00-4.00   sec   798 MBytes  6.69 Gbits/sec  741    388 KBytes
[  4]   4.00-5.00   sec   749 MBytes  6.28 Gbits/sec  684    456 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-5.00   sec  3.44 GBytes  5.92 Gbits/sec  3155             sender
[  4]   0.00-5.00   sec  3.44 GBytes  5.91 Gbits/sec                  receiver
  • With the -P parameter:

server:

iperf3 -s -p 5001 -i1
...
...
[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-5.02   sec  1013 MBytes  1.69 Gbits/sec  303             sender
[  5]   0.00-5.02   sec  1008 MBytes  1.68 Gbits/sec                  receiver
[  7]   0.00-5.02   sec  1020 MBytes  1.70 Gbits/sec  471             sender
[  7]   0.00-5.02   sec  1015 MBytes  1.70 Gbits/sec                  receiver
[  9]   0.00-5.02   sec   996 MBytes  1.66 Gbits/sec  183             sender
[  9]   0.00-5.02   sec   993 MBytes  1.66 Gbits/sec                  receiver
[ 11]   0.00-5.02   sec   987 MBytes  1.65 Gbits/sec  317             sender
[ 11]   0.00-5.02   sec   984 MBytes  1.64 Gbits/sec                  receiver
[ 13]   0.00-5.02   sec  1004 MBytes  1.68 Gbits/sec  303             sender
[ 13]   0.00-5.02   sec  1001 MBytes  1.67 Gbits/sec                  receiver
[SUM]   0.00-5.02   sec  4.90 GBytes  8.39 Gbits/sec  1577             sender
[SUM]   0.00-5.02   sec  4.88 GBytes  8.36 Gbits/sec                  receiver

client:

iperf3 -c 10.244.3.53 -p5001 -t5 -P5
...
...
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-5.00   sec  1013 MBytes  1.70 Gbits/sec  303             sender
[  4]   0.00-5.00   sec  1008 MBytes  1.69 Gbits/sec                  receiver
[  6]   0.00-5.00   sec  1020 MBytes  1.71 Gbits/sec  471             sender
[  6]   0.00-5.00   sec  1015 MBytes  1.70 Gbits/sec                  receiver
[  8]   0.00-5.00   sec   996 MBytes  1.67 Gbits/sec  183             sender
[  8]   0.00-5.00   sec   993 MBytes  1.67 Gbits/sec                  receiver
[ 10]   0.00-5.00   sec   987 MBytes  1.66 Gbits/sec  317             sender
[ 10]   0.00-5.00   sec   984 MBytes  1.65 Gbits/sec                  receiver
[ 12]   0.00-5.00   sec  1004 MBytes  1.68 Gbits/sec  303             sender
[ 12]   0.00-5.00   sec  1001 MBytes  1.68 Gbits/sec                  receiver
[SUM]   0.00-5.00   sec  4.90 GBytes  8.42 Gbits/sec  1577             sender
[SUM]   0.00-5.00   sec  4.88 GBytes  8.39 Gbits/sec                  receiver

My Questions

  • Why do tests with and without the -P parameter give different results?
  • Should I use the -P parameter when testing bandwidth? The result with -P looks closer to the actual bandwidth (10G).
  • Why is it possible to reach a bandwidth close to 10G on bare metal without using the -P parameter?
@bmah888
Contributor

bmah888 commented Oct 27, 2023

This is a pretty old version of iperf3 (3.0.7, which is about 9 years old), the newest version is 3.16. In between, we've had a lot of bug fixes and enhancements.

Regardless...you might be in a situation where the maximum window size is limiting your TCP performance. Maybe try your "without -P" tests with various values for the -w parameter (maybe 64K, 128K, 256K, etc.)? Note that Linux might automatically do some socket buffer tuning, so this might not have any effect. Also we see this issue more on long-latency paths, not usually between two VMs on the same hypervisor.
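
For illustration, a sweep like the one suggested above might look like this (reusing the address and port from the earlier tests):

iperf3 -c 10.244.3.53 -p 5001 -t 5 -w 64K
iperf3 -c 10.244.3.53 -p 5001 -t 5 -w 128K
iperf3 -c 10.244.3.53 -p 5001 -t 5 -w 256K

Here -w requests a specific socket buffer (window) size; as noted, Linux autotuning may limit or override the requested value.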

@tanberBro
Author

> This is a pretty old version of iperf3 (3.0.7, which is about 9 years old), the newest version is 3.16. In between, we've had a lot of bug fixes and enhancements.
>
> Regardless...you might be in a situation where the maximum window size is limiting your TCP performance. Maybe try your "without -P" tests with various values for the -w parameter (maybe 64K, 128K, 256K, etc.)? Note that Linux might automatically do some socket buffer tuning, so this might not have any effect. Also we see this issue more on long-latency paths, not usually between two VMs on the same hypervisor.

Using the -w parameter results in lower bandwidth; the window ends up capped at a maximum of 416K, and the test cannot reach the bandwidth seen when the -w parameter is not used.

However, if I use only the -A parameter, without -P or -w, I get the same bandwidth as with the -P parameter, which looks like the actual bandwidth.

What is the principle behind the -A parameter?

@davidBar-On
Contributor

> Using the -w parameter results in lower bandwidth; the window ends up capped at a maximum of 416K, and the test cannot reach the bandwidth seen when the -w parameter is not used.

Not sure if this will help, but did you try using a newer version of iperf3? I believe it is not possible to tell whether the behavior you see is what it should be, or whether it is caused by using the old version 3.0.7.

> What is the principle behind the -A parameter?

-A sets the CPU core that iperf3 will use. I believe that the help message for the option is wrong, and it should be something like:

-A, --affinity n[,m]      set the CPU affinity to core number n (the core the process will use).
                          m is a client-only option: set the server's core number for this test.
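
For illustration, assuming the addresses from the tests above, pinning the client to core 2 and asking the server to use core 3 for the test would look like:

iperf3 -c 10.244.3.53 -p 5001 -t 5 -A 2,3

On the server side, iperf3 -s -p 5001 -A 2 would pin the server process to core 2 directly.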

@tanberBro
Author

Thank you very much!

So I think the issue is that the container shares CPUs, which affects the test results: each request may be handled by a different CPU, resulting in lower bandwidth.

With the -A parameter, binding iperf3 to a specific CPU, the bandwidth rises.
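
For reference, one way to confirm which cores a running iperf3 process is allowed to use is taskset (assuming it is available inside the container image):

taskset -cp $(pidof iperf3)

This prints the current CPU affinity list of the process, so you can check whether the -A binding took effect.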

@tanberBro
Author

@davidBar-On
