
UDP uplink absurd values #1352

Closed
dk1301 opened this issue Jun 7, 2022 · 6 comments

@dk1301

dk1301 commented Jun 7, 2022

Context

  • Version of iperf3: 3.11

  • Hardware: amd64

  • Operating system (and distribution, if any): Ubuntu 18.04

Bug Report

Hi all, I have installed iperf3 on my Ubuntu 18.04 VM and I am trying to measure throughput against a private server that I have installed on an Azure VM.

The VDSL connection that I'm testing is 50 Mbps / 5 Mbps (downlink/uplink).

The reverse downlink measurement for TCP and UDP seems to measure near the real throughput of 50 Mbps, using the following commands:

iperf3 -P4 -c host -p port -R for TCP

[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-5.00 sec 9.48 MBytes 15.9 Mbits/sec 190 sender
[ 5] 0.00-5.00 sec 6.41 MBytes 10.8 Mbits/sec receiver
[ 7] 0.00-5.00 sec 9.78 MBytes 16.4 Mbits/sec 267 sender
[ 7] 0.00-5.00 sec 7.53 MBytes 12.6 Mbits/sec receiver
[ 9] 0.00-5.00 sec 8.76 MBytes 14.7 Mbits/sec 91 sender
[ 9] 0.00-5.00 sec 6.89 MBytes 11.6 Mbits/sec receiver
[ 11] 0.00-5.00 sec 5.79 MBytes 9.71 Mbits/sec 58 sender
[ 11] 0.00-5.00 sec 4.63 MBytes 7.76 Mbits/sec receiver
[SUM] 0.00-5.00 sec 33.8 MBytes 56.7 Mbits/sec 606 sender
[SUM] 0.00-5.00 sec 25.5 MBytes 42.7 Mbits/sec receiver

iperf Done.

iperf3 -u -b0 -c host -p port -R for UDP

[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 5.68 MBytes 47.6 Mbits/sec 0.149 ms 79986/84063 (95%)
[ 5] 1.00-2.00 sec 5.67 MBytes 47.6 Mbits/sec 0.070 ms 121337/125409 (97%)
[ 5] 2.00-3.00 sec 5.49 MBytes 46.1 Mbits/sec 0.073 ms 138351/142295 (97%)
[ 5] 3.00-4.00 sec 5.25 MBytes 44.0 Mbits/sec 0.055 ms 143479/147248 (97%)
[ 5] 4.00-5.00 sec 5.48 MBytes 46.0 Mbits/sec 0.111 ms 146540/150476 (97%)


[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 1.00 GBytes 1.72 Gbits/sec 0.000 ms 0/649491 (0%) sender
[ 5] 0.00-5.00 sec 27.6 MBytes 46.2 Mbits/sec 0.111 ms 629693/649491 (97%) receiver

Even so, in the UDP case there are still a lot of lost datagrams.

The uplink measurement works fine for TCP using the above command without the -R flag.

The problematic values occur when I try to measure the uplink with UDP, e.g.:

iperf3 -u -b10m -c host -p port -t5 gives
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.01 sec 1.18 MBytes 9.76 Mbits/sec 844
[ 5] 1.01-2.00 sec 1.21 MBytes 10.2 Mbits/sec 868
[ 5] 2.00-3.00 sec 1.19 MBytes 10.0 Mbits/sec 857
[ 5] 3.00-4.00 sec 1.19 MBytes 9.99 Mbits/sec 855
[ 5] 4.00-5.00 sec 1.19 MBytes 10.0 Mbits/sec 857


[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 5.96 MBytes 10.0 Mbits/sec 0.000 ms 0/4281 (0%) sender
[ 5] 0.00-5.00 sec 3.27 MBytes 5.49 Mbits/sec 0.912 ms 1927/4279 (45%) receiver

Meaning that the uplink (sender) rate is 10 Mbps, which can't be the case since the max is 5 Mbps, whereas the receiver value is closer to the maximum throughput.

The default command iperf3 -u -c host -p port -t5 gives

[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 3.00-4.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 4.00-5.00 sec 127 KBytes 1.04 Mbits/sec 89


[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 640 KBytes 1.05 Mbits/sec 0.000 ms 0/449 (0%) sender
[ 5] 0.00-5.00 sec 640 KBytes 1.05 Mbits/sec 0.801 ms 0/449 (0%) receiver

which again does not reflect the link's maximum throughput.

When I run it with unlimited bandwidth

iperf3 -u -b0 -c host -p port -t5 it gives

[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.00 sec 32.0 MBytes 268 Mbits/sec 22970
[ 5] 1.00-2.00 sec 48.7 MBytes 409 Mbits/sec 35000
[ 5] 2.00-3.00 sec 50.6 MBytes 424 Mbits/sec 36310
[ 5] 3.00-4.00 sec 55.8 MBytes 469 Mbits/sec 40110
[ 5] 4.00-5.00 sec 60.1 MBytes 504 Mbits/sec 43190


[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 247 MBytes 415 Mbits/sec 0.000 ms 0/177580 (0%) sender
[ 5] 0.00-5.00 sec 3.29 MBytes 5.52 Mbits/sec 1.752 ms 175038/177400 (99%) receiver

which again is absurd on the sender side, while the receiver value seems closer to the actual uplink throughput.

What is the best way to get the correct maximum uplink throughput from a UDP measurement for every interval, in your experience?
I understand that one may achieve it by trial and error on the bandwidth, but according to the above that is still not the case.
Also, the lost datagrams have surely played a role in these abnormal values, but it's essential for my app to measure every interval.

It may be somehow related to issue #1044.

Thank you for your time.

@davidBar-On
Contributor

UDP is a connectionless protocol and the sender does not have any indication of whether a packet reached its destination. Therefore, the rate of sending UDP packets is limited only by the CPU, the available memory buffers, etc.

Note that while the sending rate was much higher than the link throughput, the received rate is limited by the link. E.g. when 10 Mbps were sent on the uplink, only about 50% were received. For iperf3 -u -b0 -c host -p port -R (UDP), the client (receiver) shows that less than 50 Mbps were received, but the server's (sender's) sending rate was 1.72 Gbits/sec and about 97% of the packets were lost.
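
One practical way to see the receiver-side numbers in more detail (a sketch; host, port, and the 5M rate are placeholders carried over from the commands above) is to ask the server to send its own report back to the client:

# For an uplink test the server is the receiver, so its report is the
# meaningful one; --get-server-output appends it to the client's output.
iperf3 -u -b 5M -t 5 -c host -p port --get-server-output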

@dk1301
Author

dk1301 commented Jun 9, 2022

First of all, thanks for the quick response.
So if I understand correctly, in order to measure accurately I need to set -b near the physical limit of the link?
In my case that is 50M for the downlink and 5M for the uplink. Or is there another, more generic way to cover this, for example taking the receiver value instead?
For example, if I need to measure a cellular network that may consist of LTE, 5G NSA and 5G SA technologies, and that may change from time to time based on the network a module is attached to, should I set the -b parameter to the physical limit of the link each time in order to get an accurate throughput measurement?
Or, in the end, is UDP not appropriate for maximum throughput measurement, and should I use TCP instead?

@davidBar-On
Contributor

Or, in the end, is UDP not appropriate for maximum throughput measurement, and should I use TCP instead?

Yes, in general TCP is more appropriate for measuring the maximum throughput. However, there are cases where the actual system data is UDP, so it is important to measure the UDP performance. In these cases, TCP can be used to estimate the achievable throughput, and then UDP can be sent using -b to limit the throughput (as you suggested) to values near the throughput measured by TCP.
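
A minimal sketch of that two-step procedure (host and port are placeholders, and the 45M cap is a made-up value standing in for whatever the TCP test reports):

# Step 1: estimate the achievable downlink rate with TCP.
iperf3 -c host -p port -R -t 5
# Step 2: rerun with UDP, capping -b near the TCP result (here ~45 Mbps).
iperf3 -u -b 45M -c host -p port -R -t 5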

.... cellular network that may consist of LTE, 5G NSA and 5G SA technologies, and that may change from time to time based on the network a module is attached to, should I set the -b parameter to the physical limit of the link each time in order to get an accurate throughput measurement?

In general the answer is yes. However, note that you may want to send at the UDP traffic rate actually expected in the real system. This is because overloading the network usually causes degradation of the throughput. In such cases, -b is set to the actual or estimated UDP rate in the system.

@dk1301
Author

dk1301 commented Jun 9, 2022

Again, thanks for sharing the valuable info.
One last thing: when you mention "In such cases, -b is set to the actual or estimated UDP rate in the system", do you refer to the physical limit of the link that is known beforehand, to a previous TCP measurement, or to something else that I'm missing?
If I understand correctly, let's say I want to measure the throughput of a 5G SA network with 1 Gbps uplink capability, and I've measured the TCP uplink, for example, which is around that value or less.
Since TCP can be less due to overload, I then cross-check the above with UDP and -b 1G. And then I should take into consideration the lost packets, meaning that the most accurate measurement is the one with the maximum possible bandwidth closer to 1 Gbps and the least possible packet loss?
Is that the case, or am I missing something?

@davidBar-On
Contributor

One last thing: when you mention "In such cases, -b is set to the actual or estimated UDP rate in the system", do you refer to the physical limit of the link that is known beforehand, to a previous TCP measurement, or to something else that I'm missing?

Yes, I was referring to a value known from some kind of measurement, e.g. TCP. The idea is not to send UDP at a much higher rate than the link can support, since in that case overheads related to memory buffers, link congestion, etc. may degrade the throughput. The exception is when the expected sending rate is known, e.g. in peak usage periods, and you want to test how the system will behave in such a case.

Since TCP can be less due to overload, I then cross-check the above with UDP and -b 1G.

Correct. Usually you will end up trying several rates, e.g. 0.9G/1G/1.1G/1.2G, to find the maximum rate you can achieve.
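
Such a sweep can be scripted; a rough sketch (rates, host, and port are placeholders):

# Try several offered rates around the expected capacity and compare
# the receiver-side summaries afterwards.
for rate in 900M 1000M 1100M 1200M; do
    iperf3 -u -b "$rate" -t 5 -c host -p port
done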

And then I should take into consideration the lost packets

The point is that you have to look at the throughput on the receiving side (the iperf3 server when -R is not used, and the client when -R is used). This is the throughput you actually got. On the receiving side you also see the number/percentage of lost packets.
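
If those receiver-side numbers need to be consumed programmatically (e.g. for the per-interval app mentioned above), one option is iperf3's JSON output; a sketch, assuming the end.sum summary layout that iperf3 3.x uses for UDP (host and port are placeholders):

# -J emits JSON; the final UDP summary, including receiver-side loss,
# is assumed here to live under .end.sum (iperf3 3.x layout).
iperf3 -u -b 5M -t 5 -c host -p port -J | jq '.end.sum | {bits_per_second, lost_percent, jitter_ms}'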

meaning that the most accurate measurement

There is no accurate or inaccurate measurement here; each one is accurate. You are looking for the optimal case.

is the one with the maximum possible bandwidth closer to 1Gbps and the least possible packet loss ?

This is correct. As you wrote, both the bandwidth and the packet loss rate are important, so the "best" solution may not be the one with the maximum bandwidth, if its packet loss rate is considered too high.
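
For illustration with made-up numbers: sending at 1.1 Gbps with 15% loss delivers about 1.1 × 0.85 ≈ 0.94 Gbps to the receiver, while sending at 1.0 Gbps with only 5% loss delivers about 0.95 Gbps, so the lower offered rate would usually be the better operating point.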

@dk1301
Author

dk1301 commented Jun 14, 2022

Thanks for all the valuable info.

dk1301 closed this as completed Jun 14, 2022