UDP uplink absurd values #1352
Comments
UDP is a connectionless protocol, and the sender has no indication of whether a packet reached its destination. Therefore, the sending rate is limited only by the CPU, available memory buffers, etc. Note that while the sending rate was much higher than the link throughput, the received rate is limited by the link; e.g., when 10 Mbps were sent on the uplink, only about 50% were received.
First of all, thanks for the quick response.
Yes, in general TCP is more appropriate for measuring the maximum throughput. However, there are cases where the actual system traffic is UDP, so it is important to measure UDP performance. In those cases, TCP can be used to estimate the achievable throughput, and then UDP can be sent using that estimate as the target rate.
In general the answer is yes. However, note that you may want to send the UDP traffic rate actually expected in the real system, because overloading the network usually causes degradation of the throughput. In such cases, …
Again, thanks for sharing the valuable info.
Yes, I was referring to a value known from some kind of measurement, e.g. TCP. The idea is not to send UDP at a much higher rate than the link can support, as in that case overheads related to memory buffers, link congestion, etc. may degrade the throughput. The exception is when the expected sending rate is known, e.g. for peak usage periods, and you want to test how the system will behave in that case.
Correct. Usually you will end up trying several rates, e.g. 0.9G/1G/1.1G/1.2G, to see what is the maximum rate you can achieve.
The point is that you have to look at the throughput on the receiving side (the iperf3 server without …
There is no accurate or inaccurate measurement here; each one is accurate. You are looking for the optimal case.
This is correct. As you wrote, both the bandwidth and the packet loss rate are important, so the "best" solution may not be the one with the maximum bandwidth, if its packet loss rate is considered too high.
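The selection logic described above (maximize receiver throughput subject to a loss budget) can be sketched in a few lines. The numbers below are made up for illustration, and the loss threshold is an assumed application-specific choice, not something iperf3 defines:

```python
# Pick the "best" offered UDP rate from a sweep of iperf3 runs.
# Hypothetical results: offered rate (bit/s) -> (receiver throughput, loss fraction).
results = {
    0.9e9: (0.89e9, 0.002),
    1.0e9: (0.97e9, 0.015),
    1.1e9: (0.99e9, 0.090),
    1.2e9: (0.98e9, 0.180),
}
MAX_LOSS = 0.05  # assumed acceptable datagram loss for the application

# Keep only rates whose loss stays within budget, then take the one with the
# highest receiver-side throughput (not the highest offered rate).
acceptable = {rate: rx for rate, (rx, loss) in results.items() if loss <= MAX_LOSS}
best = max(acceptable, key=acceptable.get)
print(best)  # 1000000000.0
```

Note that with these (invented) figures the winner is the 1 Gbps run, not the 1.1 Gbps run with the highest receiver throughput, because the latter exceeds the loss budget.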
Thanks for all the valuable info.
Context
Version of iperf3: 3.11
Hardware: amd64
Operating system (and distribution, if any): Ubuntu 18.04
Bug Report
Hi all, I have installed iperf3 on my Ubuntu 18.04 VM and am trying to measure the throughput against a private server that I have installed on an Azure VM.
The VDSL connection that I'm testing is 50Mbps/5Mbps (downlink/uplink)
The reverse downlink measurements for TCP and UDP come close to the real throughput of 50 Mbps, using the following commands:
iperf3 -P4 -c host -p port -R
For TCP:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-5.00 sec 9.48 MBytes 15.9 Mbits/sec 190 sender
[ 5] 0.00-5.00 sec 6.41 MBytes 10.8 Mbits/sec receiver
[ 7] 0.00-5.00 sec 9.78 MBytes 16.4 Mbits/sec 267 sender
[ 7] 0.00-5.00 sec 7.53 MBytes 12.6 Mbits/sec receiver
[ 9] 0.00-5.00 sec 8.76 MBytes 14.7 Mbits/sec 91 sender
[ 9] 0.00-5.00 sec 6.89 MBytes 11.6 Mbits/sec receiver
[ 11] 0.00-5.00 sec 5.79 MBytes 9.71 Mbits/sec 58 sender
[ 11] 0.00-5.00 sec 4.63 MBytes 7.76 Mbits/sec receiver
[SUM] 0.00-5.00 sec 33.8 MBytes 56.7 Mbits/sec 606 sender
[SUM] 0.00-5.00 sec 25.5 MBytes 42.7 Mbits/sec receiver
iperf Done.
iperf3 -u -b0 -c host -p port -R
For UDP:
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 5.68 MBytes 47.6 Mbits/sec 0.149 ms 79986/84063 (95%)
[ 5] 1.00-2.00 sec 5.67 MBytes 47.6 Mbits/sec 0.070 ms 121337/125409 (97%)
[ 5] 2.00-3.00 sec 5.49 MBytes 46.1 Mbits/sec 0.073 ms 138351/142295 (97%)
[ 5] 3.00-4.00 sec 5.25 MBytes 44.0 Mbits/sec 0.055 ms 143479/147248 (97%)
[ 5] 4.00-5.00 sec 5.48 MBytes 46.0 Mbits/sec 0.111 ms 146540/150476 (97%)
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 1.00 GBytes 1.72 Gbits/sec 0.000 ms 0/649491 (0%) sender
[ 5] 0.00-5.00 sec 27.6 MBytes 46.2 Mbits/sec 0.111 ms 629693/649491 (97%) receiver
Even so, the UDP case still shows a lot of lost datagrams.
The uplink measurement works fine for TCP using the above command without the -R flag.
The problematic values occur when I try to measure the uplink with UDP, e.g.:
iperf3 -u -b10m -c host -p port -t5 gives:
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.01 sec 1.18 MBytes 9.76 Mbits/sec 844
[ 5] 1.01-2.00 sec 1.21 MBytes 10.2 Mbits/sec 868
[ 5] 2.00-3.00 sec 1.19 MBytes 10.0 Mbits/sec 857
[ 5] 3.00-4.00 sec 1.19 MBytes 9.99 Mbits/sec 855
[ 5] 4.00-5.00 sec 1.19 MBytes 10.0 Mbits/sec 857
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 5.96 MBytes 10.0 Mbits/sec 0.000 ms 0/4281 (0%) sender
[ 5] 0.00-5.00 sec 3.27 MBytes 5.49 Mbits/sec 0.912 ms 1927/4279 (45%) receiver
Meaning that the uplink (sender) rate is 10 Mbps, which can't be the case since the maximum is 5 Mbps, whereas the receiver value is closer to the real maximum throughput.
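The sender and receiver lines are actually consistent once loss is taken into account: the sender reports the offered rate, and the receiver rate is roughly the offered rate scaled by the fraction of datagrams that survived. A quick check with the figures from the -b10m run above:

```python
# Receiver goodput implied by the sender rate and the datagram loss,
# using the numbers reported in the -b10m run (1927 of 4279 datagrams lost).
sent_mbps = 10.0
lost, total = 1927, 4279
received_mbps = sent_mbps * (1 - lost / total)
print(round(received_mbps, 2))  # 5.5, close to the reported 5.49 Mbit/s
```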
The default command
iperf3 -u -c host -p port -t5
gives:
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 3.00-4.00 sec 128 KBytes 1.05 Mbits/sec 90
[ 5] 4.00-5.00 sec 127 KBytes 1.04 Mbits/sec 89
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 640 KBytes 1.05 Mbits/sec 0.000 ms 0/449 (0%) sender
[ 5] 0.00-5.00 sec 640 KBytes 1.05 Mbits/sec 0.801 ms 0/449 (0%) receiver
which also does not reflect the real uplink throughput.
When I run it with unlimited bandwidth,
iperf3 -u -b0 -c host -p port -t5
it gives:
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.00 sec 32.0 MBytes 268 Mbits/sec 22970
[ 5] 1.00-2.00 sec 48.7 MBytes 409 Mbits/sec 35000
[ 5] 2.00-3.00 sec 50.6 MBytes 424 Mbits/sec 36310
[ 5] 3.00-4.00 sec 55.8 MBytes 469 Mbits/sec 40110
[ 5] 4.00-5.00 sec 60.1 MBytes 504 Mbits/sec 43190
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-5.00 sec 247 MBytes 415 Mbits/sec 0.000 ms 0/177580 (0%) sender
[ 5] 0.00-5.00 sec 3.29 MBytes 5.52 Mbits/sec 1.752 ms 175038/177400 (99%) receiver
which again is absurd on the sender side, while the receiver value seems closer to the actual uplink throughput.
What is the best way to get the correct maximum uplink throughput from a UDP measurement, for every interval, in your experience?
I understand that by trial and error on the bandwidth one may achieve it, but according to the results above this is still not the case.
Also, the lost datagrams have surely played a role in these abnormal values, but it's essential for my app to measure every interval.
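For per-interval measurements it may be easier to run iperf3 with its --json flag and parse the report programmatically instead of scraping the text output. A minimal sketch: the JSON below is a trimmed, hand-made stand-in whose field names follow iperf3's JSON report, but whose values are invented:

```python
import json

# Trimmed, hand-made stand-in for an `iperf3 --json` report; real reports
# carry many more fields ("start", per-stream data, jitter, etc.).
raw = """
{
  "intervals": [
    {"sum": {"start": 0, "end": 1, "bits_per_second": 10.0e6}},
    {"sum": {"start": 1, "end": 2, "bits_per_second": 9.8e6}}
  ],
  "end": {"sum": {"bits_per_second": 9.9e6, "lost_percent": 45.0}}
}
"""
report = json.loads(raw)

# Per-interval bitrates and the end-of-test loss figure.
rates = [iv["sum"]["bits_per_second"] for iv in report["intervals"]]
print(rates)                                 # [10000000.0, 9800000.0]
print(report["end"]["sum"]["lost_percent"])  # 45.0
```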
It may be somehow related to issue #1044.
Thank you for your time.