Bug - Mixed speed broadband only measures the Maximum upload not the download #1374
@DavidACraig1975, since you didn't include the command line you used to run iperf3, it is not clear how you did the test. In general, the default mode of iperf3 is to send data only from the client to the server. If the client is sending over the uplink, then the rate will be limited by the uplink rate. The server data in this case is the amount of data received, not sent. If you want the server to send data to the client, use the `-R` (`--reverse`) option. By the way, note that you are using a very old version of iperf3, and it is recommended to use a newer version.
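To make the direction explicit, here is a minimal sketch of the two separate tests this implies (`<server>` is a placeholder for the server's address):

```
# default mode: the client sends, measuring the client's upload
iperf3 -c <server>

# reverse mode (-R): the server sends, measuring the client's download
iperf3 -c <server> -R
```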
The same problem: `iperf3 -s` and, on the same host, `iperf3 -c 127.0.0.1 --bidir`:

```
Connecting to host 127.0.0.1, port 5201
[ ID][Role] Interval        Transfer     Bitrate         Retr
iperf Done.
ps
```
@AnatoliChe, it is very strange that the RX rate is exactly 10% of the TX rate. Which iperf3 version do you use (`iperf3 -v`)?
```
iperf3 -v
cat /etc/debian_version
```
I tried the same test on my machine, using both iperf3 versions 3.7 and 3.12. On both I get about 30Gbps for both TX and RX. I also briefly searched the code changes since 3.9 and I don't see any change that is related to the bandwidth calculation or display. Can you try running the test once without the `--bidir` option, in each direction?
Sure! I took a new server, a Dell R350 with an Intel(R) Xeon(R) E-2388G CPU @ 3.20GHz and a fresh, clean install of Debian.
```
iperf3 -c 127.0.0.1
[ ID] Interval        Transfer     Bitrate         Retr
iperf Done.

iperf3 -c 127.0.0.1 -R
[ ID] Interval        Transfer     Bitrate         Retr
iperf Done.

[ ID][Role] Interval        Transfer     Bitrate         Retr
iperf Done.

[ ID][Role] Interval        Transfer     Bitrate         Retr
iperf Done.
```
Thanks for the detailed input. I am not familiar enough with the Debian Linux settings to suggest a direct evaluation of this hypothesis. However, the following tests may help:
With `-P2` I have random results ... I believe it's limited by the CPU. When I start 2 servers on different ports and 2 clients simultaneously (see the sketch below), I get ... and `top` shows ... So I'm pretty sure it's a problem with the architecture of iperf3, which uses only one core. In bidir mode it's a problem...
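A minimal sketch of the two-pair test described here; the ports, duration, and loopback address are assumptions, since the actual commands were not shown:

```
# two independent iperf3 servers on different ports
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# two clients started simultaneously, one per server
iperf3 -c 127.0.0.1 -p 5201 -t 30 &
iperf3 -c 127.0.0.1 -p 5202 -t 30 &
wait
```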
I agree this is an issue, and in this case the single CPU performance seems to limit the total throughput, as you suggest. The only option available is to run the server and the client on different CPUs, using the `-A` affinity option. However, I still don't understand why one stream's throughput is only 10% of the other. Even if only one CPU is used, usually the throughput is evenly divided between the streams. Can it depend on the specific CPUs allocated for the server and the client? If this is the case, then using the `-A` option to pin each process to a specific CPU may change the results. Also, if you have any suggestion/guess about the reason for the 10% throughput, this will be very helpful for understanding the iperf3 limitations and suggested usage.
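For example, a sketch of pinning the two processes to different cores with `-A` (the core numbers here are arbitrary assumptions):

```
# server pinned to core 0
iperf3 -s -A 0

# client on the same host pinned to core 1, bidirectional test
iperf3 -c 127.0.0.1 --bidir -A 1
```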
If you take a look at htop, you can notice the server takes 100% of a CPU, the client only 80%.

```
iperf3 -s -A0-4
Server listening on 5201 (test #1)
Accepted connection from 127.0.0.1, port 61082
[ ID][Role] Interval        Transfer     Bitrate         Retr

/tmp/iperf/src# ./iperf3 -c 127.0.0.1 --bidir -A5-8 -t 120
iperf Done.
```

htop:

```
  0[|||||||||||||||||||||||||||||||||||||||||||100.0%]   4[ 0.0%]   8[ 0.0%]  12[ 0.0%]
  PID USER PRI NI VIRT RES SHR S CPU%▽MEM% TIME+ Command
```
I now understand why you suggest that the single CPU allocation may be the issue, and I agree that it seems to be a better explanation than my suggestion about buffer usage. There is one issue with the htop output that I don't understand. Per the iperf3 code and help (and I also tried it), the `-A` option takes a single CPU number (`n`, or `n,m` on the client), not a range ...

It will help if you can run the test again - once with a client option like ... and once with ...

Note that on my machine, with ... And ...
```
cpu 0

iperf3 -s -A 1
iperf Done.

Mem[|||                         883M/126G]   Tasks: 55, 110 thr; 3 running
22640 root 20 0 7252 3680 2996 R 99.3 0.0 3:42.44 /tmp/iperf/src/.libs/iperf3 -s -A 1
```

At the same CPU, server and client:

```
iperf Done.
```
Oops! My mistake. I read the line number as the CPU number ... Thanks for sending the additional results. They show that the 10% throughput issue occurs when the server and client are running on different CPUs. However, although using the same CPU seems to be better, "improving" to exactly two thirds (66.7%) is strange. Again, on my machine the performance is about the same in both directions. I also don't see anything in the iperf3 code that can lead to such a throughput distribution. I suspect that the throughput difference (the exact 10% and two thirds) is related to system settings, although I don't know what they may be. Both the priority and nice values are the same for both processes. Reading the Debian SCHED(7) man page, I don't see any scheduling policy settings that can lead to this behavior. I assume that the ... Currently I don't have further suggestions for how to evaluate this issue, except for evaluating the system settings.
I had the same issue on a fiber "marketed 100/50" Mbps FTTH line (provisioned around 113/56). Two things that worked for me: setting `--fq-rate` to the higher limit together with ..., or selecting a different congestion control algorithm (see the sketch below).
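A sketch of both workarounds, assuming a Linux client with the `fq` qdisc available, an iperf3 build that supports `--fq-rate`, and placeholder values for the server address and the provisioned rate:

```
# pace the sender at roughly the provisioned downlink rate
iperf3 -c <server> --fq-rate 113M

# or select a different congestion control algorithm, if the kernel provides it
iperf3 -c <server> -C bbr
```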
Context - Testing from WFH workstations (1Gbps down / 50Mbps up) to a business network connection (1Gbps up/down)
Version of iperf3: 3.1.3
Hardware: mixed
Operating system (and distribution, if any): Windows
Bug Report
Expected Behavior - I would expect the results to show a download bandwidth of 1000Mbps and an upload of 50Mbps
Actual Behavior - Both up and down seem to be capped at the maximum upload speed of 50Mbps (it works fine between local network devices)
Steps to Reproduce - Run on consumer broadband with different upload and download speeds
Possible Solution - I am guessing it is sending traffic up and pulling the same traffic down, causing the bottleneck at the upload speed. Maybe it would be better to initiate a download from the server and an upload to the server from the client.
Enhancement Request
Current behavior
Desired behavior - to clearly state the upload speed and the download speed; on a normal LAN a mismatch between the two could also indicate a wiring issue.
Implementation notes