Subsequent iperf tcp runs yield much less throughput #134

Closed
bmah888 opened this issue Feb 28, 2014 · 5 comments

@bmah888
Contributor

bmah888 commented Feb 28, 2014

From nsrirao on January 16, 2014 13:59:20

What steps will reproduce the problem?
1. Run iperf3 -s -V on srv1; run iperf3 -c srv1 -V on srv2. (Note: srv2 has just been rebooted.)
2. Note the TCP throughput. On my setup (10Gb link) it shows about 9.41Gbps after the first test.
3. Run the client again. The throughput shown is about 300Mbps.
4. Run the client again and see that the throughput does not change; it is still around 300Mbps.

What is the expected output? What do you see instead?
Expected to see the pipe being filled - about 9.41Gbps every time.

What version of the product are you using? On what operating system?
3.0.1

Please provide any additional information below.
iperf version 3.0.1 (10 January 2014)
Linux srv2 3.8.0-35-generic #50~precise1-Ubuntu SMP Wed Dec 4 17:25:51 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
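
For reference, the reproduction boils down to these shell commands (a sketch of the steps above; srv1 and srv2 are the hostnames from this report):

iperf3 -s -V         # on srv1: start the server
iperf3 -c srv1 -V    # on srv2 (freshly rebooted): first run, about 9.41 Gbps reported
iperf3 -c srv1 -V    # on srv2: repeat; subsequent runs reportedly drop to about 300 Mbps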

Original issue: http://code.google.com/p/iperf/issues/detail?id=134

@bmah888
Contributor Author

bmah888 commented Feb 28, 2014

From bmah@es.net on February 03, 2014 14:20:35

Hrm. Are you still seeing this? I haven't seen anything like this personally, and I can't figure out why you would get different results on runs after the first one. I've done tests on 10G and 40G links on our testbed at ESnet and haven't observed this.

Out of curiosity what 10G NICs are you using?

@ezaton

ezaton commented Mar 5, 2014

I'm seeing a somewhat similar issue. In my case, running iperf3 with the -V (verbose) flag yields better results. See here:
]# ./iperf3 -c 192.168.10.8 ; ./iperf3 -c 192.168.10.8 -V
Connecting to host 192.168.10.8, port 5201
[ 4] local 192.168.10.1 port 11353 connected to 192.168.10.8 port 5201
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-1.00 sec 529 MBytes 4.44 Gbits/sec 0
[ 4] 1.00-2.00 sec 529 MBytes 4.44 Gbits/sec 0
[ 4] 2.00-3.00 sec 529 MBytes 4.44 Gbits/sec 0
[ 4] 3.00-4.00 sec 528 MBytes 4.43 Gbits/sec 0
[ 4] 4.00-5.00 sec 529 MBytes 4.44 Gbits/sec 0
[ 4] 5.00-6.00 sec 527 MBytes 4.42 Gbits/sec 0
[ 4] 6.00-7.00 sec 526 MBytes 4.41 Gbits/sec 0
[ 4] 7.00-8.00 sec 528 MBytes 4.43 Gbits/sec 0
[ 4] 8.00-9.00 sec 526 MBytes 4.41 Gbits/sec 0
[ 4] 9.00-10.00 sec 528 MBytes 4.43 Gbits/sec 0


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 5.16 GBytes 4.43 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 5.16 GBytes 4.43 Gbits/sec receiver

iperf Done.
iperf version 3.0.1 (10 January 2014)
Linux mdw.mgmt 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
Time: Wed, 05 Mar 2014 22:53:12 GMT
Connecting to host 192.168.10.8, port 5201
Cookie: mdw.mgmt.1394059992.754207.0652fb654
TCP MSS: 1448 (default)
[ 4] local 192.168.10.1 port 11355 connected to 192.168.10.8 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-1.00 sec 883 MBytes 7.41 Gbits/sec 0
[ 4] 1.00-2.00 sec 883 MBytes 7.40 Gbits/sec 0
[ 4] 2.00-3.00 sec 878 MBytes 7.37 Gbits/sec 0
[ 4] 3.00-4.00 sec 876 MBytes 7.35 Gbits/sec 0
[ 4] 4.00-5.00 sec 877 MBytes 7.36 Gbits/sec 0
[ 4] 5.00-6.00 sec 877 MBytes 7.36 Gbits/sec 0
[ 4] 6.00-7.00 sec 878 MBytes 7.36 Gbits/sec 0
[ 4] 7.00-8.00 sec 877 MBytes 7.35 Gbits/sec 0
[ 4] 8.00-9.00 sec 879 MBytes 7.38 Gbits/sec 0
[ 4] 9.00-10.00 sec 877 MBytes 7.35 Gbits/sec 0


Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 8.58 GBytes 7.37 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 8.58 GBytes 7.37 Gbits/sec receiver
CPU Utilization: local/sender 99.4% (0.4%u/99.1%s), remote/receiver 49.1% (1.8%u/47.3%s)

iperf Done.

We're using Intel ixgbe 10GbE cards.
I'd love to assist with finding a solution.

@bmah888
Contributor Author

bmah888 commented May 12, 2014

We noticed in testing with 40Gbps Mellanox interfaces on some servers on the ESnet 100G testbed that performance can vary widely depending on the CPU core that iperf3 happens to find itself running on. Since the iperf3 process is assigned to a processor more or less at random, the throughput appears to vary widely and also at random. This looks to be very similar. (Issue #55)
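
If CPU placement is the culprit, pinning iperf3 to a fixed core should make the results repeatable. A sketch, assuming a build with the -A/--affinity option (Linux/FreeBSD); taskset is the generic alternative, and srv1 stands in for the server host:

iperf3 -c srv1 -A 2          # pin the client process to core 2
iperf3 -c srv1 -A 2,3        # pin the client to core 2 and the server to core 3 for this test
taskset -c 2 iperf3 -c srv1  # equivalent pinning without iperf3's -A option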

With respect to the most recent comment on this issue, correlation does not equate to causality, so a single test with or without the -V flag isn't quite enough evidence to prove that -V makes iperf3 faster.
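
A more convincing comparison would be several back-to-back trials in each mode, looking only at the sender summary lines. A rough sketch (srv1 stands in for the server address):

for i in $(seq 1 10); do iperf3 -c srv1    | grep sender; done   # 10 trials without -V
for i in $(seq 1 10); do iperf3 -c srv1 -V | grep sender; done   # 10 trials with -V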

@Satboy

Satboy commented Apr 10, 2015

I'm having the same problem. I'm using iperf to test a satellite connection which is supposed to give 4 Mb/s, and when I run iperf more than once it doesn't show me the same bandwidth. I know it could be an environmental problem, but if that's not the case, what could it be?

@joyceliuExaai

Hi experts,

I am not sure whether there has been any improvement, but I am now hitting the same issue.
My system:
CentOS 7.4
InfiniBand 100Gbps

I used iperf -P 8.

The output is different for every iteration. Is there any idea how to fix it? Looking forward to hearing from you!

Thanks,
Joyce
