
Latency Increase as Connections Increase #654

Closed
dave-fl opened this issue Mar 21, 2019 · 16 comments

Labels
area/perf This belongs to the performance theme or subset
Milestone
0.8.7.RELEASE

Comments

@dave-fl

dave-fl commented Mar 21, 2019

I am seeing that latency at the 75th, 90th, and 99th percentiles is higher compared to some other Netty-based frameworks, with total throughput here about 30% lower. This seems to be directly related to an increase in load: as connections increase, the average latency doubles relative to Vert.x.

My setup is not perfect; better hardware would reveal more. But I believe it is enough to demonstrate the issue.

Container 1 is a Docker container that has wrk.
Container 2 is one of Vert.x, Reactor Netty, or WebFlux.

All running on PCF - no HTTPS.

Each app accepts a parameter that specifies how many characters to return. Once this value is computed, it is cached in a concurrent map so subsequent retrieval times are constant. I am using a 10 KB string for GETs only.
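
A minimal sketch of such a handler on Reactor Netty (0.8-era API; the route, cache, and payload character are illustrative, not taken from the repo below):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import reactor.core.publisher.Mono;
import reactor.netty.http.server.HttpServer;

public class DemoServer {

    // Payloads are computed once per length and cached, so subsequent
    // retrievals are constant-time map lookups.
    private static final Map<Integer, String> CACHE = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        HttpServer.create()
                  .port(8080)
                  .route(routes -> routes.get("/demo/{length}", (req, res) -> {
                      int length = Integer.parseInt(req.param("length"));
                      String payload = CACHE.computeIfAbsent(length,
                              n -> "a".repeat(n)); // Java 11+
                      return res.sendString(Mono.just(payload));
                  }))
                  .bindNow()
                  .onDispose()
                  .block();
    }
}
```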

Below are the results. The repo with these back ends can be found here.

Also, I don't know why, but the results for WebFlux aren't even in the same league: about 20%-30% of the throughput. Maybe @rstoyanchev can comment here.

Do you consider these results to be as expected or is there room for tuning here?

# Vert.x 200 Connections

Running 1m test @ vertx/demo?length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    15.46ms    5.51ms  61.43ms   82.83%
	Req/Sec   779.30    219.36     3.37k    69.11%
  Latency Distribution
	 50%   13.31ms
	 75%   14.51ms
	 90%   27.11ms
	 99%   30.10ms
  745196 requests in 1.00m, 7.14GB read
Requests/sec:  12400.12
Transfer/sec:    121.59MB

Running 1m test @ vertx/demo?length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    16.45ms    6.49ms  55.82ms   75.52%
	Req/Sec   732.69    240.14     1.40k    54.57%
  Latency Distribution
	 50%   13.27ms
	 75%   17.66ms
	 90%   27.78ms
	 99%   31.21ms
  700642 requests in 1.00m, 6.71GB read
Requests/sec:  11664.57
Transfer/sec:    114.38MB

# Netty Reactor 200 Connections

Running 1m test @ reactor/demo/10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    16.99ms    3.65ms 356.03ms   98.16%
	Req/Sec   711.36     37.49     2.17k    88.89%
  Latency Distribution
	 50%   16.90ms
	 75%   17.40ms
	 90%   18.02ms
	 99%   20.84ms
  680314 requests in 1.00m, 6.54GB read
Requests/sec:  11320.15
Transfer/sec:    111.48MB

Running 1m test @ reactor/demo/10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    17.07ms    1.83ms 346.45ms   91.88%
	Req/Sec   706.00     66.29     6.55k    97.03%
  Latency Distribution
	 50%   17.00ms
	 75%   17.55ms
	 90%   18.32ms
	 99%   20.63ms
  675050 requests in 1.00m, 6.49GB read
Requests/sec:  11233.20
Transfer/sec:    110.62MB

# Webflux 200 Connections

Running 1m test @ webflux/demo?length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    61.95ms   51.87ms 501.79ms   85.85%
	Req/Sec   250.85     97.85     0.87k    68.62%
  Latency Distribution
	 50%   39.86ms
	 75%   61.63ms
	 90%  152.86ms
	 99%  226.46ms
  223137 requests in 1.00m, 2.15GB read
Requests/sec:   3713.93
Transfer/sec:     36.56MB

Running 1m test @ webflux/demo?length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    60.86ms   50.89ms 315.27ms   85.82%
	Req/Sec   254.25    102.20   444.00     68.41%
  Latency Distribution
	 50%   38.78ms
	 75%   60.42ms
	 90%  150.52ms
	 99%  221.71ms
  227260 requests in 1.00m, 2.18GB read
Requests/sec:   3783.95
Transfer/sec:     37.25MB

# Vert.x 400 Connections

Running 1m test @ vertx/demo?length=10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    48.74ms   76.99ms   1.70s    90.41%
	Req/Sec     0.89k   161.09     3.87k    81.73%
  Latency Distribution
	 50%   21.91ms
	 75%   24.84ms
	 90%  119.94ms
	 99%  357.55ms
  846378 requests in 1.00m, 8.11GB read
Requests/sec:  14084.93
Transfer/sec:    138.12MB


Running 1m test @ vertx:61010/demo?length=10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    50.82ms   82.42ms   1.73s    90.48%
	Req/Sec     0.87k   149.22     2.35k    82.03%
  Latency Distribution
	 50%   22.14ms
	 75%   25.16ms
	 90%  126.41ms
	 99%  368.84ms
  834282 requests in 1.00m, 7.99GB read
  Socket errors: connect 0, read 0, write 0, timeout 1
Requests/sec:  13881.81
Transfer/sec:    136.12MB

# Reactor 400 Connections

Running 1m test @ reactor/demo/10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    79.66ms  123.13ms   1.73s    85.99%
	Req/Sec   689.15    116.93     2.16k    69.70%
  Latency Distribution
	 50%   20.71ms
	 75%   62.60ms
	 90%  253.73ms
	 99%  578.47ms
  659097 requests in 1.00m, 6.34GB read
Requests/sec:  10968.17
Transfer/sec:    108.02MB

Running 1m test @ reactor/demo/10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    78.61ms  122.80ms   1.93s    86.25%
	Req/Sec   698.75    118.17     2.52k    68.08%
  Latency Distribution
	 50%   20.31ms
	 75%   62.59ms
	 90%  249.61ms
	 99%  574.99ms
  668324 requests in 1.00m, 6.43GB read
  Socket errors: connect 0, read 0, write 0, timeout 11
Requests/sec:  11120.90
Transfer/sec:    109.53MB

# Webflux 400 Connections

Running 1m test @ webflux/demo?length=10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency   141.42ms  107.12ms   1.84s    79.03%
	Req/Sec   201.46    116.89     2.38k    61.26%
  Latency Distribution
	 50%   90.64ms
	 75%  219.74ms
	 90%  287.57ms
	 99%  492.88ms
  184193 requests in 1.00m, 1.77GB read
  Socket errors: connect 0, read 0, write 0, timeout 1
Requests/sec:   3065.77
Transfer/sec:     30.19MB

Running 1m test @ webflux/demo?length=10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency   139.95ms  103.75ms   1.83s    77.53%
	Req/Sec   203.75    115.47     1.19k    60.02%
  Latency Distribution
	 50%   91.34ms
	 75%  213.19ms
	 90%  285.76ms
	 99%  463.20ms
  184498 requests in 1.00m, 1.77GB read
Requests/sec:   3071.29
Transfer/sec:     30.24MB
@smaldini
Contributor

These results are aligned with other reports, and we consider them issues, at least on our part. We also have a slightly different default setup on the TCP and thread side that could affect the results. Thanks for the repos; they will help us identify the gap and see whether it's related to the same causes as the existing issues.

@violetagg
Member

violetagg commented Mar 21, 2019

@dave-fl is this wrk or wrk2?
Are Vert.x and Reactor Netty using epoll as transport?
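
(Aside: whether Netty's native epoll transport is actually usable can be verified at runtime with Netty's own API; a minimal sketch, assuming netty-transport-native-epoll with the proper OS classifier is on the classpath:)

```java
import io.netty.channel.epoll.Epoll;

public class TransportCheck {
    public static void main(String[] args) {
        // Reactor Netty prefers the native transport when it is available;
        // it can be disabled with -Dreactor.netty.native=false.
        System.out.println("epoll available: " + Epoll.isAvailable());
        if (!Epoll.isAvailable()) {
            // Explains why, e.g. a missing native library or a non-Linux OS.
            Epoll.unavailabilityCause().printStackTrace();
        }
    }
}
```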

@violetagg
Member

@dave-fl Is it possible it is something in the setup? I have 2 VMs; on the first I have wrk2, on the second your app, and then I receive the following:

Reactor Netty

$ wrk -t 16 -c 400 -d 30s -R 20000 http://<IP>:8080/demo/10240
Running 30s test @ http://<IP>:8080/demo/10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.80s     2.55s   15.14s    58.25%
    Req/Sec   698.56      0.70   700.00    100.00%
  315342 requests in 30.00s, 3.03GB read
  Socket errors: connect 0, read 0, write 0, timeout 45
Requests/sec:  10511.34
Transfer/sec:    103.57MB

Vert.x

$ wrk -t 16 -c 400 -d 30s -R 20000 http://<IP>:8080/demo?length=10240
Running 30s test @ http://<IP>:8080/demo?length=10240
  16 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.02s     2.63s   15.03s    62.47%
    Req/Sec   707.66     11.75   730.00     75.00%
  330808 requests in 30.00s, 3.17GB read
Requests/sec:  11028.29
Transfer/sec:    108.20MB

@rstoyanchev
Contributor

@dave-fl, a couple of comments to start. The description of wrk2 explains the deficiencies of using wrk for measuring latency, especially at the high percentiles. To my knowledge, Gatling has those addressed. Also, for WebFlux a fairer comparison would be against functional endpoints, vs the annotated programming model, which does more and is expected to have some overhead. Not saying this as a conclusion, just something to consider. It would be good to see numbers with wrk2 or Gatling and WebFlux.fn.
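
For illustration, a functional endpoint for this benchmark could look roughly like the following (a sketch against the Spring 5.1-era WebFlux.fn API; names and the payload are illustrative, not from the benchmark repo):

```java
import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

@Configuration
public class DemoRouter {

    @Bean
    public RouterFunction<ServerResponse> demoRoute() {
        // Equivalent of an annotated @GetMapping("/demo") handler, minus the
        // argument-resolution machinery of the annotation model.
        return route(GET("/demo"), request -> {
            int length = Integer.parseInt(request.queryParam("length").orElse("10240"));
            String payload = "a".repeat(length); // Java 11+; cache this in real code
            return ServerResponse.ok().syncBody(payload);
        });
    }
}
```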

@dave-fl
Author

dave-fl commented Mar 22, 2019

OK, so I set up two c5d.2xlarge instances on EC2. The latency numbers are all very close. I believe the limit here was the 10GbE network link, as it appears to have been saturated.

So why is this happening? It seems that many who are testing this could be bound by CPU. top was showing CPU to be around 340% for Vert.x and 540% for Reactor; WebFlux was 795%. I tried both the default and epoll transports and did not see a difference. WebFlux is still the laggard.

# Vert.x

Running 1m test @ http://10.80.4.16:8080/demo?length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     1.75ms    4.85ms 208.57ms   99.82%
	Req/Sec     7.62k   407.52    10.97k    79.08%
  Latency Distribution
	 50%    1.49ms
	 75%    1.76ms
	 90%    2.07ms
	 99%    2.62ms
  7292273 requests in 1.00m, 69.83GB read
Requests/sec: 121337.08
Transfer/sec:      1.16GB

Running 1m test @ http://10.80.4.16:8080/demo?length=10240
  32 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     4.07ms    4.40ms 217.41ms   99.83%
	Req/Sec     3.82k   328.63    11.09k    89.52%
  Latency Distribution
	 50%    4.14ms
	 75%    4.56ms
	 90%    4.82ms
	 99%    5.53ms
  7297259 requests in 1.00m, 69.88GB read
Requests/sec: 121418.89
Transfer/sec:      1.16GB

# Reactor Netty

Running 1m test @ http://10.80.4.16:8080/demo/10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     1.79ms    4.51ms 209.29ms   99.52%
	Req/Sec     7.48k   405.15    10.15k    78.01%
  Latency Distribution
	 50%    1.54ms
	 75%    1.82ms
	 90%    2.12ms
	 99%    4.66ms
  7154689 requests in 1.00m, 68.81GB read
Requests/sec: 119046.90
Transfer/sec:      1.14GB

Running 1m test @ http://10.80.4.16:8080/demo/10240
  32 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     4.14ms    3.96ms 226.30ms   98.26%
	Req/Sec     3.75k   299.20     8.03k    73.23%
  Latency Distribution
	 50%    3.97ms
	 75%    4.81ms
	 90%    5.79ms
	 99%    9.09ms
  7163387 requests in 1.00m, 68.89GB read
Requests/sec: 119191.73
Transfer/sec:      1.15GB

# Webflux

Running 1m test @ http://10.80.4.16:8080/demo?length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     2.50ms    4.06ms 209.26ms   99.64%
	Req/Sec     5.08k   176.35     6.81k    83.10%
  Latency Distribution
	 50%    2.39ms
	 75%    2.47ms
	 90%    2.77ms
	 99%    3.14ms
  4852480 requests in 1.00m, 46.65GB read
Requests/sec:  80740.71
Transfer/sec:    794.80MB

Running 1m test @ http://10.80.4.16:8080/demo?length=10240
  32 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     6.07ms    3.30ms 216.30ms   98.95%
	Req/Sec     2.51k   147.93     7.30k    93.95%
  Latency Distribution
	 50%    6.03ms
	 75%    6.20ms
	 90%    7.28ms
	 99%    9.28ms
  4794380 requests in 1.00m, 46.09GB read
Requests/sec:  79773.23
Transfer/sec:    785.27MB

@dave-fl
Author

dave-fl commented Mar 22, 2019

Alright, last run, using wrk2. Also optimized the WebFlux functions to use buffers and ran on a bigger machine (16 cores, so I wouldn't top out the CPU; I can saturate the NIC now even with WebFlux). You see more CPU usage with Reactor, plus a larger deviation.
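
The buffer optimization here amounts to caching the pre-encoded bytes rather than a String, so the payload is not re-encoded on every response; a rough sketch (handler and cache names are mine, not from the repo):

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

public class BufferHandler {

    // Encode the payload once per length; every response reuses the same byte[].
    private static final Map<Integer, byte[]> CACHE = new ConcurrentHashMap<>();

    public static Mono<ServerResponse> demo(ServerRequest request) {
        int length = Integer.parseInt(request.queryParam("length").orElse("10240"));
        byte[] payload = CACHE.computeIfAbsent(length,
                n -> "a".repeat(n).getBytes(StandardCharsets.US_ASCII));
        return ServerResponse.ok()
                             .contentType(MediaType.TEXT_PLAIN)
                             .syncBody(payload);
    }
}
```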

# Webflux Function with Buffer (CPU 700, CPU was 900 with String)
Running 1m test @ http://10.80.4.25:8080/demo?length=10240
  8 threads and 500 connections
  Thread calibration: mean lat.: 7.251ms, rate sampling interval: 41ms
  Thread calibration: mean lat.: 7.873ms, rate sampling interval: 43ms
  Thread calibration: mean lat.: 5.186ms, rate sampling interval: 31ms
  Thread calibration: mean lat.: 5.818ms, rate sampling interval: 38ms
  Thread calibration: mean lat.: 5.803ms, rate sampling interval: 32ms
  Thread calibration: mean lat.: 5.918ms, rate sampling interval: 40ms
  Thread calibration: mean lat.: 4.309ms, rate sampling interval: 23ms
  Thread calibration: mean lat.: 5.860ms, rate sampling interval: 38ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    11.41ms   24.94ms 318.46ms   88.47%
	Req/Sec    15.23k   343.21    16.76k    78.34%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    1.66ms
 75.000%    4.41ms
 90.000%   41.41ms
 99.000%  114.94ms
 99.900%  225.28ms
 99.990%  278.53ms
 99.999%  314.62ms
100.000%  318.72ms

  Detailed Percentile spectrum:
	   Value   Percentile   TotalCount 1/(1-Percentile)

	   0.198     0.000000            1         1.00
	   0.818     0.100000       597098         1.11
	   1.051     0.200000      1194325         1.25
	   1.248     0.300000      1788860         1.43
	   1.444     0.400000      2386244         1.67
	   1.665     0.500000      2980276         2.00
	   1.797     0.550000      3279647         2.22
	   1.958     0.600000      3576236         2.50
	   2.187     0.650000      3875522         2.86
	   2.839     0.700000      4172415         3.33
	   4.415     0.750000      4470339         4.00
	   6.147     0.775000      4619365         4.44
	   9.783     0.800000      4768315         5.00
	  15.951     0.825000      4917334         5.71
	  24.287     0.850000      5066296         6.67
	  32.895     0.875000      5215393         8.00
	  37.343     0.887500      5290130         8.89
	  41.407     0.900000      5364806        10.00
	  45.151     0.912500      5439027        11.43
	  49.599     0.925000      5513565        13.33
	  56.255     0.937500      5587802        16.00
	  60.575     0.943750      5625008        17.78
	  65.471     0.950000      5662243        20.00
	  70.463     0.956250      5699960        22.86
	  76.095     0.962500      5736777        26.67
	  82.815     0.968750      5774116        32.00
	  86.079     0.971875      5792942        35.56
	  89.343     0.975000      5811269        40.00
	  92.863     0.978125      5830139        45.71
	  96.639     0.981250      5848588        53.33
	 101.567     0.984375      5867240        64.00
	 104.383     0.985938      5876512        71.11
	 107.711     0.987500      5885845        80.00
	 111.871     0.989062      5895115        91.43
	 116.991     0.990625      5904410       106.67
	 122.623     0.992188      5913717       128.00
	 125.887     0.992969      5918400       142.22
	 129.535     0.993750      5923065       160.00
	 133.631     0.994531      5927710       182.86
	 138.111     0.995313      5932333       213.33
	 143.359     0.996094      5937000       256.00
	 147.071     0.996484      5939323       284.44
	 152.063     0.996875      5941638       320.00
	 158.335     0.997266      5943999       365.71
	 165.119     0.997656      5946301       426.67
	 178.815     0.998047      5948620       512.00
	 185.983     0.998242      5949786       568.89
	 193.663     0.998437      5950946       640.00
	 207.359     0.998633      5952100       731.43
	 219.263     0.998828      5953264       853.33
	 226.047     0.999023      5954431      1024.00
	 230.399     0.999121      5955015      1137.78
	 235.647     0.999219      5955597      1280.00
	 239.999     0.999316      5956208      1462.86
	 242.687     0.999414      5956767      1706.67
	 245.375     0.999512      5957355      2048.00
	 247.039     0.999561      5957649      2275.56
	 248.703     0.999609      5957930      2560.00
	 250.879     0.999658      5958216      2925.71
	 254.847     0.999707      5958508      3413.33
	 263.679     0.999756      5958794      4096.00
	 268.031     0.999780      5958940      4551.11
	 270.079     0.999805      5959087      5120.00
	 272.127     0.999829      5959242      5851.43
	 274.431     0.999854      5959387      6826.67
	 276.991     0.999878      5959535      8192.00
	 278.015     0.999890      5959612      9102.22
	 278.783     0.999902      5959674     10240.00
	 280.575     0.999915      5959741     11702.86
	 282.111     0.999927      5959814     13653.33
	 283.391     0.999939      5959894     16384.00
	 283.903     0.999945      5959928     18204.44
	 286.463     0.999951      5959959     20480.00
	 289.791     0.999957      5959994     23405.71
	 294.655     0.999963      5960030     27306.67
	 302.335     0.999969      5960068     32768.00
	 304.127     0.999973      5960085     36408.89
	 306.687     0.999976      5960104     40960.00
	 309.247     0.999979      5960122     46811.43
	 310.527     0.999982      5960139     54613.33
	 312.319     0.999985      5960158     65536.00
	 313.343     0.999986      5960169     72817.78
	 313.855     0.999988      5960177     81920.00
	 314.367     0.999989      5960186     93622.86
	 314.879     0.999991      5960196    109226.67
	 315.391     0.999992      5960204    131072.00
	 315.903     0.999993      5960212    145635.56
	 315.903     0.999994      5960212    163840.00
	 316.415     0.999995      5960221    187245.71
	 316.415     0.999995      5960221    218453.33
	 316.927     0.999996      5960233    262144.00
	 316.927     0.999997      5960233    291271.11
	 316.927     0.999997      5960233    327680.00
	 316.927     0.999997      5960233    374491.43
	 317.183     0.999998      5960235    436906.67
	 317.439     0.999998      5960237    524288.00
	 317.695     0.999998      5960242    582542.22
	 317.695     0.999998      5960242    655360.00
	 317.695     0.999999      5960242    748982.86
	 317.695     0.999999      5960242    873813.33
	 317.951     0.999999      5960243   1048576.00
	 317.951     0.999999      5960243   1165084.44
	 318.207     0.999999      5960245   1310720.00
	 318.207     0.999999      5960245   1497965.71
	 318.207     0.999999      5960245   1747626.67
	 318.463     1.000000      5960247   2097152.00
	 318.463     1.000000      5960247   2330168.89
	 318.463     1.000000      5960247   2621440.00
	 318.463     1.000000      5960247   2995931.43
	 318.463     1.000000      5960247   3495253.33
	 318.463     1.000000      5960247   4194304.00
	 318.463     1.000000      5960247   4660337.78
	 318.463     1.000000      5960247   5242880.00
	 318.719     1.000000      5960248   5991862.86
	 318.719     1.000000      5960248          inf
#[Mean    =       11.414, StdDeviation   =       24.945]
#[Max     =      318.464, Total count    =      5960248]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  7178834 requests in 1.00m, 68.92GB read
Requests/sec: 119649.56
Transfer/sec:      1.15GB

# Reactor Netty - CPU 460

ubuntu@ip-10-80-4-205:~/repositories$ wrk2 -c500 -t8 -R120000 --latency 'http://10.80.4.25:8080/demo/10240' -d1m
Running 1m test @ http://10.80.4.25:8080/demo/10240
  8 threads and 500 connections
  Thread calibration: mean lat.: 36.430ms, rate sampling interval: 206ms
  Thread calibration: mean lat.: 31.648ms, rate sampling interval: 180ms
  Thread calibration: mean lat.: 27.338ms, rate sampling interval: 148ms
  Thread calibration: mean lat.: 45.181ms, rate sampling interval: 404ms
  Thread calibration: mean lat.: 33.287ms, rate sampling interval: 178ms
  Thread calibration: mean lat.: 27.757ms, rate sampling interval: 159ms
  Thread calibration: mean lat.: 32.678ms, rate sampling interval: 171ms
  Thread calibration: mean lat.: 29.127ms, rate sampling interval: 169ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency   225.98ms  508.18ms   3.06s    88.15%
	Req/Sec    14.94k   112.22    15.39k    78.64%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    2.18ms
 75.000%  158.46ms
 90.000%  987.14ms
 99.000%    2.22s
 99.900%    2.56s
 99.990%    2.91s
 99.999%    3.05s
100.000%    3.06s

  Detailed Percentile spectrum:
	   Value   Percentile   TotalCount 1/(1-Percentile)

	   0.195     0.000000            2         1.00
	   1.124     0.100000       593830         1.11
	   1.381     0.200000      1186284         1.25
	   1.606     0.300000      1776889         1.43
	   1.852     0.400000      2369526         1.67
	   2.181     0.500000      2962956         2.00
	   2.753     0.550000      3257440         2.22
	   3.693     0.600000      3554004         2.50
	   6.295     0.650000      3849674         2.86
	  57.663     0.700000      4145807         3.33
	 158.463     0.750000      4441995         4.00
	 210.175     0.775000      4590202         4.44
	 256.255     0.800000      4738189         5.00
	 298.495     0.825000      4886240         5.71
	 354.815     0.850000      5034476         6.67
	 518.911     0.875000      5182202         8.00
	 889.855     0.887500      5256517         8.89
	 987.135     0.900000      5330436        10.00
	1133.567     0.912500      5404614        11.43
	1272.831     0.925000      5478774        13.33
	1382.399     0.937500      5553019        16.00
	1434.623     0.943750      5590016        17.78
	1499.135     0.950000      5626787        20.00
	1595.391     0.956250      5663633        22.86
	1741.823     0.962500      5700550        26.67
	1895.423     0.968750      5737458        32.00
	1950.719     0.971875      5755955        35.56
	1999.871     0.975000      5774554        40.00
	2042.879     0.978125      5793000        45.71
	2087.935     0.981250      5811748        53.33
	2134.015     0.984375      5830360        64.00
	2156.543     0.985938      5839830        71.11
	2181.119     0.987500      5848878        80.00
	2203.647     0.989062      5857834        91.43
	2232.319     0.990625      5867457       106.67
	2263.039     0.992188      5876564       128.00
	2279.423     0.992969      5881282       142.22
	2295.807     0.993750      5885694       160.00
	2316.287     0.994531      5890286       182.86
	2338.815     0.995313      5895022       213.33
	2367.487     0.996094      5899493       256.00
	2383.871     0.996484      5901921       284.44
	2398.207     0.996875      5904126       320.00
	2416.639     0.997266      5906454       365.71
	2443.263     0.997656      5908633       426.67
	2478.079     0.998047      5911006       512.00
	2494.463     0.998242      5912207       568.89
	2508.799     0.998437      5913384       640.00
	2519.039     0.998633      5914422       731.43
	2537.471     0.998828      5915574       853.33
	2562.047     0.999023      5916745      1024.00
	2572.287     0.999121      5917342      1137.78
	2590.719     0.999219      5917920      1280.00
	2648.063     0.999316      5918454      1462.86
	2707.455     0.999414      5919031      1706.67
	2754.559     0.999512      5919685      2048.00
	2762.751     0.999561      5919948      2275.56
	2770.943     0.999609      5920190      2560.00
	2787.327     0.999658      5920494      2925.71
	2799.615     0.999707      5920794      3413.33
	2809.855     0.999756      5921057      4096.00
	2820.095     0.999780      5921242      4551.11
	2830.335     0.999805      5921347      5120.00
	2846.719     0.999829      5921507      5851.43
	2856.959     0.999854      5921637      6826.67
	2875.391     0.999878      5921800      8192.00
	2881.535     0.999890      5921858      9102.22
	2916.351     0.999902      5921921     10240.00
	2971.647     0.999915      5921994     11702.86
	2988.031     0.999927      5922071     13653.33
	3008.511     0.999939      5922139     16384.00
	3016.703     0.999945      5922177     18204.44
	3028.991     0.999951      5922213     20480.00
	3033.087     0.999957      5922246     23405.71
	3037.183     0.999963      5922294     27306.67
	3039.231     0.999969      5922336     32768.00
	3041.279     0.999973      5922347     36408.89
	3045.375     0.999976      5922380     40960.00
	3045.375     0.999979      5922380     46811.43
	3047.423     0.999982      5922421     54613.33
	3047.423     0.999985      5922421     65536.00
	3047.423     0.999986      5922421     72817.78
	3049.471     0.999988      5922439     81920.00
	3049.471     0.999989      5922439     93622.86
	3051.519     0.999991      5922458    109226.67
	3051.519     0.999992      5922458    131072.00
	3053.567     0.999993      5922469    145635.56
	3053.567     0.999994      5922469    163840.00
	3053.567     0.999995      5922469    187245.71
	3057.663     0.999995      5922475    218453.33
	3059.711     0.999996      5922487    262144.00
	3059.711     0.999997      5922487    291271.11
	3059.711     0.999997      5922487    327680.00
	3059.711     0.999997      5922487    374491.43
	3059.711     0.999998      5922487    436906.67
	3061.759     0.999998      5922499    524288.00
	3061.759     1.000000      5922499          inf
#[Mean    =      225.985, StdDeviation   =      508.179]
#[Max     =     3059.712, Total count    =      5922499]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  7133738 requests in 1.00m, 68.61GB read
Requests/sec: 118898.49
Transfer/sec:      1.14GB

# Vert.x - CPU 260
ubuntu@ip-10-80-4-205:~/repositories$ wrk2 -c500 -t8 -R120000 --latency 'http://10.80.4.25:8080/demo?length=10240' -d1m
Running 1m test @ http://10.80.4.25:8080/demo?length=10240
  8 threads and 500 connections
  Thread calibration: mean lat.: 1.275ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.278ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.278ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.319ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.286ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.321ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.253ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.263ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     1.20ms    1.27ms 205.31ms   97.89%
	Req/Sec    15.86k     0.95k   20.89k    70.05%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    1.13ms
 75.000%    1.56ms
 90.000%    1.88ms
 99.000%    2.99ms
 99.900%    5.16ms
 99.990%   11.57ms
 99.999%  182.65ms
100.000%  205.44ms

  Detailed Percentile spectrum:
	   Value   Percentile   TotalCount 1/(1-Percentile)

	   0.138     0.000000            2         1.00
	   0.488     0.100000       597368         1.11
	   0.654     0.200000      1194280         1.25
	   0.812     0.300000      1791929         1.43
	   0.973     0.400000      2388458         1.67
	   1.134     0.500000      2982935         2.00
	   1.216     0.550000      3279926         2.22
	   1.300     0.600000      3579878         2.50
	   1.384     0.650000      3876346         2.86
	   1.470     0.700000      4176488         3.33
	   1.557     0.750000      4472663         4.00
	   1.602     0.775000      4623048         4.44
	   1.648     0.800000      4772092         5.00
	   1.697     0.825000      4922022         5.71
	   1.749     0.850000      5068387         6.67
	   1.809     0.875000      5217577         8.00
	   1.844     0.887500      5293866         8.89
	   1.881     0.900000      5366685        10.00
	   1.924     0.912500      5441210        11.43
	   1.974     0.925000      5515617        13.33
	   2.033     0.937500      5590921        16.00
	   2.067     0.943750      5628994        17.78
	   2.103     0.950000      5665053        20.00
	   2.147     0.956250      5702453        22.86
	   2.201     0.962500      5740061        26.67
	   2.271     0.968750      5776858        32.00
	   2.317     0.971875      5795430        35.56
	   2.373     0.975000      5813880        40.00
	   2.445     0.978125      5832660        45.71
	   2.535     0.981250      5851006        53.33
	   2.655     0.984375      5869644        64.00
	   2.729     0.985938      5878908        71.11
	   2.815     0.987500      5888198        80.00
	   2.919     0.989062      5897602        91.43
	   3.041     0.990625      5906959       106.67
	   3.187     0.992188      5916249       128.00
	   3.273     0.992969      5920840       142.22
	   3.375     0.993750      5925528       160.00
	   3.493     0.994531      5930152       182.86
	   3.627     0.995313      5934829       213.33
	   3.789     0.996094      5939483       256.00
	   3.879     0.996484      5941784       284.44
	   3.985     0.996875      5944139       320.00
	   4.111     0.997266      5946438       365.71
	   4.259     0.997656      5948799       426.67
	   4.443     0.998047      5951101       512.00
	   4.555     0.998242      5952280       568.89
	   4.683     0.998437      5953435       640.00
	   4.815     0.998633      5954590       731.43
	   4.983     0.998828      5955751       853.33
	   5.183     0.999023      5956913      1024.00
	   5.303     0.999121      5957502      1137.78
	   5.447     0.999219      5958086      1280.00
	   5.603     0.999316      5958657      1462.86
	   5.775     0.999414      5959245      1706.67
	   5.991     0.999512      5959826      2048.00
	   6.111     0.999561      5960119      2275.56
	   6.235     0.999609      5960407      2560.00
	   6.371     0.999658      5960701      2925.71
	   6.519     0.999707      5960992      3413.33
	   6.699     0.999756      5961281      4096.00
	   6.803     0.999780      5961429      4551.11
	   6.915     0.999805      5961567      5120.00
	   7.035     0.999829      5961714      5851.43
	   7.255     0.999854      5961860      6826.67
	   7.607     0.999878      5962004      8192.00
	   8.575     0.999890      5962076      9102.22
	  12.079     0.999902      5962150     10240.00
	  16.991     0.999915      5962222     11702.86
	  37.471     0.999927      5962295     13653.33
	  65.919     0.999939      5962368     16384.00
	  79.935     0.999945      5962404     18204.44
	  94.143     0.999951      5962440     20480.00
	 107.647     0.999957      5962477     23405.71
	 122.367     0.999963      5962513     27306.67
	 135.679     0.999969      5962550     32768.00
	 142.975     0.999973      5962568     36408.89
	 150.399     0.999976      5962587     40960.00
	 157.439     0.999979      5962604     46811.43
	 163.327     0.999982      5962622     54613.33
	 170.751     0.999985      5962641     65536.00
	 174.719     0.999986      5962650     72817.78
	 178.175     0.999988      5962659     81920.00
	 181.887     0.999989      5962668     93622.86
	 185.727     0.999991      5962678    109226.67
	 188.799     0.999992      5962687    131072.00
	 190.335     0.999993      5962691    145635.56
	 191.231     0.999994      5962695    163840.00
	 193.663     0.999995      5962700    187245.71
	 194.815     0.999995      5962704    218453.33
	 197.503     0.999996      5962709    262144.00
	 197.887     0.999997      5962711    291271.11
	 198.655     0.999997      5962713    327680.00
	 198.911     0.999997      5962716    374491.43
	 200.831     0.999998      5962718    436906.67
	 201.599     0.999998      5962721    524288.00
	 201.599     0.999998      5962721    582542.22
	 201.855     0.999998      5962723    655360.00
	 202.495     0.999999      5962724    748982.86
	 202.623     0.999999      5962725    873813.33
	 202.879     0.999999      5962726   1048576.00
	 202.879     0.999999      5962726   1165084.44
	 204.159     0.999999      5962728   1310720.00
	 204.159     0.999999      5962728   1497965.71
	 204.159     0.999999      5962728   1747626.67
	 204.543     1.000000      5962729   2097152.00
	 204.543     1.000000      5962729   2330168.89
	 204.543     1.000000      5962729   2621440.00
	 205.439     1.000000      5962731   2995931.43
	 205.439     1.000000      5962731          inf
#[Mean    =        1.196, StdDeviation   =        1.266]
#[Max     =      205.312, Total count    =      5962731]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  7181674 requests in 1.00m, 68.77GB read
Requests/sec: 119696.18
Transfer/sec:      1.15GB

@peopleremote

@dave-fl
May we know the rough plan to improve this?

Thanks, Stephen

@smaldini
Contributor

smaldini commented Apr 18, 2019

@dave-fl if you can, try this branch locally: https://github.com/reactor/reactor-netty/tree/MonoSend-experiment. We are still polishing it, but it showed great promise on my machine.

@dave-fl
Author

dave-fl commented Apr 28, 2019

@smaldini @violetagg Just tested this branch. Huge improvement. Performance is way up, and stddev is very small. CPU usage is way down: about 270-300 during the run (previously about 480).

When is this going into the main branch? :)

ubuntu@ip-10-80-4-169:~$ wrk2 -c500 -t8 -R120000 --latency 'http://10.80.4.92:8080/demo/10240' -d1m
Running 1m test @ http://10.80.4.92:8080/demo/10240
  8 threads and 500 connections
  Thread calibration: mean lat.: 1.242ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.237ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.232ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.226ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.246ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.267ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.250ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.226ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency     1.31ms    1.21ms  26.22ms   96.80%
	Req/Sec    15.86k     0.98k   21.56k    72.33%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    1.18ms
 75.000%    1.61ms
 90.000%    1.98ms
 99.000%    4.26ms
 99.900%   18.22ms
 99.990%   23.95ms
 99.999%   25.41ms
100.000%   26.24ms

  Detailed Percentile spectrum:
	   Value   Percentile   TotalCount 1/(1-Percentile)

	   0.117     0.000000            1         1.00
	   0.498     0.100000       596767         1.11
	   0.676     0.200000      1193203         1.25
	   0.847     0.300000      1791887         1.43
	   1.016     0.400000      2386099         1.67
	   1.183     0.500000      2981610         2.00
	   1.267     0.550000      3279688         2.22
	   1.352     0.600000      3580851         2.50
	   1.436     0.650000      3876502         2.86
	   1.523     0.700000      4176846         3.33
	   1.611     0.750000      4473337         4.00
	   1.657     0.775000      4622023         4.44
	   1.706     0.800000      4771813         5.00
	   1.759     0.825000      4920348         5.71
	   1.819     0.850000      5069489         6.67
	   1.890     0.875000      5219102         8.00
	   1.930     0.887500      5292383         8.89
	   1.976     0.900000      5366360        10.00
	   2.030     0.912500      5442047        11.43
	   2.091     0.925000      5517052        13.33
	   2.163     0.937500      5590393        16.00
	   2.209     0.943750      5628470        17.78
	   2.263     0.950000      5665778        20.00
	   2.329     0.956250      5701910        22.86
	   2.419     0.962500      5739499        26.67
	   2.541     0.968750      5776670        32.00
	   2.619     0.971875      5794938        35.56
	   2.721     0.975000      5813635        40.00
	   2.859     0.978125      5832399        45.71
	   3.049     0.981250      5850869        53.33
	   3.297     0.984375      5869439        64.00
	   3.475     0.985938      5878765        71.11
	   3.713     0.987500      5888065        80.00
	   4.025     0.989062      5897389        91.43
	   4.455     0.990625      5906742       106.67
	   5.283     0.992188      5916014       128.00
	   5.963     0.992969      5920681       142.22
	   7.115     0.993750      5925342       160.00
	   8.503     0.994531      5930012       182.86
	   9.807     0.995313      5934659       213.33
	  11.023     0.996094      5939324       256.00
	  11.719     0.996484      5941654       284.44
	  12.583     0.996875      5943967       320.00
	  13.599     0.997266      5946291       365.71
	  14.583     0.997656      5948635       426.67
	  15.407     0.998047      5950967       512.00
	  15.775     0.998242      5952120       568.89
	  16.231     0.998437      5953295       640.00
	  16.735     0.998633      5954450       731.43
	  17.423     0.998828      5955615       853.33
	  18.367     0.999023      5956776      1024.00
	  19.055     0.999121      5957363      1137.78
	  19.855     0.999219      5957937      1280.00
	  20.479     0.999316      5958528      1462.86
	  21.087     0.999414      5959105      1706.67
	  21.583     0.999512      5959697      2048.00
	  21.839     0.999561      5959990      2275.56
	  22.079     0.999609      5960274      2560.00
	  22.319     0.999658      5960565      2925.71
	  22.559     0.999707      5960867      3413.33
	  22.879     0.999756      5961144      4096.00
	  23.023     0.999780      5961289      4551.11
	  23.215     0.999805      5961435      5120.00
	  23.375     0.999829      5961580      5851.43
	  23.567     0.999854      5961723      6826.67
	  23.759     0.999878      5961867      8192.00
	  23.871     0.999890      5961941      9102.22
	  23.983     0.999902      5962017     10240.00
	  24.095     0.999915      5962091     11702.86
	  24.223     0.999927      5962161     13653.33
	  24.367     0.999939      5962231     16384.00
	  24.447     0.999945      5962272     18204.44
	  24.527     0.999951      5962305     20480.00
	  24.607     0.999957      5962341     23405.71
	  24.719     0.999963      5962378     27306.67
	  24.847     0.999969      5962416     32768.00
	  24.911     0.999973      5962436     36408.89
	  24.975     0.999976      5962451     40960.00
	  25.055     0.999979      5962468     46811.43
	  25.167     0.999982      5962486     54613.33
	  25.279     0.999985      5962506     65536.00
	  25.327     0.999986      5962517     72817.78
	  25.343     0.999988      5962524     81920.00
	  25.391     0.999989      5962532     93622.86
	  25.455     0.999991      5962540    109226.67
	  25.519     0.999992      5962549    131072.00
	  25.567     0.999993      5962554    145635.56
	  25.615     0.999994      5962558    163840.00
	  25.695     0.999995      5962563    187245.71
	  25.775     0.999995      5962567    218453.33
	  25.823     0.999996      5962574    262144.00
	  25.823     0.999997      5962574    291271.11
	  25.839     0.999997      5962576    327680.00
	  25.871     0.999997      5962579    374491.43
	  25.919     0.999998      5962583    436906.67
	  25.919     0.999998      5962583    524288.00
	  25.951     0.999998      5962586    582542.22
	  25.951     0.999998      5962586    655360.00
	  25.983     0.999999      5962587    748982.86
	  25.999     0.999999      5962590    873813.33
	  25.999     0.999999      5962590   1048576.00
	  25.999     0.999999      5962590   1165084.44
	  25.999     0.999999      5962590   1310720.00
	  26.031     0.999999      5962592   1497965.71
	  26.031     0.999999      5962592   1747626.67
	  26.031     1.000000      5962592   2097152.00
	  26.031     1.000000      5962592   2330168.89
	  26.031     1.000000      5962592   2621440.00
	  26.079     1.000000      5962593   2995931.43
	  26.079     1.000000      5962593   3495253.33
	  26.079     1.000000      5962593   4194304.00
	  26.079     1.000000      5962593   4660337.78
	  26.079     1.000000      5962593   5242880.00
	  26.239     1.000000      5962594   5991862.86
	  26.239     1.000000      5962594          inf
#[Mean    =        1.309, StdDeviation   =        1.211]
#[Max     =       26.224, Total count    =      5962594]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  7181569 requests in 1.00m, 68.94GB read
Requests/sec: 119695.95
Transfer/sec:      1.15GB

@benitocm

Hi,
Very informative numbers.
Did you check how the behavior changes if the number of connections is increased to, e.g., 50,000?

@dave-fl
Author

dave-fl commented Apr 30, 2019

@benitocm No. I think the key is that CPU and standard deviation are down. When a new build is made, I can do some more tests for the gateway too.

At that many connections, some of the native transport settings such as SO_REUSEPORT will likely become important.
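
For reference, on recent Reactor Netty versions a listening-socket option such as SO_REUSEPORT can be set roughly like this (a sketch; the option is Linux-specific and only takes effect with the native epoll transport):

```java
import io.netty.channel.epoll.EpollChannelOption;

import reactor.core.publisher.Mono;
import reactor.netty.DisposableServer;
import reactor.netty.http.server.HttpServer;

public class ReusePortServer {
    public static void main(String[] args) {
        DisposableServer server =
                HttpServer.create()
                          .port(8080)
                          // Applied to the listening (server) channel; only
                          // honored when the native epoll transport is in use.
                          .option(EpollChannelOption.SO_REUSEPORT, true)
                          .route(r -> r.get("/demo/{length}",
                                  (req, res) -> res.sendString(Mono.just("ok"))))
                          .bindNow();
        server.onDispose().block();
    }
}
```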

@smaldini smaldini added this to the 0.8.7.RELEASE milestone May 1, 2019
@benitocm

benitocm commented May 1, 2019

> @benitocm No. I think the key is that CPU and standard deviation are down. When a new build is made, I can do some more tests for the gateway too.
>
> At that many connections, some of the native transport settings such as SO_REUSEPORT will likely become important.
@dave-fl Would it be possible to send you a private message with a couple of questions? I have done a quick test with wrk and a very simple WebFlux server (just returning 200 OK) and was not able to get good numbers, so I would like to rule out silly mistakes and/or environment errors before drawing any conclusions.

@dave-fl
Author

dave-fl commented May 1, 2019

@benitocm

You can use my example repository. Just make sure to build with the referenced branch.

Also, don't run this on the same machine.

Take note of what I tested with: these were well-performing machines on AWS, C5.4xlarge and C5.2xlarge.

I can't comment on WebFlux really, as I didn't retest it. The performance with WebFlux functions was previously OK, but did have higher CPU usage. WebFlux without functions was bad; it was unable to saturate the link. Maybe now it can.

If you have specific questions, send them on Gitter. But I suggest you establish a baseline and see how your other setups compare.

@kimmking

kimmking commented May 5, 2019

Good news, @dave-fl!
And when is this going into the main branch and shipping in a new release? @smaldini
My gateway dev team is looking forward to it.
spring-cloud/spring-cloud-gateway#859

@smaldini
Contributor

smaldini commented May 6, 2019

Core changes have been merged (#725) and are scheduled for 0.8.7.RELEASE/0.9.0.M1 tomorrow.
We have more changes to apply that should further reduce GC pressure by using fewer allocations, and reduce contention on some paths (notably single send). We will open further tickets for those, and if the current issue does not seem sufficiently fixed, we will also reopen it.

@kimmking

kimmking commented Jun 3, 2019

@dave-fl
