
About performance of p99 latency in 200 concurrencies. #859

Closed
kimmking opened this issue Feb 27, 2019 · 18 comments

Comments

@kimmking commented Feb 27, 2019

@spencergibb Hi Spencer, could you re-run your wrk test command with the --latency option?
With -c200, I find that the 99th-percentile latency through the gateway is 2-3 times that of the direct-access case.

ref: #301

@spencergibb (Member)

I will take a look, likely next week.

@peopleremote

@spencergibb
Thanks a lot for looking into this. This issue determines whether we can adopt Spring Cloud Gateway as our production gateway. Our system is a latency-sensitive trading system handling about 4,000 requests per second, and we need the full order-placement cycle to stay under 100ms for most of our users. We would much appreciate your help in confirming whether the testing above reflects a real Spring Cloud Gateway problem, or whether some parameter tuning can improve the results. Thanks again!

@raviraipuria

@spencergibb Did you get a chance to look at this issue?

@spencergibb (Member)

No, I have not. I have plans to do it this week.

@peopleremote

@spencergibb
Could you let us know the progress on this issue? Thanks in advance.

@goudai commented Mar 18, 2019

@kimmking @spencergibb Hi!
I used the latest gateway (2.1.0) with Undertow as the backend server. The results below look normal to me, and there is not much loss through the gateway.

Here is the project used for testing: https://github.com/goudai/gateway-performance-test
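
The benchmarks below hit a /demo endpoint that takes delay and length query parameters. As a rough sketch of what such a backend presumably does (the real handler in goudai/gateway-performance-test may be implemented differently), it sleeps for delay milliseconds and then returns a body of length characters:

// Hypothetical sketch of the test backend's /demo endpoint (Spring MVC, here on Undertow).
// Not taken from the test project; shown only to make delay/length in the URLs concrete.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoController {

    @GetMapping("/demo")
    public String demo(@RequestParam(defaultValue = "0") long delay,
                       @RequestParam(defaultValue = "128") int length) throws InterruptedException {
        Thread.sleep(delay);                  // simulated processing time, e.g. delay=50 -> ~50ms
        StringBuilder body = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            body.append('a');                 // response payload of `length` characters
        }
        return body.toString();
    }
}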

My hardware

  • MacBook Pro (Retina, 15-inch, Mid 2015)
  • CPU 2.5 GHz Intel Core i7
  • MEM 16G
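
For context on what the "gateway" runs below measure: a minimal sketch of a route that maps /proxy/demo on the gateway (port 8082) to /demo on the backend (port 8081). The ports and the prefix strip are assumptions read off the wrk URLs, not the actual configuration of the test project:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

// Minimal Spring Cloud Gateway sketch: forward /proxy/** on :8082 to the backend on :8081,
// stripping the /proxy prefix so /proxy/demo reaches the backend as /demo.
// (server.port=8082 would be set in application.properties; assumed here.)
@SpringBootApplication
public class GatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }

    @Bean
    public RouteLocator demoRoute(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("demo", r -> r.path("/proxy/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("http://127.0.0.1:8081"))
                .build();
    }
}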

benchmark with wrk

Test response with 128 characters

# direct to app
$ wrk -t16 -c200 -d30s "http://127.0.0.1:8081/demo?delay=50&length=128" --latency
Running 30s test @ http://127.0.0.1:8081/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    56.42ms    2.89ms  81.30ms   66.45%
    Req/Sec   213.08     36.63   242.00     82.49%
  Latency Distribution
     50%   56.42ms
     75%   58.35ms
     90%   60.10ms
     99%   63.05ms
  102028 requests in 30.10s, 26.17MB read
Requests/sec:   3389.50
Transfer/sec:      0.87MB

# gateway 
$ wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=128" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    61.90ms    6.45ms 109.22ms   71.10%
    Req/Sec   194.38     28.76   242.00     68.03%
  Latency Distribution
     50%   60.47ms
     75%   65.56ms
     90%   70.90ms
     99%   80.81ms
  93040 requests in 30.10s, 21.74MB read
  Socket errors: connect 0, read 59, write 3, timeout 0
Requests/sec:   3090.91
Transfer/sec:    739.52KB

Test response with 10240 characters

# direct to app
$ wrk -t16 -c200 -d30s "http://127.0.0.1:8081/demo?delay=50&length=10240" --latency
Running 30s test @ http://127.0.0.1:8081/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    58.85ms    3.40ms  74.01ms   67.73%
    Req/Sec   204.10     42.25   242.00     78.71%
  Latency Distribution
     50%   58.75ms
     75%   61.26ms
     90%   63.17ms
     99%   66.95ms
  97722 requests in 30.09s, 0.94GB read
Requests/sec:   3247.59
Transfer/sec:     32.16MB


# gateway
$ wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=10240" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    67.86ms   13.67ms 187.01ms   83.08%
    Req/Sec   177.29     30.20   242.00     68.16%
  Latency Distribution
     50%   63.91ms
     75%   72.70ms
     90%   85.05ms
     99%  118.82ms
  84882 requests in 30.06s, 838.56MB read
  Socket errors: connect 0, read 33, write 7, timeout 0
Requests/sec:   2824.05
Transfer/sec:     27.90MB

Test response with 40960 characters

# direct to app
$ wrk -t16 -c200 -d30s "http://127.0.0.1:8081/demo?delay=50&length=40960" --latency
Running 30s test @ http://127.0.0.1:8081/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.54ms    6.02ms  91.79ms   64.74%
    Req/Sec   185.97     46.01   245.00     51.01%
  Latency Distribution
     50%   65.35ms
     75%   68.80ms
     90%   71.61ms
     99%   78.36ms
  88990 requests in 30.07s, 3.41GB read
Requests/sec:   2958.94
Transfer/sec:    115.99MB



# gateway 
$ wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=40960" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    77.63ms   15.66ms 298.06ms   70.87%
    Req/Sec   155.60     32.37   242.00     65.12%
  Latency Distribution
     50%   77.54ms
     75%   86.77ms
     90%   95.10ms
     99%  117.32ms
  74348 requests in 30.10s, 2.84GB read
  Socket errors: connect 0, read 72, write 5, timeout 0
Requests/sec:   2470.07
Transfer/sec:     96.77MB

@dave-fl commented Mar 18, 2019

@spencergibb @violetagg It seems that the gateway will leak with the example from @goudai

@nicholes-lyt

@goudai Hi. Across many test runs, the latency of spring-cloud-gateway is still unstable.

Using the test project: https://github.com/goudai/gateway-performance-test

My hardware

  • MacBook Pro (Retina, 15-inch, Mid 2015)
  • CPU 2.2 GHz Intel Core i7
  • MEM 16G

benchmark with wrk

Test response with 128 characters

#1 direct to app
wrk -t16 -c200 -d30s "http://127.0.0.1:8081/demo?delay=50&length=128" --latency
Running 30s test @ http://127.0.0.1:8081/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    54.59ms    1.59ms  78.60ms   71.85%
    Req/Sec   220.13     27.08   242.00     84.50%
  Latency Distribution
     50%   54.70ms
     75%   55.56ms
     90%   56.43ms
     99%   57.99ms
  105359 requests in 30.10s, 27.03MB read
  Socket errors: connect 0, read 37, write 0, timeout 0
Requests/sec:   3500.33
Transfer/sec:      0.90MB

#direct to app 
wrk -t16 -c200 -d5m "http://127.0.0.1:8081/demo?delay=50&length=128" --latency
Running 5m test @ http://127.0.0.1:8081/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    54.68ms    1.60ms  98.71ms   74.49%
    Req/Sec   220.05     25.81   247.00     84.74%
  Latency Distribution
     50%   54.80ms
     75%   55.61ms
     90%   56.39ms
     99%   58.11ms
  1052161 requests in 5.00m, 269.92MB read
  Socket errors: connect 0, read 38, write 0, timeout 0
Requests/sec:   3506.12
Transfer/sec:      0.90MB
#gateway
wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=128" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    74.59ms   90.22ms   1.38s    97.58%
    Req/Sec   190.68     39.24   242.00     72.58%
  Latency Distribution
     50%   58.35ms
     75%   63.88ms
     90%   78.12ms
     99%  606.97ms
  89132 requests in 30.10s, 20.83MB read
  Socket errors: connect 0, read 143, write 0, timeout 0
Requests/sec:   2961.57
Transfer/sec:    708.58KB

#gateway
wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=128" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    58.11ms    4.06ms 114.47ms   79.32%
    Req/Sec   207.03     24.73   242.00     66.33%
  Latency Distribution
     50%   57.14ms
     75%   59.73ms
     90%   63.13ms
     99%   72.11ms
  99198 requests in 30.10s, 23.18MB read
  Socket errors: connect 0, read 58, write 0, timeout 0
Requests/sec:   3295.34
Transfer/sec:    788.45KB

#gateway
wrk -t16 -c200 -d5m "http://127.0.0.1:8082/proxy/demo?delay=50&length=128" --latency
Running 5m test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    59.35ms   12.50ms 537.66ms   96.51%
    Req/Sec   203.71     28.72   242.00     77.11%
  Latency Distribution
     50%   57.19ms
     75%   60.04ms
     90%   64.25ms
     99%   95.41ms
  973145 requests in 5.00m, 227.38MB read
  Socket errors: connect 0, read 59, write 0, timeout 70
Requests/sec:   3242.90
Transfer/sec:    775.89KB

#gateway
wrk -t16 -c200 -d5m "http://127.0.0.1:8082/proxy/demo?delay=50&length=128" --latency
Running 5m test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=128
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    58.74ms   12.98ms   1.76s    97.28%
    Req/Sec   204.13     29.73   242.00     77.03%
  Latency Distribution
     50%   57.12ms
     75%   59.90ms
     90%   63.79ms
     99%   84.39ms
  974762 requests in 5.00m, 227.75MB read
  Socket errors: connect 0, read 43, write 0, timeout 156
Requests/sec:   3248.28
Transfer/sec:    777.18KB

Test response with 10240 characters

# direct to app 
wrk -t16 -c200 -d30s "http://127.0.0.1:8081/demo?delay=50&length=10240" --latency
Running 30s test @ http://127.0.0.1:8081/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    55.31ms    2.05ms 104.88ms   83.04%
    Req/Sec   217.39     25.49   242.00     83.81%
  Latency Distribution
     50%   55.15ms
     75%   56.07ms
     90%   57.17ms
     99%   60.46ms
  104115 requests in 30.08s, 1.01GB read
  Socket errors: connect 0, read 38, write 0, timeout 0
Requests/sec:   3460.73
Transfer/sec:     34.27MB

# direct to app
wrk -t16 -c200 -d5m "http://127.0.0.1:8081/demo?delay=50&length=10240" --latency
Running 5m test @ http://127.0.0.1:8081/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    55.14ms    1.91ms 119.59ms   77.50%
    Req/Sec   218.22     26.11   242.00     83.60%
  Latency Distribution
     50%   55.10ms
     75%   56.05ms
     90%   57.12ms
     99%   59.91ms
  1043460 requests in 5.00m, 10.09GB read
  Socket errors: connect 0, read 45, write 0, timeout 0
Requests/sec:   3477.11
Transfer/sec:     34.43MB

# gateway 
wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=10240" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    99.65ms  109.23ms   1.99s    93.48%
    Req/Sec   140.79     58.62   242.00     63.66%
  Latency Distribution
     50%   66.85ms
     75%   95.96ms
     90%  153.18ms
     99%  561.19ms
  66080 requests in 30.10s, 652.81MB read
  Socket errors: connect 0, read 121, write 0, timeout 85
Requests/sec:   2195.41
Transfer/sec:     21.69MB

# gateway
wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=10240" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    69.09ms   27.97ms 545.24ms   92.72%
    Req/Sec   178.36     43.57   242.00     71.72%
  Latency Distribution
     50%   60.37ms
     75%   70.52ms
     90%   88.61ms
     99%  196.79ms
  85374 requests in 30.10s, 843.42MB read
  Socket errors: connect 0, read 92, write 0, timeout 0
Requests/sec:   2836.10
Transfer/sec:     28.02MB

# gateway
wrk -t16 -c200 -d5m "http://127.0.0.1:8082/proxy/demo?delay=50&length=10240" --latency
Running 5m test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    72.68ms   49.23ms   1.09s    92.74%
    Req/Sec   176.98     56.03   242.00     76.59%
  Latency Distribution
     50%   58.62ms
     75%   64.59ms
     90%   99.53ms
     99%  303.34ms
  823758 requests in 6.21m, 7.95GB read
  Socket errors: connect 0, read 107, write 0, timeout 576
Requests/sec:   2211.20
Transfer/sec:     21.84MB

# gateway
wrk -t16 -c200 -d5m "http://127.0.0.1:8082/proxy/demo?delay=50&length=10240" --latency
Running 5m test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=10240
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    62.00ms   33.30ms 964.33ms   98.08%
    Req/Sec   201.18     31.18   242.00     80.03%
  Latency Distribution
     50%   57.53ms
     75%   60.72ms
     90%   66.01ms
     99%  136.52ms
  958755 requests in 5.00m, 9.25GB read
  Socket errors: connect 0, read 31, write 0, timeout 0
Requests/sec:   3194.86
Transfer/sec:     31.56MB

Test response with 40960 characters

# direct to app
wrk -t16 -c200 -d30s "http://127.0.0.1:8081/demo?delay=50&length=40960" --latency
Running 30s test @ http://127.0.0.1:8081/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    57.85ms    3.53ms  91.82ms   74.98%
    Req/Sec   207.57     34.03   242.00     77.41%
  Latency Distribution
     50%   57.32ms
     75%   59.85ms
     90%   62.23ms
     99%   67.67ms
  99419 requests in 30.11s, 3.81GB read
  Socket errors: connect 0, read 39, write 0, timeout 0
Requests/sec:   3302.33
Transfer/sec:    129.45MB

wrk -t16 -c200 -d5m "http://127.0.0.1:8081/demo?delay=50&length=40960" --latency
Running 5m test @ http://127.0.0.1:8081/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    58.37ms    4.10ms 179.13ms   73.23%
    Req/Sec   205.84     36.87   242.00     81.58%
  Latency Distribution
     50%   57.67ms
     75%   60.81ms
     90%   63.60ms
     99%   69.02ms
  983823 requests in 5.00m, 37.66GB read
  Socket errors: connect 0, read 48, write 0, timeout 0
Requests/sec:   3278.49
Transfer/sec:    128.51MB

# gateway
wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=40960" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    89.95ms   40.98ms 448.80ms   87.95%
    Req/Sec   136.97     47.68   242.00     69.39%
  Latency Distribution
     50%   75.90ms
     75%  101.52ms
     90%  139.18ms
     99%  249.04ms
  65195 requests in 30.05s, 2.49GB read
  Socket errors: connect 0, read 111, write 0, timeout 0
Requests/sec:   2169.79
Transfer/sec:     85.00MB 

wrk -t16 -c200 -d30s "http://127.0.0.1:8082/proxy/demo?delay=50&length=40960" --latency
Running 30s test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    68.18ms   23.31ms 410.55ms   93.27%
    Req/Sec   179.50     41.13   242.00     71.75%
  Latency Distribution
     50%   61.88ms
     75%   69.95ms
     90%   82.59ms
     99%  180.85ms
  85936 requests in 30.10s, 3.29GB read
  Socket errors: connect 0, read 40, write 0, timeout 0
Requests/sec:   2854.96
Transfer/sec:    111.85MB

wrk -t16 -c200 -d5m "http://127.0.0.1:8082/proxy/demo?delay=50&length=40960" --latency
Running 5m test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    67.48ms   20.86ms 678.85ms   93.38%
    Req/Sec   180.58     37.92   242.00     65.77%
  Latency Distribution
     50%   62.44ms
     75%   70.37ms
     90%   81.59ms
     99%  145.01ms
  862514 requests in 5.00m, 33.00GB read
  Socket errors: connect 0, read 159, write 0, timeout 0
Requests/sec:   2874.14
Transfer/sec:    112.60MB

wrk -t16 -c200 -d5m "http://127.0.0.1:8082/proxy/demo?delay=50&length=40960" --latency
Running 5m test @ http://127.0.0.1:8082/proxy/demo?delay=50&length=40960
  16 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    72.50ms   34.44ms   1.03s    94.48%
    Req/Sec   171.33     38.57   242.00     69.31%
  Latency Distribution
     50%   64.77ms
     75%   74.72ms
     90%   90.74ms
     99%  198.72ms
  815188 requests in 5.00m, 31.19GB read
  Socket errors: connect 0, read 170, write 0, timeout 0
Requests/sec:   2716.46
Transfer/sec:    106.42MB

@spencergibb (Member)

There's a fundamental flaw in proxy benchmarks where the driver, proxy and downstream service all run on the same machine competing for the same resources.

Posting different results for the same project means something is different with the environment.

@nicholes-lyt

> There's a fundamental flaw in proxy benchmarks where the driver, proxy and downstream service all run on the same machine competing for the same resources.
>
> Posting different results for the same project means something is different with the environment.

Thanks. We have run a spring-cloud-gateway load test in a specific environment, but it still shows that the 99th-percentile latency is not stable.

@dave-fl commented Mar 19, 2019

I wanted to help test this, but my environment (all pieces on different servers) adds about 8-14ms in a best-case scenario (no proxy); it actually runs faster locally. So I'm not sure how useful the test will be.

@spencergibb (Member)

@dave-fl don't worry about a few ms. The point would be the claim that the 99th percentile is an order of magnitude worse.

@dave-fl commented Mar 19, 2019

I am using a different backend service, but it seems to be stable. I get slightly better numbers locally.

I have wrk on container 1, the gateway on container 2, and the backend on container 3. Normally I use HTTPS for my tests, but I don't want to complicate things here, so I am doing pure HTTP on the cloud's network. I am including multiple gateway runs so you can see how the numbers fluctuate (JVM warm-up, load, etc.).

# Direct
Running 30s test @ http://directip/demo?length=128&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    52.73ms    4.08ms 152.35ms   94.47%
	Req/Sec   475.23     45.37   505.00     89.00%
  Latency Distribution
	 50%   51.94ms
	 75%   53.31ms
	 90%   55.31ms
	 99%   61.10ms
  113636 requests in 30.06s, 18.21MB read
Requests/sec:   3780.28
Transfer/sec:    620.20KB

# All through gateway
Running 30s test @ gateway/demo?length=128&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    72.50ms   18.15ms 227.94ms   79.77%
	Req/Sec   346.48     67.76   500.00     70.54%
  Latency Distribution
	 50%   67.26ms
	 75%   81.03ms
	 90%   96.15ms
	 99%  141.57ms
  82857 requests in 30.08s, 13.28MB read
Requests/sec:   2754.67
Transfer/sec:    451.94KB

Running 30s test @ gateway/demo?length=128&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    70.47ms   16.61ms 213.06ms   79.87%
	Req/Sec   356.24     64.79   505.00     65.75%
  Latency Distribution
	 50%   65.42ms
	 75%   77.94ms
	 90%   93.52ms
	 99%  127.38ms
  85197 requests in 30.05s, 13.65MB read
Requests/sec:   2835.26
Transfer/sec:    465.16KB

Running 30s test @ gateway/demo?length=128&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    81.67ms   26.08ms 282.29ms   80.32%
	Req/Sec   308.25     71.79   474.00     71.25%
  Latency Distribution
	 50%   74.42ms
	 75%   92.59ms
	 90%  114.95ms
	 99%  174.96ms
  73715 requests in 30.05s, 11.81MB read
Requests/sec:   2453.22
Transfer/sec:    402.48KB

Running 30s test @ gateway/demo?length=128&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    68.15ms   13.94ms 181.12ms   76.97%
	Req/Sec   368.17     63.34   505.00     66.04%
  Latency Distribution
	 50%   64.01ms
	 75%   74.99ms
	 90%   88.07ms
	 99%  109.96ms
  88047 requests in 30.07s, 14.11MB read
Requests/sec:   2927.76
Transfer/sec:    480.34KB

Running 30s test @ gateway/demo?length=128&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    67.97ms   13.38ms 160.91ms   75.23%
	Req/Sec   368.85     62.64   500.00     65.00%
  Latency Distribution
	 50%   64.18ms
	 75%   74.67ms
	 90%   87.58ms
	 99%  107.20ms
  88201 requests in 30.05s, 14.13MB read
Requests/sec:   2935.16
Transfer/sec:    481.55KB

@spencergibb (Member)

I'm going to go ahead and close this.

@dave-fl commented Mar 19, 2019

# Direct larger string
Running 30s test @ direct/demo?length=10240&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    60.99ms    4.86ms  93.78ms   71.95%
	Req/Sec   409.85     47.09   505.00     67.88%
  Latency Distribution
	 50%   60.26ms
	 75%   63.51ms
	 90%   67.15ms
	 99%   76.50ms
  98011 requests in 30.06s, 0.94GB read
Requests/sec:   3260.64
Transfer/sec:     31.97MB

# Gateway larger string
Running 30s test @ gateway/demo?length=10240&delay=50
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
	Latency    91.31ms   16.21ms 233.84ms   78.56%
	Req/Sec   274.61     44.30   414.00     70.58%
  Latency Distribution
	 50%   87.09ms
	 75%   98.25ms
	 90%  112.26ms
	 99%  149.38ms
  65672 requests in 30.05s, 643.96MB read
Requests/sec:   2185.11
Transfer/sec:     21.43MB

@dave-fl commented Mar 21, 2019

@spencergibb My testing could be flawed - not enough load. I re-tested with Reactor Netty and WebFlux, and when the connection count increased, the latency distribution started to show its true colors. More info here:

reactor/reactor-netty#654

@kimmking (Author) commented May 5, 2019

@spencergibb The reactor-netty project has a big improvement on a branch and looks like it will ship in a new release soon, so I am looking forward to a new version of the gateway.
:)
