
Implement NodeJS based server replicating osrm-routed #6411

Open

SiarheiFedartsou wants to merge 21 commits into master
Conversation

Member

@SiarheiFedartsou SiarheiFedartsou commented Oct 15, 2022

Issue

#3832

Some notes:

  • we are not removing the existing osrm-routed yet; osrm-routed-js is a separate package which depends on node-osrm
  • I chose https://www.fastify.io/ as the framework
  • some tests are disabled, mainly ones related to error handling - there are certain differences between the node-osrm and libosrm APIs used by osrm-routed, so we will probably have to slightly break backward compatibility to make them behave the same way
  • there are tons of TODOs, but I propose addressing them iteratively, i.e. merge this PR and then improve things in separate small PRs - that will be easier to review
  • the node-based server is way slower when it comes to startup time, which is why I see a certain slowdown in test execution time. I was able to address it a bit by adding an optimization for "datastore" to not kill osrm-routed after each test, but we will probably need to parallelize tests or something similar to make it more usable
  • we could probably add .js files to .gitignore, but I'd propose doing that later, because too many questions arise on how to do it properly
  • it is not yet clear how to package it on release and how to properly install it in the Docker image
  • fastify does not support Node 12, but I would consider dropping support for it anyway, since it no longer even receives security updates https://endoflife.date/nodejs
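
A core piece of such a server is translating osrm-routed-style URL paths into the options object that node-osrm expects. As a rough illustration only (the helper name and error handling below are hypothetical, not code from this PR), the coordinate segment of a request path could be parsed like this:

```javascript
// Hypothetical helper (not from the PR): parse an osrm-routed-style
// coordinate path segment "lon1,lat1;lon2,lat2;..." into the
// [[lon, lat], ...] array shape used by node-osrm query options.
function parseCoordinates(pathSegment) {
  return pathSegment.split(';').map((pair) => {
    const [lon, lat] = pair.split(',').map(Number);
    if (!Number.isFinite(lon) || !Number.isFinite(lat)) {
      throw new Error(`invalid coordinate pair: ${pair}`);
    }
    return [lon, lat];
  });
}
```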


@SiarheiFedartsou SiarheiFedartsou changed the title Implement NodeJS based server fully replicating osrm-routed Implement NodeJS based server replicating osrm-routed Oct 23, 2022
@@ -52,11 +52,11 @@ Feature: Weight tests
| abc |

When I route I should get
| waypoints | route | distances | weights | times | a:distance | a:duration | a:weight | a:speed |
Member Author

This seems to be caused by the fact that our JSON renderer truncates numbers to at most 10 digits after the decimal point:

size_t decimalpos = std::find(buffer.begin(), buffer.end(), '.') - buffer.begin();
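
For illustration, the effect of that truncation can be sketched in JS (this mirrors the idea only, not the actual C++ renderer):

```javascript
// Illustrative sketch only (the real logic lives in the C++ JSON renderer):
// keep at most `maxDigits` digits after the decimal point by cutting the
// serialized number just past the dot, as the std::find line above locates it.
function truncateDecimals(serialized, maxDigits = 10) {
  const dot = serialized.indexOf('.');
  if (dot === -1) return serialized; // no fractional part, nothing to cut
  return serialized.slice(0, dot + 1 + maxDigits);
}
```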

# we do not run `osrm-routed-js` tests on v12 since fastify doesn't support it
NODE_VERSION=$(node --version)
NODE_MAJOR_VERSION=$(echo $NODE_VERSION | cut -d. -f1)
if [[ $NODE_MAJOR_VERSION != "v12" ]]; then
Member Author

I decided not to run mmap tests here until we fully migrate to the JS-based osrm-routed, to save CI time.

@SiarheiFedartsou SiarheiFedartsou marked this pull request as ready for review October 23, 2022 15:01
@@ -0,0 +1,8 @@
"use strict";
Member Author

@SiarheiFedartsou SiarheiFedartsou Oct 23, 2022

Note for reviewers: all routed-js/*.js files are generated by the TypeScript compiler, so there is not much sense in reviewing them.

@@ -40,7 +42,7 @@ module.exports = function () {

this.OSRM_PORT = process.env.OSRM_PORT && parseInt(process.env.OSRM_PORT) || 5000;
this.OSRM_IP = process.env.OSRM_IP || '127.0.0.1';
this.OSRM_CONNECTION_RETRIES = process.env.OSRM_CONNECTION_RETRIES && parseInt(process.env.OSRM_CONNECTION_RETRIES) || 10;
this.OSRM_CONNECTION_RETRIES = process.env.OSRM_CONNECTION_RETRIES && parseInt(process.env.OSRM_CONNECTION_RETRIES) || 100;
Member Author

Had to change it since the new server has a higher startup time than the existing osrm-routed.
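
For context, a retry knob like this typically drives a poll-until-up loop. A minimal sketch of that pattern (`tryConnect` and the delay are stand-ins, not the actual test-harness code):

```javascript
// Hypothetical sketch of the connection-retry loop implied by
// OSRM_CONNECTION_RETRIES: poll a probe function until it succeeds or the
// retry budget is exhausted. Returns the attempt number that succeeded.
async function waitForServer(tryConnect, retries, delayMs = 10) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    if (await tryConnect()) return attempt; // server is reachable
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`server did not come up after ${retries} attempts`);
}
```

A slow-starting server simply consumes more attempts before the probe succeeds, hence the larger budget.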

@mjjbell
Member

mjjbell commented Oct 24, 2022

the node-based server is way slower when it comes to startup time, which is why I see a certain slowdown in test execution time. I was able to address it a bit by adding an optimization for "datastore" to not kill osrm-routed after each test, but we will probably need to parallelize tests or something similar to make it more usable

Would be good to also have a comparison of request round-trip times, especially for requests with large numbers of coordinates.

@SiarheiFedartsou
Member Author

the node-based server is way slower when it comes to startup time, which is why I see a certain slowdown in test execution time. I was able to address it a bit by adding an optimization for "datastore" to not kill osrm-routed after each test, but we will probably need to parallelize tests or something similar to make it more usable

Would be good to also have a comparison of request round-trip times, especially for requests with large numbers of coordinates.

It turned out to be a really good idea. I used ApacheBench (ab) for this.
The first finding is that keep-alive seems to be broken in osrm-routed (or ab expects something unexpected from it): each request hung until the keep-alive timeout, so I commented it out.

Results for osrm-routed with a single concurrent connection (using a simple request for now):

➜  ~ ab -n 5000 -c 1 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true" 
This is ApacheBench, Version 2.3 <$Revision: 1901567 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            5000

Document Path:          /route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true
Document Length:        5892 bytes

Concurrency Level:      1
Time taken for tests:   2.613 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      30915000 bytes
HTML transferred:       29460000 bytes
Requests per second:    1913.53 [#/sec] (mean)
Time per request:       0.523 [ms] (mean)
Time per request:       0.523 [ms] (mean, across all concurrent requests)
Transfer rate:          11554.03 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.6      0      28
Waiting:        0    0   0.6      0      28
Total:          0    1   0.6      0      29
WARNING: The median and mean for the total time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      1
  99%      1
 100%     29 (longest request)

Results for osrm-routed-js with a single concurrent connection:

➜  ~ ab -n 5000 -c 1 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
This is ApacheBench, Version 2.3 <$Revision: 1901567 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            5000

Document Path:          /route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true
Document Length:        5904 bytes

Concurrency Level:      1
Time taken for tests:   3.400 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      30240000 bytes
HTML transferred:       29520000 bytes
Requests per second:    1470.62 [#/sec] (mean)
Time per request:       0.680 [ms] (mean)
Time per request:       0.680 [ms] (mean, across all concurrent requests)
Transfer rate:          8685.84 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    1   0.5      1      21
Waiting:        0    1   0.5      1      21
Total:          1    1   0.5      1      21

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      1
  99%      1
 100%     21 (longest request)

So it is currently 1470.62 requests/sec for osrm-routed-js vs 1913.53 requests/sec for osrm-routed, which is not bad considering that the Node bindings have some overhead of their own.

But I noticed that the NodeJS server doesn't "scale" when requests are made concurrently, so I am probably using the bindings in some wrong way - will investigate.

@SiarheiFedartsou
Member Author

osrm-routed with 32 concurrent connections:

➜  ~ ab -n 5000 -c 32 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
This is ApacheBench, Version 2.3 <$Revision: 1901567 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            5000

Document Path:          /route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true
Document Length:        5892 bytes

Concurrency Level:      32
Time taken for tests:   0.427 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      30915000 bytes
HTML transferred:       29460000 bytes
Requests per second:    11697.33 [#/sec] (mean)
Time per request:       2.736 [ms] (mean)
Time per request:       0.085 [ms] (mean, across all concurrent requests)
Transfer rate:          70629.48 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     1    3   0.8      2      12
Waiting:        1    2   0.7      2      11
Total:          1    3   0.8      3      12
WARNING: The median and mean for the processing time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      3
  66%      3
  75%      3
  80%      3
  90%      3
  95%      4
  98%      5
  99%      6
 100%     12 (longest request)

osrm-routed-js with 32 concurrent connections:

➜  ~ ab -n 5000 -c 32 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
This is ApacheBench, Version 2.3 <$Revision: 1901567 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            5000

Document Path:          /route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true
Document Length:        5904 bytes

Concurrency Level:      32
Time taken for tests:   1.113 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      30240000 bytes
HTML transferred:       29520000 bytes
Requests per second:    4491.96 [#/sec] (mean)
Time per request:       7.124 [ms] (mean)
Time per request:       0.223 [ms] (mean, across all concurrent requests)
Transfer rate:          26530.61 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     2    7   1.9      7      29
Waiting:        1    5   1.8      5      26
Total:          2    7   1.9      7      29

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      7
  75%      7
  80%      8
  90%      8
  95%      9
  98%     10
  99%     12
 100%     29 (longest request)

i.e. 11697.33 req/s for osrm-routed vs 4491.96 req/s for osrm-routed-js.

@SiarheiFedartsou
Member Author

I am seeing issues running "large" requests with ab; I will provide those results later.

But in general, @mjjbell, do you think it is a problem if we end up with a certain slowdown here?

@SiarheiFedartsou
Member Author

For large requests I used h2load (https://nghttp2.org/documentation/h2load-howto.html) with HTTP/1.1.
Here the result is less clear-cut: 336.09 req/s for osrm-routed vs 404.66 req/s for osrm-routed-js.

osrm-routed

finished in 297.54s, 336.09 req/s, 48.69KB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 14.15MB (14836307) total, 23.27MB (24400000) headers (space savings 0.00%), 134.18MB (140700000) data
                     min         max         mean         sd        +/- sd
time for request:    10.34ms    207.39ms     39.95ms     17.49ms    71.11%
time for connect:       31us      2.74ms       222us       466us    96.88%
time to 1st byte:    10.94ms     84.89ms     27.46ms     17.19ms    84.38%
req/s           :      10.50       12.79       11.04        0.69    81.25%

osrm-routed-js

finished in 247.12s, 404.66 req/s, 624.38KB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 150.68MB (158000000) total, 12.78MB (13400000) headers (space savings 0.00%), 134.18MB (140700000) data
                     min         max         mean         sd        +/- sd
time for request:    33.72ms    128.50ms     79.06ms      2.05ms    89.65%
time for connect:      951us      1.39ms      1.18ms       108us    62.50%
time to 1st byte:    39.17ms     95.61ms     61.74ms     19.24ms    50.00%
req/s           :      12.65       12.65       12.65        0.00    59.38%

Benchmark:

h2load --h1 -n100000 -c32 -t8 "http://127.0.0.1:5000/table/v1/driving/10.90761,47.54966;11.93966,47.81606;11.08894,48.62424;11.62135,47.48357;11.5487,47.49729;11.45756,48.03065;11.23334,47.89373;10.88259,47.54795;11.4702,49.52285;11.62638,48.46008;11.98631,49.03973;11.92384,48.65348;11.52325,47.88653;11.56097,48.51048;11.08578,48.77129;11.96313,49.79663;11.7655,49.68107;10.74547,48.18696;11.35802,47.92084;11.31547,47.75373;11.29578,47.63577;11.22657,48.50509;11.31266,48.20597;11.11292,49.30903;10.91056,48.85395;11.01956,47.91374;10.63385,47.91534;11.37957,48.89772;11.23514,47.60342;11.00552,49.66307;11.09602,49.06538;10.58208,49.7129;11.08048,47.95764;10.66334,49.54911;11.99811,47.51836;11.06729,48.93516;11.51662,47.72612;11.3375,49.24294;11.90311,48.68334;11.92122,48.35409;10.87577,49.49948;11.21114,49.42086;10.87195,47.84828;10.98919,49.75728;10.6987,49.77116;11.97602,49.18479;11.94726,48.55827;10.87772,48.65176;10.52576,49.85704;10.6882,47.7531;11.05737,48.99173;11.85929,48.19158;10.84597,49.55252;11.32428,49.21267;11.3375,48.10871;11.01082,48.01289;10.7295,47.94004;11.23957,47.46262;11.80514,48.69129;11.33402,48.81084;11.42753,49.18682;10.86449,48.93515;11.28717,47.7437;11.25911,47.73088;11.18528,48.51873;11.47802,48.44734;11.84338,49.19166;11.23232,49.73384;11.73743,49.92189;10.81741,48.85105;11.96345,49.52129;11.19351,48.71871;11.64791,47.78599;11.56445,48.27622;10.86725,47.86015;10.9428,48.51327;10.84549,48.9471;11.09816,49.28497;11.33345,47.56826;11.06624,47.50943;10.83086,49.66004;10.99761,48.44961;11.1311,49.73714;11.22288,47.52466;11.57479,49.61386;10.55757,47.50472;11.35294,49.75207;11.62176,49.48058;10.69253,49.30151;11.74453,49.66988;10.73915,49.06721;10.66147,47.60705;11.96606,47.62438;11.90213,47.5219;11.23404,47.52766;10.7231,47.87407;11.84783,48.3516;11.84069,48.9192;11.58223,48.65717;10.61954,49.0955;11.88673,49.56121;11.07369,48.32443;11.96949,48.52898;11.30511,48.88208;11.87258,47.91028;11.73445,47.52629;11.63164,49.20231;10.793,49.60652;11.2418
5,49.92677;10.98617,49.14844;11.01802,49.40513;11.05101,48.06117;11.31047,47.69413;11.07766,49.93023;10.65685,49.11901;10.60773,49.41996;10.95122,48.79153;11.4117,49.4951;11.69883,49.37566;11.82451,49.80235;11.89671,48.40329;11.11972,48.73538;10.57045,49.41698;10.74612,49.68198;11.7765,48.09569;11.04161,49.8671;10.95917,49.12747;10.91793,47.97454;10.54507,48.82384;10.57876,48.75478;10.76263,49.23102;11.33573,47.86198;11.17414,47.88678;11.65555,49.99393;10.77196,49.83992;11.31043,48.56057;10.94094,49.59839;11.92389,49.34819;11.48685,47.73434;11.46912,48.12031;11.34077,49.13553;11.00964,48.8761;11.44085,47.5319;10.52021,49.26626;11.9019,47.65143;11.51561,49.35583;10.52268,48.85212;10.86347,48.31117;11.25467,49.7146;10.52205,48.27031;11.60986,49.48577;11.68148,49.72743;11.75741,49.81563;11.05122,49.80071;11.94883,49.63083;11.02932,48.14442;11.78529,47.96527;11.23927,49.0946;11.18084,48.08152;10.53525,48.98016;10.80783,47.72982;10.81771,47.8415;11.88089,48.06763;10.75608,49.00451;11.09311,48.08249;10.70716,49.44809;10.78888,48.51296;10.95841,49.54797;10.88098,47.45088;10.98406,49.38162;11.51657,48.7231;11.15303,49.65487;10.72106,47.63585;11.87195,48.92029;11.96055,48.30037;11.86625,48.04542;10.74646,48.1694;11.29419,49.29692;11.77349,48.38297;11.61929,49.81509;10.50075,48.2405;11.60097,47.89965;11.68649,48.05119;11.81297,49.76957;10.95531,47.60711;11.99713,48.16887;11.93346,48.57039;11.47905,49.24811;11.10927,49.67728;10.6618,48.28644;11.54246,47.59938;11.51245,49.08808;10.61251,47.46968;10.85949,48.89206;11.05734,47.81972;11.46352,47.90034;11.86184,49.53277;10.95108,49.1559;10.90376,47.7312;11.13149,48.87966;11.84491,49.03545;11.33474,48.39079;10.84499,48.69141;10.67208,49.11536;11.1539,49.54286;10.64563,48.82016;11.53696,48.23053;11.98903,49.80495;10.72386,47.45824;11.23699,48.72638;11.63742,49.29705;11.5438,49.42475;10.78755,47.52848;11.7802,47.56707;11.4288,49.54467;10.74564,49.96728;11.20549,49.71876;11.65907,48.62485;11.39492,49.18665;11.09157,48.06504;10.99312,49
.61296;11.89019,48.55865;10.8119,48.04861;11.36204,48.5806;10.91501,49.12398;11.17399,48.48726;11.04085,47.92358;11.06903,48.0575;10.73976,48.30711;11.93584,49.70889;10.5968,47.72353;11.52694,49.55474;10.81782,49.40075;11.84217,48.03225;11.22768,47.98959;10.6082,49.22826;11.46862,49.02127;11.33829,48.41548;11.71223,49.19104;11.41975,48.94869;11.61803,48.96822;10.50347,47.48205;11.60651,49.06259;11.35187,47.93713;10.76661,49.42307;11.86695,49.74779;10.83986,48.57656;10.551,48.90838;11.76523,47.57251;10.55009,49.54098;11.88052,49.85604;11.83511,48.13701;11.22141,49.58347;11.56253,49.80543;11.78782,49.46533;11.57083,49.1185;10.76972,47.67795;10.88535,48.09409;11.93944,48.10515;11.67993,49.27536;11.5906,48.76538;10.63163,48.23638;11.88353,48.85089;11.7295,49.08962;10.63416,49.89273;10.72741,49.7831;11.01236,49.67462;11.04184,49.25215;11.95828,47.93858;11.22746,49.47516;11.97998,49.69944;11.75393,48.84457;11.7618,47.4792;11.51812,48.29168;11.71144,47.66958;11.6494,47.89514;11.84211,49.2127;11.34524,48.6433;10.77714,47.89613;11.40105,47.87601;11.60605,47.89686;10.88092,48.80947;11.97652,49.47796;11.33343,48.80368;11.79732,48.53003;11.55873,49.8888;11.05606,47.59527;11.93572,48.82335;11.65693,49.40953;11.78328,48.46995;11.39285,49.37936;10.84134,47.63308;11.95604,48.11911;11.18343,47.93176;11.95007,48.46438;11.50128,49.25897;11.70257,48.32708;11.79472,49.3613;11.96752,47.99815;11.69199,49.0501;11.12854,49.49936;10.78246,49.26896;11.60707,48.32754;10.65964,49.35687;11.44447,48.32026;11.60625,49.76655;10.74575,49.28163;10.89368,47.58725;11.90005,48.35195;11.95757,48.26339;11.89691,49.36058;10.50237,47.45156;11.29258,47.56293;11.26933,48.4134;11.25534,49.5038;10.92052,49.53341;11.63137,49.01404;11.13264,49.12652;11.30996,48.12275;11.3125,49.31107;11.18353,48.68894;10.52496,48.60723;11.5545,48.01744;11.28575,49.18151;10.60623,47.51181;11.82353,48.54418;10.51634,48.6607;10.54879,48.65214;11.90644,49.8917;11.32017,49.96297;11.01123,49.2065;10.50854,49.59417;10.95013,47.6372;10.7
3682,49.61629;11.64592,49.94222;11.45368,48.44183;10.64348,48.73343;11.57665,48.77333;11.66549,48.91217;10.71938,47.49757;11.12204,47.53107;10.91103,49.56696;11.95834,47.61315;10.61409,49.36993;11.10758,48.16737;11.19082,49.90288;11.44083,48.99006;11.13477,48.51637;10.95816,48.25012;10.55454,49.99526;10.76371,49.07832;11.60491,48.7684;11.14329,47.5639;11.76087,48.28423;11.06633,48.71908;10.63479,49.07467;11.11831,47.63672;10.51461,49.97606;11.79856,47.51325;11.60394,47.54465;11.78364,49.20929;10.99021,47.92171;10.54795,49.69804;11.9299,49.29828;10.74526,49.44884;10.79157,48.52783;10.73713,49.77397;11.61066,48.41798;11.24592,49.46019;11.33682,49.80602;11.18163,49.94106;10.99738,49.86443;11.29373,47.8579;10.71569,47.74042;10.66168,48.85985;10.88105,49.04741;11.95581,49.23316;11.15938,49.25041;11.75631,47.92095;11.66879,49.83568;11.19202,47.87232;11.26559,49.71521;11.28089,49.76687;10.68274,47.54061;10.6705,48.7493;11.11543,49.90564;11.31847,47.9317;10.98724,49.44027;10.58021,49.00669;10.65988,47.61622;11.74486,48.46057;10.53387,48.56251;11.87333,48.81495;11.86938,48.84568;11.17144,49.90105;11.96628,48.80487;11.3636,47.6345;10.80856,48.00151;11.95887,48.42017;10.84812,49.93127;11.57792,49.40493;11.24702,48.58841;10.53697,49.99427;11.55126,47.45923;11.76386,49.58956;10.72076,48.66803;11.69896,49.06466;10.87897,48.73902;11.7625,48.90216;11.41714,49.13881;10.86463,47.94817;10.84486,49.10723;11.32144,48.12671;11.72769,47.69392;10.95778,48.65459;10.7602,48.44322;11.32139,49.3135;10.75378,47.71793;11.75266,49.04782;10.64936,49.16888;11.02814,47.82052;11.67243,49.14524;10.50856,48.8563;11.35956,48.54955;10.51606,49.55741;11.73378,48.37489;11.71308,49.39755;11.36699,47.45139;10.53954,47.96051;10.8803,49.53059;11.97025,49.07966;10.71181,48.53344;11.48929,47.88551;10.78747,49.25704;11.10914,49.89777;11.66374,48.79059;11.69725,47.89964;10.65969,49.30469;11.01374,48.55548;11.85932,47.67318;11.77681,48.82834;11.91272,48.76547;11.57922,49.2744;10.52352,47.58819;11.44351,49.98144;10.
8438,48.29292;11.0613,49.23531;11.75374,47.77367;10.68599,48.88375;10.8381,48.89306;11.78983,48.57816;10.86193,47.86047;11.57646,48.46316;11.6558,48.6255;11.31754,48.56296;10.69917,47.52159;11.48864,47.93436;11.14443,49.34194;10.66421,48.15455;11.69012,49.71597;10.92805,49.39795;10.98026,48.62985;10.63252,49.95795;11.37468,48.32383;11.89213,49.41892;11.39476,48.36131;10.61696,47.97164;11.50761,48.21042;11.31284,48.29539;10.87814,48.67741;10.59776,49.1498;11.17827,48.8733;11.6018,49.45914;10.73621,49.5317;11.38794,49.0252;11.40146,49.55482;10.57116,48.86615;11.89119,48.7372;11.33492,47.66424;11.61472,49.9513;11.63065,48.67025;11.15378,48.6575;11.31485,47.94025;10.55891,47.48489;11.80567,49.04565;10.74453,49.99566;10.74512,47.61681;10.91615,48.93038;11.03106,49.41157;11.66623,48.85904;10.63791,47.50966;11.38506,47.85158;11.82981,48.99957;11.20183,48.28544;11.53728,49.93828;10.84794,49.48396;11.99693,48.55948;11.37453,48.06193;10.79992,48.12021;11.85343,48.03751?generate_hints=false&sources=0&destinations=all&skip_waypoints=true"

@mjjbell
Member

mjjbell commented Oct 25, 2022

The first finding is that keep-alive seems to be broken in osrm-routed (or ab expects something unexpected from it): each request hung until the keep-alive timeout, so I commented it out.

It was working with curl when I looked into #6112; it might just be some ab settings that need tweaking.

But in general, @mjjbell, do you think it is a problem if we end up with a certain slowdown here?

That does seem like quite a significant slowdown in the multithreaded test, especially in the last few percentiles. It would be interesting to know where in the call stack it's happening.

Was there a reason to switch between benchmarking tools? Do you get the same results using h2load on the smaller requests, and vice-versa?

@SiarheiFedartsou
Member Author

Was there a reason to switch between benchmarking tools?

yes, ab was just hanging on the large request - not sure why, but I found it to be quite a buggy tool…

As for the other questions, I will get back to them a bit later - I only had ~1h today and only managed this :)

@mjjbell
Member

mjjbell commented Oct 25, 2022

Not sure how Fastify works, but you might want to double-check that it's a proper like-for-like comparison. By default, routed will use all the cores available to it.

const auto hardware_threads = std::max<int>(1, std::thread::hardware_concurrency());

@SiarheiFedartsou
Member Author

SiarheiFedartsou commented Oct 26, 2022

Not sure how Fastify works, but you might want to double-check that it's a proper like-for-like comparison. By default, routed will use all the cores available to it.

I applied the approach explained here and it seems to help (though I am still not sure I have done everything I could to improve things here).

h2load seems to work properly when osrm-routed has keep-alive enabled, so everything here runs on top of osrm-routed from master.

As you can see, for whatever reason ~50% of all requests to osrm-routed failed (most likely at the network level, since there are no HTTP status codes for them), so it is hard to draw proper conclusions here. I'll try to understand why, but it is one more piece of evidence that the existing server does not hold up well under load...

Simple route request

Benchmark

h2load --h1 -n100000 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"

osrm-routed

starting benchmark...
spawning thread #0: 100 total client(s). 100000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done

finished in 4.02s, 12773.20 req/s, 75.77MB/s
requests: 100000 total, 51400 started, 51300 done, 51300 succeeded, 48700 failed, 48700 errored, 0 timeout
status codes: 51300 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 304.30MB (319086000) total, 13.55MB (14210100) headers (space savings 0.00%), 288.26MB (302259600) data
                     min         max         mean         sd        +/- sd
time for request:      331us     95.28ms      7.75ms      2.60ms    91.16%
time for connect:     1.75ms      4.66ms      3.94ms       895us    83.00%
time to 1st byte:     8.18ms     99.36ms     39.93ms     28.02ms    60.00%
req/s           :     127.80      132.06      128.83        0.87    71.00%

osrm-routed-js with a single worker

starting benchmark...
spawning thread #0: 100 total client(s). 100000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 20.68s, 4836.66 req/s, 28.19MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 582.89MB (611200000) total, 14.59MB (15300000) headers (space savings 0.00%), 564.19MB (591600000) data
                     min         max         mean         sd        +/- sd
time for request:      577us     43.75ms     20.64ms      2.68ms    82.35%
time for connect:     1.78ms      4.27ms      3.65ms       820us    80.00%
time to 1st byte:    15.57ms     47.92ms     35.17ms     10.12ms    62.00%
req/s           :      48.37       48.67       48.44        0.06    77.00%

osrm-routed-js with multiple workers

starting benchmark...
spawning thread #0: 100 total client(s). 100000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 10.28s, 9730.97 req/s, 56.72MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 582.89MB (611200000) total, 14.59MB (15300000) headers (space savings 0.00%), 564.19MB (591600000) data
                     min         max         mean         sd        +/- sd
time for request:      551us    152.37ms      9.72ms     16.61ms    94.40%
time for connect:     1.92ms      5.48ms      4.62ms       984us    84.00%
time to 1st byte:    15.13ms     55.42ms     34.23ms     11.94ms    60.00%
req/s           :      97.32      115.96      103.08        5.16    72.00%

"Large" request

Benchmark

h2load --h1 -n100000 -c100 "http://127.0.0.1:5000/table/v1/driving/10.90761,47.54966;11.93966,47.81606;11.08894,48.62424;11.62135,47.48357;11.5487,47.49729;11.45756,48.03065;11.23334,47.89373;10.88259,47.54795;11.4702,49.52285;11.62638,48.46008;11.98631,49.03973;11.92384,48.65348;11.52325,47.88653;11.56097,48.51048;11.08578,48.77129;11.96313,49.79663;11.7655,49.68107;10.74547,48.18696;11.35802,47.92084;11.31547,47.75373;11.29578,47.63577;11.22657,48.50509;11.31266,48.20597;11.11292,49.30903;10.91056,48.85395;11.01956,47.91374;10.63385,47.91534;11.37957,48.89772;11.23514,47.60342;11.00552,49.66307;11.09602,49.06538;10.58208,49.7129;11.08048,47.95764;10.66334,49.54911;11.99811,47.51836;11.06729,48.93516;11.51662,47.72612;11.3375,49.24294;11.90311,48.68334;11.92122,48.35409;10.87577,49.49948;11.21114,49.42086;10.87195,47.84828;10.98919,49.75728;10.6987,49.77116;11.97602,49.18479;11.94726,48.55827;10.87772,48.65176;10.52576,49.85704;10.6882,47.7531;11.05737,48.99173;11.85929,48.19158;10.84597,49.55252;11.32428,49.21267;11.3375,48.10871;11.01082,48.01289;10.7295,47.94004;11.23957,47.46262;11.80514,48.69129;11.33402,48.81084;11.42753,49.18682;10.86449,48.93515;11.28717,47.7437;11.25911,47.73088;11.18528,48.51873;11.47802,48.44734;11.84338,49.19166;11.23232,49.73384;11.73743,49.92189;10.81741,48.85105;11.96345,49.52129;11.19351,48.71871;11.64791,47.78599;11.56445,48.27622;10.86725,47.86015;10.9428,48.51327;10.84549,48.9471;11.09816,49.28497;11.33345,47.56826;11.06624,47.50943;10.83086,49.66004;10.99761,48.44961;11.1311,49.73714;11.22288,47.52466;11.57479,49.61386;10.55757,47.50472;11.35294,49.75207;11.62176,49.48058;10.69253,49.30151;11.74453,49.66988;10.73915,49.06721;10.66147,47.60705;11.96606,47.62438;11.90213,47.5219;11.23404,47.52766;10.7231,47.87407;11.84783,48.3516;11.84069,48.9192;11.58223,48.65717;10.61954,49.0955;11.88673,49.56121;11.07369,48.32443;11.96949,48.52898;11.30511,48.88208;11.87258,47.91028;11.73445,47.52629;11.63164,49.20231;10.793,49.60652;11.24185,4
9.92677;10.98617,49.14844;11.01802,49.40513;11.05101,48.06117;11.31047,47.69413;11.07766,49.93023;10.65685,49.11901;10.60773,49.41996;10.95122,48.79153;11.4117,49.4951;11.69883,49.37566;11.82451,49.80235;11.89671,48.40329;11.11972,48.73538;10.57045,49.41698;10.74612,49.68198;11.7765,48.09569;11.04161,49.8671;10.95917,49.12747;10.91793,47.97454;10.54507,48.82384;10.57876,48.75478;10.76263,49.23102;11.33573,47.86198;11.17414,47.88678;11.65555,49.99393;10.77196,49.83992;11.31043,48.56057;10.94094,49.59839;11.92389,49.34819;11.48685,47.73434;11.46912,48.12031;11.34077,49.13553;11.00964,48.8761;11.44085,47.5319;10.52021,49.26626;11.9019,47.65143;11.51561,49.35583;10.52268,48.85212;10.86347,48.31117;11.25467,49.7146;10.52205,48.27031;11.60986,49.48577;11.68148,49.72743;11.75741,49.81563;11.05122,49.80071;11.94883,49.63083;11.02932,48.14442;11.78529,47.96527;11.23927,49.0946;11.18084,48.08152;10.53525,48.98016;10.80783,47.72982;10.81771,47.8415;11.88089,48.06763;10.75608,49.00451;11.09311,48.08249;10.70716,49.44809;10.78888,48.51296;10.95841,49.54797;10.88098,47.45088;10.98406,49.38162;11.51657,48.7231;11.15303,49.65487;10.72106,47.63585;11.87195,48.92029;11.96055,48.30037;11.86625,48.04542;10.74646,48.1694;11.29419,49.29692;11.77349,48.38297;11.61929,49.81509;10.50075,48.2405;11.60097,47.89965;11.68649,48.05119;11.81297,49.76957;10.95531,47.60711;11.99713,48.16887;11.93346,48.57039;11.47905,49.24811;11.10927,49.67728;10.6618,48.28644;11.54246,47.59938;11.51245,49.08808;10.61251,47.46968;10.85949,48.89206;11.05734,47.81972;11.46352,47.90034;11.86184,49.53277;10.95108,49.1559;10.90376,47.7312;11.13149,48.87966;11.84491,49.03545;11.33474,48.39079;10.84499,48.69141;10.67208,49.11536;11.1539,49.54286;10.64563,48.82016;11.53696,48.23053;11.98903,49.80495;10.72386,47.45824;11.23699,48.72638;11.63742,49.29705;11.5438,49.42475;10.78755,47.52848;11.7802,47.56707;11.4288,49.54467;10.74564,49.96728;11.20549,49.71876;11.65907,48.62485;11.39492,49.18665;11.09157,48.06504;10.99312,49.61
296;11.89019,48.55865;10.8119,48.04861;11.36204,48.5806;10.91501,49.12398;11.17399,48.48726;11.04085,47.92358;11.06903,48.0575;10.73976,48.30711;11.93584,49.70889;10.5968,47.72353;11.52694,49.55474;10.81782,49.40075;11.84217,48.03225;11.22768,47.98959;10.6082,49.22826;11.46862,49.02127;11.33829,48.41548;11.71223,49.19104;11.41975,48.94869;11.61803,48.96822;10.50347,47.48205;11.60651,49.06259;11.35187,47.93713;10.76661,49.42307;11.86695,49.74779;10.83986,48.57656;10.551,48.90838;11.76523,47.57251;10.55009,49.54098;11.88052,49.85604;11.83511,48.13701;11.22141,49.58347;11.56253,49.80543;11.78782,49.46533;11.57083,49.1185;10.76972,47.67795;10.88535,48.09409;11.93944,48.10515;11.67993,49.27536;11.5906,48.76538;10.63163,48.23638;11.88353,48.85089;11.7295,49.08962;10.63416,49.89273;10.72741,49.7831;11.01236,49.67462;11.04184,49.25215;11.95828,47.93858;11.22746,49.47516;11.97998,49.69944;11.75393,48.84457;11.7618,47.4792;11.51812,48.29168;11.71144,47.66958;11.6494,47.89514;11.84211,49.2127;11.34524,48.6433;10.77714,47.89613;11.40105,47.87601;11.60605,47.89686;10.88092,48.80947;11.97652,49.47796;11.33343,48.80368;11.79732,48.53003;11.55873,49.8888;11.05606,47.59527;11.93572,48.82335;11.65693,49.40953;11.78328,48.46995;11.39285,49.37936;10.84134,47.63308;11.95604,48.11911;11.18343,47.93176;11.95007,48.46438;11.50128,49.25897;11.70257,48.32708;11.79472,49.3613;11.96752,47.99815;11.69199,49.0501;11.12854,49.49936;10.78246,49.26896;11.60707,48.32754;10.65964,49.35687;11.44447,48.32026;11.60625,49.76655;10.74575,49.28163;10.89368,47.58725;11.90005,48.35195;11.95757,48.26339;11.89691,49.36058;10.50237,47.45156;11.29258,47.56293;11.26933,48.4134;11.25534,49.5038;10.92052,49.53341;11.63137,49.01404;11.13264,49.12652;11.30996,48.12275;11.3125,49.31107;11.18353,48.68894;10.52496,48.60723;11.5545,48.01744;11.28575,49.18151;10.60623,47.51181;11.82353,48.54418;10.51634,48.6607;10.54879,48.65214;11.90644,49.8917;11.32017,49.96297;11.01123,49.2065;10.50854,49.59417;10.95013,47.6372;10.7368
2,49.61629;11.64592,49.94222;11.45368,48.44183;10.64348,48.73343;11.57665,48.77333;11.66549,48.91217;10.71938,47.49757;11.12204,47.53107;10.91103,49.56696;11.95834,47.61315;10.61409,49.36993;11.10758,48.16737;11.19082,49.90288;11.44083,48.99006;11.13477,48.51637;10.95816,48.25012;10.55454,49.99526;10.76371,49.07832;11.60491,48.7684;11.14329,47.5639;11.76087,48.28423;11.06633,48.71908;10.63479,49.07467;11.11831,47.63672;10.51461,49.97606;11.79856,47.51325;11.60394,47.54465;11.78364,49.20929;10.99021,47.92171;10.54795,49.69804;11.9299,49.29828;10.74526,49.44884;10.79157,48.52783;10.73713,49.77397;11.61066,48.41798;11.24592,49.46019;11.33682,49.80602;11.18163,49.94106;10.99738,49.86443;11.29373,47.8579;10.71569,47.74042;10.66168,48.85985;10.88105,49.04741;11.95581,49.23316;11.15938,49.25041;11.75631,47.92095;11.66879,49.83568;11.19202,47.87232;11.26559,49.71521;11.28089,49.76687;10.68274,47.54061;10.6705,48.7493;11.11543,49.90564;11.31847,47.9317;10.98724,49.44027;10.58021,49.00669;10.65988,47.61622;11.74486,48.46057;10.53387,48.56251;11.87333,48.81495;11.86938,48.84568;11.17144,49.90105;11.96628,48.80487;11.3636,47.6345;10.80856,48.00151;11.95887,48.42017;10.84812,49.93127;11.57792,49.40493;11.24702,48.58841;10.53697,49.99427;11.55126,47.45923;11.76386,49.58956;10.72076,48.66803;11.69896,49.06466;10.87897,48.73902;11.7625,48.90216;11.41714,49.13881;10.86463,47.94817;10.84486,49.10723;11.32144,48.12671;11.72769,47.69392;10.95778,48.65459;10.7602,48.44322;11.32139,49.3135;10.75378,47.71793;11.75266,49.04782;10.64936,49.16888;11.02814,47.82052;11.67243,49.14524;10.50856,48.8563;11.35956,48.54955;10.51606,49.55741;11.73378,48.37489;11.71308,49.39755;11.36699,47.45139;10.53954,47.96051;10.8803,49.53059;11.97025,49.07966;10.71181,48.53344;11.48929,47.88551;10.78747,49.25704;11.10914,49.89777;11.66374,48.79059;11.69725,47.89964;10.65969,49.30469;11.01374,48.55548;11.85932,47.67318;11.77681,48.82834;11.91272,48.76547;11.57922,49.2744;10.52352,47.58819;11.44351,49.98144;10.843
8,48.29292;11.0613,49.23531;11.75374,47.77367;10.68599,48.88375;10.8381,48.89306;11.78983,48.57816;10.86193,47.86047;11.57646,48.46316;11.6558,48.6255;11.31754,48.56296;10.69917,47.52159;11.48864,47.93436;11.14443,49.34194;10.66421,48.15455;11.69012,49.71597;10.92805,49.39795;10.98026,48.62985;10.63252,49.95795;11.37468,48.32383;11.89213,49.41892;11.39476,48.36131;10.61696,47.97164;11.50761,48.21042;11.31284,48.29539;10.87814,48.67741;10.59776,49.1498;11.17827,48.8733;11.6018,49.45914;10.73621,49.5317;11.38794,49.0252;11.40146,49.55482;10.57116,48.86615;11.89119,48.7372;11.33492,47.66424;11.61472,49.9513;11.63065,48.67025;11.15378,48.6575;11.31485,47.94025;10.55891,47.48489;11.80567,49.04565;10.74453,49.99566;10.74512,47.61681;10.91615,48.93038;11.03106,49.41157;11.66623,48.85904;10.63791,47.50966;11.38506,47.85158;11.82981,48.99957;11.20183,48.28544;11.53728,49.93828;10.84794,49.48396;11.99693,48.55948;11.37453,48.06193;10.79992,48.12021;11.85343,48.03751?generate_hints=false&sources=0&destinations=all&skip_waypoints=true"

osrm-routed

starting benchmark...
spawning thread #0: 100 total client(s). 100000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done

finished in 91.96s, 557.83 req/s, 945.16KB/s
requests: 100000 total, 51400 started, 51300 done, 51300 succeeded, 48700 failed, 48700 errored, 0 timeout
status codes: 51300 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 84.88MB (89005500) total, 13.55MB (14210100) headers (space savings 0.00%), 68.84MB (72179100) data
                     min         max         mean         sd        +/- sd
time for request:    10.19ms       1.59s    178.28ms     33.31ms    93.39%
time for connect:     1.54ms      3.02ms      2.34ms       400us    64.00%
time to 1st byte:    25.38ms       1.59s    529.21ms    476.12ms    66.00%
req/s           :       5.58        5.65        5.61        0.02    58.00%

osrm-routed-js with single worker

starting benchmark...
spawning thread #0: 100 total client(s). 100000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 162.75s, 614.44 req/s, 961.86KB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 152.87MB (160300000) total, 14.59MB (15300000) headers (space savings 0.00%), 134.18MB (140700000) data
                     min         max         mean         sd        +/- sd
time for request:    10.52ms    704.38ms    151.43ms     87.54ms    68.33%
time for connect:     1.83ms      9.99ms      6.81ms      3.28ms    70.00%
time to 1st byte:    59.52ms    577.92ms    226.43ms    117.34ms    66.00%
req/s           :       6.14        7.95        6.64        0.51    77.00%

osrm-routed-js with multiple workers

starting benchmark...
spawning thread #0: 100 total client(s). 100000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 154.99s, 645.21 req/s, 1010.03KB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 152.87MB (160300000) total, 14.59MB (15300000) headers (space savings 0.00%), 134.18MB (140700000) data
                     min         max         mean         sd        +/- sd
time for request:    11.93ms    372.85ms    154.69ms     37.76ms    73.76%
time for connect:     1.55ms      2.88ms      2.54ms       396us    81.00%
time to 1st byte:   144.10ms    241.59ms    158.07ms     23.55ms    93.00%
req/s           :       6.45        6.50        6.46        0.01    69.00%

@SiarheiFedartsou
Member Author

Well, for whatever reason osrm-routed cannot process more than exactly 51300 requests from h2load:

➜  ~ h2load --h1 -n51342 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
starting benchmark...
spawning thread #0: 100 total client(s). 51342 total requests
Application protocol: http/1.1
progress: 9% done
progress: 19% done
progress: 29% done
progress: 39% done
progress: 49% done
progress: 59% done
progress: 69% done
progress: 79% done
progress: 89% done

finished in 4.57s, 11235.34 req/s, 66.65MB/s
requests: 51342 total, 51342 started, 51300 done, 51300 succeeded, 42 failed, 42 errored, 0 timeout
status codes: 51300 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 304.30MB (319086000) total, 13.55MB (14210100) headers (space savings 0.00%), 288.26MB (302259600) data
                     min         max         mean         sd        +/- sd
time for request:      332us     96.41ms      8.82ms      2.97ms    90.07%
time for connect:     1.52ms      3.83ms      2.83ms       661us    63.00%
time to 1st byte:     5.05ms     99.30ms     37.19ms     29.69ms    64.00%
req/s           :     112.40      115.75      113.27        0.74    72.00%

So, repeating the experiment with a simple route request, but with this benchmark command:

h2load --h1 -n50000 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"

osrm-routed

starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 4.10s, 12184.42 req/s, 72.28MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 296.59MB (311000000) total, 13.21MB (13850000) headers (space savings 0.00%), 280.95MB (294600000) data
                     min         max         mean         sd        +/- sd
time for request:      367us     97.99ms      8.13ms      2.72ms    90.37%
time for connect:     1.59ms      3.46ms      2.72ms       533us    66.00%
time to 1st byte:     4.94ms    100.64ms     37.09ms     29.48ms    65.00%
req/s           :     121.88      126.11      122.99        1.00    69.00%

osrm-routed-js with multiple workers

starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 5.25s, 9525.13 req/s, 55.52MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 291.44MB (305600000) total, 7.30MB (7650000) headers (space savings 0.00%), 282.10MB (295800000) data
                     min         max         mean         sd        +/- sd
time for request:      590us    153.72ms      8.84ms     12.93ms    94.71%
time for connect:     1.59ms      3.27ms      2.58ms       460us    65.00%
time to 1st byte:     6.88ms     48.23ms     19.53ms     10.75ms    63.00%
req/s           :      95.28      188.74      121.78       37.68    74.00%

The results are not much different, but a bit more reliable, since we do not have failing requests on osrm-routed :)

@SiarheiFedartsou
Member Author

I commented out the call to OSRM in the JS server and started returning an empty object instead, and got 73919 req/s with multiple workers:

➜  ~ h2load --h1 -n50000 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 676.41ms, 73919.99 req/s, 14.59MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 9.87MB (10350000) total, 7.20MB (7550000) headers (space savings 0.00%), 634.77KB (650000) data
                     min         max         mean         sd        +/- sd
time for request:       43us     44.96ms      1.24ms      1.15ms    90.67%
time for connect:     1.57ms      4.67ms      3.45ms       963us    64.00%
time to 1st byte:     5.21ms     11.46ms      8.18ms      2.02ms    51.00%
req/s           :     740.09      914.05      806.14       62.72    69.00%

A similar change to the existing osrm-routed (i.e. return an empty JSON instead of building the route) gives only 38807 req/s:

starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 1.29s, 38807.19 req/s, 12.10MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 15.59MB (16350000) total, 13.07MB (13700000) headers (space savings 0.00%), 97.66KB (100000) data
                     min         max         mean         sd        +/- sd
time for request:       84us      9.23ms      2.56ms       919us    68.89%
time for connect:     1.67ms      2.82ms      2.42ms       347us    68.00%
time to 1st byte:     6.76ms      9.85ms      8.43ms       946us    57.00%
req/s           :     388.29      396.67      390.22        1.76    76.00%

I.e. it looks like this performance drop comes from our Node bindings. I'm not sure though whether it is possible to do something about it.
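For reference, the isolation experiment described above amounts to something like the stub below. All names are hypothetical; the handler shape only loosely mimics a fastify handler and is written as a plain function so the sketch runs standalone:

```javascript
// Illustrative stub of the isolation experiment: the route handler returns
// an empty object instead of calling into the OSRM bindings, so a benchmark
// against it measures only the HTTP/framework overhead.
function makeRouteHandler(osrm) {
  return function handle(query, send) {
    if (osrm === null) {
      // OSRM call "commented out": return an empty object instead
      send(JSON.stringify({}));
      return;
    }
    osrm.route(query, (err, result) => send(JSON.stringify(result)));
  };
}

let body;
const handler = makeRouteHandler(null); // bindings disabled for the experiment
handler({ coordinates: [[13.38886, 52.517037], [13.385983, 52.496891]] },
        (payload) => { body = payload; });
console.log(body); // "{}"
```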

@SiarheiFedartsou
Member Author

I commented out the code in the Node bindings which converts our JSON representation to Node objects and started returning an empty object instead (the rest of the code is the same), and the results of osrm-routed-js are now absolutely comparable with osrm-routed:

➜  osrm-backend git:(sf-osrm-routed) ✗ h2load --h1 -n50000 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 3.88s, 12889.02 req/s, 2.54MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 9.87MB (10350000) total, 7.20MB (7550000) headers (space savings 0.00%), 634.77KB (650000) data
                     min         max         mean         sd        +/- sd
time for request:      427us     54.12ms      7.49ms      4.47ms    81.08%
time for connect:     1.77ms      3.25ms      2.55ms       406us    62.00%
time to 1st byte:     7.16ms     32.02ms     16.43ms      7.39ms    58.00%
req/s           :     128.96      147.50      133.64        4.46    84.00%

@SiarheiFedartsou
Member Author

Just for fun I've tried to substitute the code which converts JSON to the V8 representation with a trick where we first render it to a string in C++ and then parse it back with Nan::JSON::Parse, and surprisingly it works faster :)

➜  osrm-backend git:(sf-osrm-routed) ✗ h2load --h1 -n50000 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 4.73s, 10573.14 req/s, 61.39MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 290.30MB (304400000) total, 7.30MB (7650000) headers (space savings 0.00%), 280.95MB (294600000) data
                     min         max         mean         sd        +/- sd
time for request:      599us    195.46ms      8.60ms     14.42ms    95.05%
time for connect:     1.82ms      3.94ms      2.91ms       590us    64.00%
time to 1st byte:     6.03ms     40.60ms     20.85ms     10.18ms    62.00%
req/s           :     105.77      195.93      120.66       28.44    87.00%

@SiarheiFedartsou
Member Author

@mjjbell sorry for pinging, have you seen the results above? WDYT? On one hand I see a ~25% performance drop, but on the other hand it seems to be caused only by the Node bindings overhead. In the future we can try to optimize it by doing the serialization on the C++ side (without conversion to JS objects).

@mjjbell
Member

mjjbell commented Oct 28, 2022

Well, for whatever reason osrm-routed cannot process more than exactly 51300 requests from h2load:

That is suspiciously similar to the number of requests handled by a single keep-alive connection x 100 clients. Sounds like it's not playing well with the connection being closed.

short processed_requests = 512;
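A quick sanity check of that theory (the exact per-connection accounting here, budget plus one initial request, is my assumption, not taken from the code):

```javascript
// Back-of-the-envelope check: 100 keep-alive clients times a hard
// per-connection budget of 512 follow-up requests, plus one initial request
// per connection (assumed), gives exactly the observed ceiling of completed
// requests.
const clients = 100;
const processedRequestsBudget = 512; // value from connection.hpp quoted above
const perConnection = processedRequestsBudget + 1; // assumption: initial request not counted against the budget
console.log(clients * perConnection); // 51300
```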

WDYT? On one hand I see ~25% performance drop, but on another hand it seems to be caused only by Node bindings overhead. In the future we can try to optimize it via doing serialization on C++ side(without conversion to JS objects).

25% still feels significant IMO. The tradeoff here is they're getting a production/robust HTTP server. But if you were to put an nginx or envoy proxy in front of osrm-routed, you could mitigate its limitations without too much of a performance hit. The real benefit is we're dogfooding the Node API, so it will be better maintained.

How much work is it to perform the optimization you are suggesting? And does it then defeat the purpose of using the Node API?

@SiarheiFedartsou
Member Author

SiarheiFedartsou commented Oct 29, 2022

How much work is it to perform the optimization you are suggesting?

The idea is to just use {output: 'buffer'} when calling the Node.js bindings. The difference is that in this case it returns the response already serialized to JSON, avoiding the conversion to a JS object. The implementation is quick, but we lose the ability to work with the route object in JS code (the only reason we need it at the moment is to set code to Ok, which can obviously be done on the C++ side).

Now it is better (~10700 req/s vs 9500 req/s before).

➜  osrm-backend git:(sf-osrm-routed) ✗ h2load --h1 -n50000 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 4.66s, 10737.88 req/s, 62.47MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 290.87MB (305000000) total, 7.30MB (7650000) headers (space savings 0.00%), 281.52MB (295200000) data
                     min         max         mean         sd        +/- sd
time for request:      538us    141.06ms      8.98ms     13.89ms    94.22%
time for connect:     1.74ms      3.87ms      2.86ms       613us    59.00%
time to 1st byte:     6.99ms     38.65ms     17.97ms      8.95ms    54.00%
req/s           :     107.41      121.26      111.43        4.22    82.00%

Tbh I expected it to be a bit more efficient, so I will look into it a bit more.

@SiarheiFedartsou
Member Author

SiarheiFedartsou commented Oct 29, 2022

Well, in the experiment above I wrongly used {output: 'buffer'} instead of {format: 'buffer'} (i.e. the old approach was actually used, and the improvement I saw was just a lucky chance). Now it is even better than osrm-routed:
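A minimal stubbed sketch of what {format: 'buffer'} changes (the OSRM object is faked here; the real one comes from the node-osrm bindings, so the call shape and option name should be read as illustrative):

```javascript
// Stubbed illustration of the buffer-vs-object output modes discussed above.
// With the buffer mode the bindings hand back the response already serialized
// to JSON on the C++ side, so the server can send it as-is; with the default
// mode a JS object is built in V8 and must be serialized again before sending.
const osrm = {
  // stand-in for the native binding's route() call
  route(query, options, cb) {
    const result = { code: 'Ok', routes: [] };
    if (options.format === 'buffer') {
      cb(null, Buffer.from(JSON.stringify(result))); // "serialized in C++"
    } else {
      cb(null, result); // plain JS object, needs another serialization pass
    }
  },
};

let sent;
osrm.route({ coordinates: [[13.38886, 52.517037], [13.385983, 52.496891]] },
           { format: 'buffer' },
           (err, buf) => { sent = buf; });

console.log(Buffer.isBuffer(sent));            // true
console.log(JSON.parse(sent.toString()).code); // Ok
```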

➜  osrm-backend git:(sf-osrm-routed) ✗ h2load --h1 -n50000 -c100 "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 3.82s, 13091.14 req/s, 75.67MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 289.01MB (303050000) total, 6.58MB (6900000) headers (space savings 0.00%), 280.38MB (294000000) data
                     min         max         mean         sd        +/- sd
time for request:      431us    134.32ms      6.72ms      5.33ms    89.55%
time for connect:     1.78ms      6.40ms      3.54ms      1.47ms    63.00%
time to 1st byte:    14.44ms     57.42ms     30.17ms     12.08ms    62.00%
req/s           :     130.99      174.25      150.06       13.97    54.00%

More than that, according to this article we can try to avoid the copy here and just use Nan::NewBuffer instead, but I got a crash when I tried to implement it (it is probably doable though).

@mjjbell WDYT?

Anyway, I want to emphasise that this is just a beginning and there are a lot of questions to be solved before we can completely remove the old osrm-routed.

@SiarheiFedartsou
Member Author

And the final results for the "large" request (50000 requests to mitigate the osrm-routed problems + osrm-routed-js with optimizations applied):

osrm-routed

starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 78.40s, 637.74 req/s, 1.06MB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 82.73MB (86750000) total, 13.21MB (13850000) headers (space savings 0.00%), 67.09MB (70350000) data
                     min         max         mean         sd        +/- sd
time for request:    10.13ms       1.42s    155.87ms     28.20ms    95.44%
time for connect:     1.69ms      2.96ms      2.51ms       368us    75.00%
time to 1st byte:    19.60ms       1.42s    482.72ms    426.77ms    61.00%
req/s           :       6.38        6.47        6.42        0.03    54.00%

osrm-routed-js

starting benchmark...
spawning thread #0: 100 total client(s). 50000 total requests
Application protocol: http/1.1
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 80.51s, 621.06 req/s, 963.12KB/s
requests: 50000 total, 50000 started, 50000 done, 50000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 75.72MB (79400000) total, 6.58MB (6900000) headers (space savings 0.00%), 67.09MB (70350000) data
                     min         max         mean         sd        +/- sd
time for request:    11.10ms    503.20ms    137.24ms     76.09ms    65.58%
time for connect:     1.77ms      3.61ms      2.87ms       528us    64.00%
time to 1st byte:    61.09ms    417.60ms    165.10ms     76.64ms    65.00%
req/s           :       6.21        9.22        7.42        1.02    54.00%

i.e. the results are approximately equal.

Benchmark:

h2load --h1 -n50000 -c100 <large request used before>

@SiarheiFedartsou
Member Author

@mjjbell @DennisOSRM can we maybe do a second round of review here?

Member

@mjjbell mjjbell left a comment


The last set of numbers looks good, so 👍 to have it on master and iterate on it.
Maybe target a v6 release?


github-actions bot commented Jul 8, 2024

This PR seems to be stale. Is it still relevant?

@github-actions github-actions bot added the Stale label Jul 8, 2024