Performance requests #64
I'll investigate, but the second sounds possible. As for the first one, llhttp does not modify the incoming data - it simply isn't possible for it to emit data that isn't part of the input. Sorry!
Is |
https://tools.ietf.org/html/draft-thomson-hybi-http-timeout-03#section-2

On the server side it's supported by nginx and, more recently, Node. Probably more that I am not aware of. We want to add support for it in the Node core client as well. It resolves a problem that otherwise cannot be worked around: a connection close race between server and client.
Excellent! Thank you for sharing the references. What do you think about the following plan: I'll make this change in a branch and we'll see how much difference it makes for undici. If there is a performance improvement for it - we'll polish the feature and release it, otherwise - we'll call the experiment a failure. Does this sound reasonable?
Sounds great. Thank you for helping out on this.
Parse timeout from `Keep-Alive: timeout=...` header and return the timeout value through `llhttp_should_keep_alive` API call. Note that with this change `llhttp_should_keep_alive` can return either:

* 0 - shouldn't keep-alive
* -1 - should keep-alive, but no timeout is available
* positive integer - timeout specified by remote end

Fix: #64
@ronag could you give this branch a try, please?
Sure. Just need to figure out how to make a node build with it.
Oh, right. Sorry! I'd clone the repo, checkout the branch, run |
Hm, I did that, copied to deps/llhttp and overwrote the old files and confirmed they contain the changes. Did a |
Found it!
You have to patch up node so that it makes it an integer. Let me see what change would do it. |
Oh, you found it! Great!
hm, I'm unable to get useful results... running the built version of Node is significantly slower (1.5x) than |
Ok, now I ran the build without the changes from the branch and the performance is back. Is it expected that it would have such a significant performance impact? Maybe I'm doing something wrong.
I don't think there should be significant performance issues... (the benchmarks in llhttp are unaffected by the change as far as I can tell.) It is likely that something just went wrong with the build.
LLPARSE_DEBUG? Edit: no, I don't think that's it.
It should not be used for release builds, if this is what you are asking about.
No, I thought maybe I had to pass a flag to disable it, but noticed it's disabled by default. Hm... I'm not sure what to do.
Did you compile Node.js master or the same version as what you are using through nvm? Also, why do you run |
I compiled Node with and without the changes (no nvm this time). The perf diff occurs.
I have no idea. Someone recommended using --ninja and it seems to make a difference compared to make. I am not very good with build stuff.
undici$ time /Users/ronagy/GitHub/nxtedition/node/out/Release/node llhttp_bench.js
real 0m9.084s
user 0m4.844s
sys 0m2.387s
undici$ time /Users/ronagy/GitHub/nxtedition/node/out/Release/node llhttp_bench.js
real 0m7.161s
user 0m2.085s
sys 0m0.636s
First one is with the llhttp changes, the second one is without.
Could you share the |
Sorry! I think I made a mistake in undici. Digging into it now.
Ok, here we go. The first one is with the changes and the second one is without:
undici$ time /Users/ronagy/GitHub/nxtedition/node/out/Release/node llhttp_bench.js
real 0m7.080s
user 0m2.045s
sys 0m0.632s
undici$ time /Users/ronagy/GitHub/nxtedition/node/out/Release/node llhttp_bench.js
real 0m7.391s
user 0m2.076s
sys 0m0.642s
undici$
It's a bit of an unrealistic benchmark, since the test server responds with 1000 headers, but it does show there is a difference.
So... it is better?
yes, ~6% better
Looks about the same with a sufficient number of runs:
Here's the modified benchmark script: https://gist.github.com/indutny/b972581054a30d0773497a94845fa9aa At least we didn't make it slower :D
That being said, I'm looking forward to seeing how this benchmark runs on a modified undici that leverages the parsed header value.
modified
n=1 mean=33411.293 stddev=0.000
n=2 mean=34749.674 stddev=1338.381
n=3 mean=35792.712 stddev=1835.765
n=4 mean=36519.457 stddev=2027.806
n=5 mean=37080.292 stddev=2132.544
n=6 mean=37532.423 stddev=2193.604
n=7 mean=37748.826 stddev=2098.920
n=8 mean=37959.245 stddev=2040.763
n=9 mean=37956.692 stddev=1924.064
n=10 mean=37967.646 stddev=1825.623
n=11 mean=37959.568 stddev=1740.851
n=12 mean=37920.880 stddev=1671.669
n=13 mean=37985.404 stddev=1621.567
n=14 mean=38082.097 stddev=1601.000
n=15 mean=38191.946 stddev=1600.393
n=16 mean=38336.339 stddev=1647.397
n=17 mean=38347.190 stddev=1598.799
n=18 mean=38441.682 stddev=1601.856
n=19 mean=38572.819 stddev=1655.426
n=20 mean=37980.719 stddev=3043.760
n=21 mean=37886.879 stddev=2999.905
n=22 mean=37581.219 stddev=3248.439
n=23 mean=37181.379 stddev=3689.274
n=24 mean=36798.962 stddev=4050.582

unmodified
n=1 mean=35701.535 stddev=0.000
n=2 mean=35759.077 stddev=57.542
n=3 mean=36724.322 stddev=1365.871
n=4 mean=37052.561 stddev=1312.412
n=5 mean=37700.066 stddev=1747.854
n=6 mean=37308.085 stddev=1820.460
n=7 mean=36842.409 stddev=2035.132
n=8 mean=36658.756 stddev=1964.724
n=9 mean=36768.961 stddev=1878.403
n=10 mean=37049.327 stddev=1970.533
n=11 mean=37152.343 stddev=1906.863
n=12 mean=37245.497 stddev=1851.640
n=13 mean=37096.669 stddev=1852.196
n=14 mean=36297.388 stddev=3389.784
n=15 mean=35838.924 stddev=3696.925
n=16 mean=35399.625 stddev=3963.307
n=17 mean=35454.777 stddev=3851.296
n=18 mean=35349.347 stddev=3767.946
n=19 mean=35047.386 stddev=3884.770
n=20 mean=35126.518 stddev=3802.084
n=21 mean=35080.161 stddev=3716.241
n=22 mean=35016.579 stddev=3642.471
n=23 mean=34773.268 stddev=3740.742
n=24 mean=34615.171 stddev=3739.650
Could you run it for a longer time, @ronag? The standard deviation is quite significant given the low number of runs. Could you also share the node.js patch that you are using? I'd like to do the benchmark runs on a beefy server as well.
My computer is useless for benchmarks unfortunately. Have the same problem with undici benchmarks.
Thanks! I'll give it an extended run and report back to you.
Here are the results:
Looks like there is no significant improvement, and in fact it is worse than with unpatched Node.
That's interesting. I guess doing the keep-alive parsing is faster in JavaScript. Weird. Thanks a lot for trying this. Much appreciated. I might dig into this at some point in the future to try and understand why it's slower.
Sure thing! I'm keeping the branch in llhttp for now. Thanks!
Would be nice if:

1. A flag that would cause llhttp to automatically lowercase response headers, so that higher level libraries do not need to perform an additional iteration over headers.
2. Parse and include `keep-alive: timeout (\d+)s` in `kOnMessageHeadersComplete`, e.g. by changing the type of `shouldKeepAlive` to an integer, so that higher level libraries do not need to perform an additional iteration over headers.

Refs: https://github.com/mcollina/undici/pull/337/files#diff-50cfa59973c04321b5da0c6da0fdf4feR542-R566