bench(bin/client): don't allocate upload payload upfront #2200
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main   #2200   +/- ##
=======================================
  Coverage   95.39%  95.39%
=======================================
  Files         112     112
  Lines       36447   36447
=======================================
  Hits        34768   34768
  Misses       1679    1679

View full report in Codecov by Sentry.
Failed Interop Tests (QUIC Interop Runner, client vs. server): neqo-latest as client; neqo-latest as server
Succeeded Interop Tests (QUIC Interop Runner, client vs. server): neqo-latest as client; neqo-latest as server
Unsupported Interop Tests (QUIC Interop Runner, client vs. server): neqo-latest as client; neqo-latest as server
When POSTing a large request to a server, don't allocate the entire request payload upfront. Instead, iterate over a static buffer, as is already done in `neqo-bin/src/server/mod.rs`, reusing the same logic (`SendData`). See the previous, similar change on the server side: mozilla#2008.
Commits compared: 45b4bc5 to 3e3d369
Benchmark results (performance differences relative to 8e36f63):

- coalesce_acked_from_zero 1+1 entries: No change in performance detected. time: [98.730 ns 99.071 ns 99.431 ns], change: [-0.5820% -0.0789% +0.4516%] (p = 0.78 > 0.05)
- coalesce_acked_from_zero 3+1 entries: No change in performance detected. time: [117.02 ns 117.32 ns 117.67 ns], change: [-0.3193% +0.0568% +0.4126%] (p = 0.77 > 0.05)
- coalesce_acked_from_zero 10+1 entries: No change in performance detected. time: [116.83 ns 117.18 ns 117.65 ns], change: [-0.7578% -0.0338% +0.6808%] (p = 0.93 > 0.05)
- coalesce_acked_from_zero 1000+1 entries: No change in performance detected. time: [97.383 ns 102.46 ns 113.56 ns], change: [-0.3056% +2.4571% +7.2717%] (p = 0.33 > 0.05)
- RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [111.78 ms 111.83 ms 111.88 ms], change: [+0.0848% +0.1481% +0.2124%] (p = 0.00 < 0.05)
- transfer/pacing-false/varying-seeds: No change in performance detected. time: [27.111 ms 28.279 ms 29.464 ms], change: [-7.4351% -2.0980% +3.4613%] (p = 0.47 > 0.05)
- transfer/pacing-true/varying-seeds: No change in performance detected. time: [35.379 ms 37.109 ms 38.843 ms], change: [-8.0552% -2.1279% +4.9064%] (p = 0.52 > 0.05)
- transfer/pacing-false/same-seed: Change within noise threshold. time: [25.356 ms 26.226 ms 27.089 ms], change: [-10.387% -6.3220% -1.6641%] (p = 0.01 < 0.05)
- transfer/pacing-true/same-seed: Change within noise threshold. time: [40.413 ms 42.459 ms 44.594 ms], change: [-13.945% -7.6087% -0.2110%] (p = 0.03 < 0.05)
- 1-conn/1-100mb-resp/mtu-1500 (aka. Download)/client: No change in performance detected. time: [878.89 ms 887.22 ms 895.96 ms], thrpt: [111.61 MiB/s 112.71 MiB/s 113.78 MiB/s], change: time [-2.5994% -1.2900% +0.0626%] (p = 0.07 > 0.05), thrpt [-0.0626% +1.3068% +2.6687%]
- 1-conn/10_000-parallel-1b-resp/mtu-1500 (aka. RPS)/client: No change in performance detected. time: [321.69 ms 324.72 ms 327.81 ms], thrpt: [30.505 Kelem/s 30.796 Kelem/s 31.085 Kelem/s], change: time [+0.0850% +1.4273% +2.8394%] (p = 0.05 > 0.05), thrpt [-2.7610% -1.4072% -0.0849%]
- 1-conn/1-1b-resp/mtu-1500 (aka. HPS)/client: Change within noise threshold. time: [33.602 ms 33.767 ms 33.945 ms], thrpt: [29.460 elem/s 29.615 elem/s 29.760 elem/s], change: time [-1.7666% -0.9364% -0.1071%] (p = 0.03 < 0.05), thrpt [+0.1072% +0.9453% +1.7984%]
- 1-conn/1-100mb-resp/mtu-1500 (aka. Upload)/client: No change in performance detected. time: [1.7589 s 1.7795 s 1.8004 s], thrpt: [55.543 MiB/s 56.196 MiB/s 56.854 MiB/s], change: time [-1.5228% +0.0765% +1.5900%] (p = 0.92 > 0.05), thrpt [-1.5651% -0.0764% +1.5464%]
- 1-conn/1-100mb-resp/mtu-65536 (aka. Download)/client: 💔 Performance has regressed. time: [111.41 ms 111.72 ms 112.03 ms], thrpt: [892.64 MiB/s 895.09 MiB/s 897.56 MiB/s], change: time [+1.3630% +1.7423% +2.1186%] (p = 0.00 < 0.05), thrpt [-2.0747% -1.7125% -1.3446%]
- 1-conn/10_000-parallel-1b-resp/mtu-65536 (aka. RPS)/client: Change within noise threshold. time: [319.86 ms 323.35 ms 326.85 ms], thrpt: [30.595 Kelem/s 30.926 Kelem/s 31.264 Kelem/s], change: time [+0.6058% +2.0781% +3.6378%] (p = 0.01 < 0.05), thrpt [-3.5101% -2.0358% -0.6021%]
- 1-conn/1-1b-resp/mtu-65536 (aka. HPS)/client: No change in performance detected. time: [34.283 ms 34.526 ms 34.788 ms], thrpt: [28.746 elem/s 28.964 elem/s 29.169 elem/s], change: time [-0.2086% +0.7029% +1.6727%] (p = 0.16 > 0.05), thrpt [-1.6452% -0.6980% +0.2090%]
- 1-conn/1-100mb-resp/mtu-65536 (aka. Upload)/client: No change in performance detected. time: [260.98 ms 300.96 ms 352.57 ms], thrpt: [283.63 MiB/s 332.27 MiB/s 383.17 MiB/s], change: time [-19.108% -3.0128% +16.242%] (p = 0.76 > 0.05), thrpt [-13.973% +3.1064% +23.622%]

Client/server transfer results: transfer of 33554432 bytes over loopback.