[PR #9835/32ccfc9a backport][3.10] Adjust client payload benchmarks to better represent real world cases #9836
This is a backport of PR #9835 as merged into master (32ccfc9).
We are interested in benchmarking various payload sizes at different buffering cutoffs to see the memcpy and buffering impacts.

- 2048 was too high, as most small payloads are < 1024.
- 32768 was too high, as it always took two reads for this case, so it wasn't giving us a case for a single large read.
- 1MiB was fine, but 512KiB works just as well: it still gives us a benchmark for a multi-read case, and there was no need to go that high since it didn't change the profile and only made the benchmark run longer.
Note: I'm testing on macOS and will test on Linux shortly, since that's where the bulk of our users are. The breakpoints are dictated by the kernel: if the Linux buffer sizes are lower by default, I'll adjust the 2nd size down some more. Ideally we keep it as high as possible to see the memcpy effects, but not so high that the message ends up in multiple reads. The 30000 size is good for Linux as well, as it results in a single read.
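
For reference, a minimal sketch of what a payload benchmark along these lines could look like, assuming pytest-aiohttp's `loop`/`aiohttp_client` fixtures and a pytest-codspeed style `benchmark` fixture. The test name, handler, and parametrization are illustrative, not the exact code changed in this PR:

```python
import asyncio

import pytest

from aiohttp import web


# Illustrative payload sizes chosen to exercise distinct buffering paths:
#   1024      -> typical small payload, stays below the buffering cutoff
#   30000     -> fits in a single kernel read on both macOS and Linux
#   512 * 1024 -> large enough to force multiple reads without dragging
#                 the benchmark out
@pytest.mark.parametrize("payload_size", [1024, 30000, 512 * 1024])
def test_simple_get_payload(
    loop: asyncio.AbstractEventLoop,
    aiohttp_client,
    benchmark,
    payload_size: int,
) -> None:
    """Fetch a payload of the given size and measure the client read path."""
    payload = b"x" * payload_size

    async def handler(request: web.Request) -> web.Response:
        return web.Response(body=payload)

    app = web.Application()
    app.router.add_get("/", handler)

    async def run_client_benchmark() -> None:
        client = await aiohttp_client(app)
        async with client.get("/") as resp:
            await resp.read()
        await client.close()

    @benchmark
    def _run() -> None:
        loop.run_until_complete(run_client_benchmark())
```

The idea is just that each parametrized size lands on a different side of a buffering breakpoint, so the benchmark profile shows the single-read, single-large-read, and multi-read cases separately.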