Add benchmark dotnet and comprehensive set of end to end benchmarks #1175
Conversation
hmm, seems I need to make some adjustments as it is complaining about the benchmark that returns `Task<IEnumerable>`. OK, it appears I've fixed that problem now with #c4f35a7 by making the return type a …
…test against only 3 orders of magnitude
Interesting. I found a memory leak in the benchmarks themselves. Turns out when I moved the …
hmm, reading the docs more, I'm looking at adding the necessary pieces to easily control which sets of benchmarks you want to run, as the supplied set of [Params] runs an absolute tonne of benchmarks. OK, I've updated it so that you can pass a --filter to specify which tests to run and --runtimes to control which runtimes, and added a script to run each set of benchmarks. In addition, I've reduced the number of benchmarks each of those scripts runs so the time taken / information produced is acceptable. Think it's ready!
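For context, the console arguments mentioned here are usually wired up through BenchmarkDotNet's `BenchmarkSwitcher`. The sketch below is not the PR's actual `Program.cs`; it only illustrates how flags such as `--filter` and `--runtimes` would be forwarded from the wrapper scripts to the runner.

```csharp
// Minimal sketch of a BenchmarkDotNet entry point that forwards console args.
// Not the PR's actual Program.cs; shown only to illustrate how --filter and
// --runtimes reach the runner.
using BenchmarkDotNet.Running;

public static class Program
{
    public static void Main(string[] args)
    {
        // Discovers every [Benchmark] class in this assembly and lets the
        // command line (--filter, --runtimes, --job, ...) pick what to run.
        BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
    }
}
```

A wrapper script would then invoke something like `dotnet run -c Release -- --filter '*Deserialize*' --runtimes netcoreapp31 net5.0`, where the filter pattern and runtime monikers here are placeholders rather than the PR's actual values.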
…s for the args supplied
…by returning static payloads instead as no state hangs around between runs
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp3.1</TargetFramework>
Can this be multi-targeted to support benchmarks on .NET Core 3.1, .NET 5 & 6?
BenchmarkDotNet has excellent built-in support for running benchmarks on different runtimes, so I don't think it actually requires that we multi-target it here. I believe there are two ways to do it: either configure the benchmark in code (through a custom config, or simply by adding an attribute to the benchmark class for each runtime you want the benchmarks to run on), or supply a command-line argument for which runtime to run the benchmarks against. The docs for the latter are here: https://benchmarkdotnet.org/articles/guides/console-args.html#runtimes
So that's supported as is; we just need to pass the right arguments. I think under the hood, to run each benchmark it actually compiles it into a class etc., which is how it's able to support targeting different runtimes that way.
I originally had it hardcoded as an attribute on the benchmark class itself, but that mandates running the benchmarks for all the specified runtimes, which can take quite a while if you're targeting a few of them.
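For illustration, the attribute-based option might look something like the sketch below. The class and benchmark names are made up for the example rather than taken from the PR; each `[SimpleJob]` adds a job for one runtime, which is why hardcoding them forces every listed runtime to run.

```csharp
// Hedged sketch of the attribute-based approach; illustrative names only.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;

[SimpleJob(RuntimeMoniker.NetCoreApp31)]
[SimpleJob(RuntimeMoniker.Net50)]
public class RuntimeComparisonSketch
{
    // Placeholder body; the real benchmarks exercise Refit end to end.
    [Benchmark]
    public string BuildPayload() => new string('a', 1024);
}
```

The command-line route avoids baking the runtimes in: the same class with no `[SimpleJob]` attributes can be pointed at a single runtime per invocation via `--runtimes`.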
Upon playing with this, it did indeed require me to multi-target it; my initial understanding wasn't quite right. I've added the multi-targeting and the accompanying scripts to run the benchmarks for each runtime.
if (string.IsNullOrEmpty(fileName))
    throw new ArgumentNullException(nameof(fileName));

responsePayload = File.ReadAllText(fileName);
Should this handler be reading the file from disk each time? Or should we preload the files into memory during the setup stage and return cached bytes?
That would prevent the tests from being affected by disk I/O.
This runs as part of [GlobalSetup], and the disk access happens in the constructor, so it runs only a single time and doesn't hit the disk or do I/O on each iteration during the benchmark. During the benchmarks it just takes the string it already has in memory and returns it.
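For clarity, the pattern being described reads roughly like the sketch below. It is a simplified, hypothetical handler rather than the PR's exact code (though the `fileName` / `responsePayload` lines mirror the diff above): the file is read once when the handler is constructed, and each benchmark iteration only returns the cached string.

```csharp
// Simplified sketch of the described pattern; not the PR's actual class.
using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class StaticPayloadHandler : HttpMessageHandler
{
    private readonly string responsePayload;

    public StaticPayloadHandler(string fileName)
    {
        if (string.IsNullOrEmpty(fileName))
            throw new ArgumentNullException(nameof(fileName));

        // Disk I/O happens exactly once, when the handler is constructed
        // (which happens during [GlobalSetup] / the benchmark's constructor).
        responsePayload = File.ReadAllText(fileName);
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Each benchmark iteration returns the cached string; no further disk access.
        return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(responsePayload)
        });
    }
}
```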
Looks good but a few notes
Hi, just looking to see where this is at -- would love to get some benchmarks into Refit! :)
Hi @clairernovotny, sorry it's taken a while to respond. If you can provide some additional guidance on the recommended changes, I can look at finishing this off as it's pretty close. I responded to your initial comments but didn't get any subsequent feedback, so I wasn't sure what to change to get it approved, and then life happened. I suspect including a set of shell scripts to run the benchmarks against each runtime might do it? The framework itself handles the multi-targeting and you just need to provide the runtime as an argument, so a set of scripts we could run for benchmarking against each runtime would cover the multi-targeting aspect.
@clairernovotny I've added the multi-targeting and subsequent scripts in, so this should be good to review again. I only included … The results are pretty interesting.
This was one of the first things I went looking for to see if this library would be a good fit for my use case; without this sort of information, it's difficult to figure out what the trade-offs are compared to hand-crafting a client. Ideally the Readme would include a small summary table of the output of these tests, comparing against a hand-crafted client or other libraries that do a similar thing (if there are any).
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
What kind of change does this PR introduce?
This PR adds a full set of end-to-end benchmarks using BenchmarkDotNet for all of the return types Refit supports, parameterized with different payload sizes (several orders of magnitude), serialization methods, HTTP verbs, and status codes. Inspiration was taken from the stale PR already open for this (#898), which helped in getting past the initial hurdles of getting it running.
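As a rough illustration of that parameterisation (the class, properties, and values below are invented for the example, not taken from the PR), BenchmarkDotNet's `[Params]` multiplies each value set into the benchmark matrix:

```csharp
// Illustrative sketch of [Params]-driven parameterisation; hypothetical names.
using System.Net;
using BenchmarkDotNet.Attributes;

public class SerializationBenchmarksSketch
{
    // Payload sizes spanning several orders of magnitude (illustrative values).
    [Params(100, 10_000, 1_000_000)]
    public int PayloadSizeBytes { get; set; }

    [Params("GET", "POST")]
    public string Verb { get; set; }

    [Params(HttpStatusCode.OK, HttpStatusCode.InternalServerError)]
    public HttpStatusCode StatusCode { get; set; }

    [GlobalSetup]
    public void Setup()
    {
        // Build the Refit client / mock handler for this parameter combination here.
    }

    // Placeholder body; the real benchmarks call Refit end to end.
    [Benchmark]
    public string BuildPayload() => new string('a', PayloadSizeBytes);
}
```

Every combination of those values becomes a separate run, which is why the thread above discusses --filter and per-runtime scripts to keep run times manageable.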
What is the current behavior?
Currently there are no benchmarks.
What is the new behavior?
We now benchmark all the things! Yay.
What might this PR break?
Nothing. It's a new `csproj` added to the solution.
Please check if the PR fulfills these requirements
Other information:
@Sergio0694 With the original PR for this - turns out you can't set the mock expectation in the [GlobalSetup] method, as it seems it gets garbage collected, which causes Refit to return HTTP 404. Once I moved that into the benchmarks themselves, it all worked. I profiled just that setup to understand what overhead it adds, and it's negligible.
@clairernovotny I didn't update the docs yet with the results of the benchmarks; they are pretty hardware specific and I thought you might like to select some sort of standard hardware configuration to run them on. Though I will certainly post my set as a comment here once they finish running. It takes a fairly long time.
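A library-agnostic sketch of that workaround is below. `IUserApi`, `SettableHandler`, and the URL are hypothetical stand-ins (the PR's real mocking setup isn't reproduced here); the point is only that the canned response is assigned inside the benchmark method rather than in [GlobalSetup].

```csharp
// Hypothetical sketch: response expectation set inside the benchmark method.
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using Refit;

public interface IUserApi
{
    [Get("/users/1")]
    Task<string> GetUser();
}

public class SettableHandler : HttpMessageHandler
{
    // The response to serve; assigned by each benchmark before it calls Refit.
    public Func<HttpResponseMessage> Response { get; set; }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
        => Task.FromResult(Response());
}

public class MockedEndpointBenchmarks
{
    private SettableHandler handler;
    private IUserApi api;

    [GlobalSetup]
    public void Setup()
    {
        handler = new SettableHandler();
        api = RestService.For<IUserApi>(
            new HttpClient(handler) { BaseAddress = new Uri("http://localhost") });
    }

    [Benchmark]
    public Task<string> GetUser()
    {
        // The expectation lives here, in the benchmark itself, rather than in
        // [GlobalSetup], matching the workaround described above.
        handler.Response = () => new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{\"id\":1}")
        };
        return api.GetUser();
    }
}
```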