Faster MethodInfo.Invoke #7560
You are really just asking for faster MethodInfo.Invoke.
Sounds good. That would solve world hunger. Can you elaborate on why methodInfo.Invoke is slow today? What's the difference between methodInfo.Invoke and the proposed thunk?
@jkotas Would it make sense to change the existing methodInfo.Invoke? Compatibility concerns etc?
Is there any concern with storing the "precompiled" state? I'd like to play around with this 😄
It does not make sense to introduce new APIs to fix perf issues in the existing APIs. Otherwise, we would end up over time with Invoke, InvokeFast, InvokeFaster, InvokeFastest, ... .
Yes, it takes extra care to ensure that you do not break anything.
I did not mean to AOT compile, just compile it in memory on demand. It can be a regular DynamicMethod. You can take a look at how it is done in CoreRT: https://github.com/dotnet/corert/blob/master/src/Common/src/TypeSystem/IL/Stubs/DynamicInvokeMethodThunk.cs
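To make the DynamicMethod suggestion concrete, here is a minimal sketch of such an invoker, my own simplification rather than the CoreRT thunk linked above; it assumes a non-generic instance method on a reference type with no by-ref parameters:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

static class InvokeStubs
{
    // Builds an (object target, object[] args) => object invoker for a method.
    // Sketch only: the real thunks also handle statics, value-type receivers,
    // ref/out parameters, type coercion, and so on.
    public static Func<object, object[], object> CreateInvoker(MethodInfo method)
    {
        var dm = new DynamicMethod(
            "InvokeStub_" + method.Name,
            typeof(object),
            new[] { typeof(object), typeof(object[]) },
            method.DeclaringType.Module,
            skipVisibility: true);

        ILGenerator il = dm.GetILGenerator();

        // Load and cast the target instance.
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Castclass, method.DeclaringType);

        // Load each argument from the object[] and unbox/cast it.
        ParameterInfo[] parameters = method.GetParameters();
        for (int i = 0; i < parameters.Length; i++)
        {
            il.Emit(OpCodes.Ldarg_1);
            il.Emit(OpCodes.Ldc_I4, i);
            il.Emit(OpCodes.Ldelem_Ref);
            Type pt = parameters[i].ParameterType;
            il.Emit(pt.IsValueType ? OpCodes.Unbox_Any : OpCodes.Castclass, pt);
        }

        il.Emit(OpCodes.Callvirt, method);

        // Box value-type results; push null for void.
        if (method.ReturnType == typeof(void))
            il.Emit(OpCodes.Ldnull);
        else if (method.ReturnType.IsValueType)
            il.Emit(OpCodes.Box, method.ReturnType);

        il.Emit(OpCodes.Ret);

        return (Func<object, object[], object>)dm.CreateDelegate(
            typeof(Func<object, object[], object>));
    }
}
```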
I wasn't talking about AOT compilation; I was just asking if storing a dynamic method (or whatever the extra state is) on the RuntimeMethodInfo would be a problem.
Thanks! Now I just need to grok it and turn it into C++. BTW, are you suggesting we generate a dynamic method (via whatever mechanism is available) when creating method infos, or would it be a first-time-you-invoke thing? (Probably the latter.)
I saw that.
You do not need to. It can be in C# just fine.
RuntimeMethodInfo has 12 fields already. Adding a 13th one should not be a problem; and if it is a problem, some of the existing 12 fields can be folded together.
Would this be something that is opt-in or the default behaviour? There may be some scenarios where the extra overhead of the time needed to create the delegate, and the space to store it, could outweigh the cost of just using methodInfo.Invoke.
Also something similar was asked before, see #6968.
Right, it would need to be done only once the runtime sees the method invoked multiple times to be beneficial.
That's why I prefer the two-stage approach: getting the method info, then preparing it to get something back that is optimized for invocation. You pay that cost explicitly and up front. It also gets around any compatibility concerns around changing the existing methodInfo.Invoke.
We do something similar in our dependency injection system. After 2 invocations, we fire off a background thread for compilation so the next time it's faster.
Then this does not need to be built into the runtime. It can be a regular upstack NuGet package; e.g., the Thunk method can be an extension method.
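As an illustration of what such a package's surface could look like, a small sketch of a caching extension method; the names are hypothetical, and the factory body is a trivial placeholder rather than a real compiled stub:

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

public static class MethodInfoThunkExtensions
{
    private static readonly ConcurrentDictionary<MethodInfo, Func<object, object[], object>> Cache =
        new ConcurrentDictionary<MethodInfo, Func<object, object[], object>>();

    // Pay the preparation cost once, explicitly; reuse the cached delegate afterwards.
    public static Func<object, object[], object> GetThunk(this MethodInfo method)
    {
        return Cache.GetOrAdd(method, m =>
        {
            // Placeholder so the sketch runs anywhere: a real package would compile
            // a DynamicMethod or expression tree here, or bind to a pregenerated
            // stub on AOT platforms.
            return (target, args) => m.Invoke(target, args);
        });
    }
}

// Usage:
//   var thunk = methodInfo.GetThunk();          // one-time cost
//   object result = thunk(instance, arguments); // repeated, hot-path calls
```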
Part of the reason for baking this into the runtime was so that the implementation could adapt on AOT platforms...
A NuGet package can adapt on AOT platforms too.
I think the hard work in https://github.com/aspnet/Common/blob/rel/2.0.0/shared/Microsoft.Extensions.ObjectMethodExecutor.Sources/ObjectMethodExecutor.cs could be turned into a NuGet package meanwhile. Especially since that is a dreaded source package.
@Ciantic Wouldn't Reflection.Emit yield better performance than Expression, the latter of which is used internally in ObjectMethodExecutor?
@danielcrenna I am not the right person to evaluate the performance. However, since I wrote that, I was given advice that those source packages are usable; one just has to include them in one's own project. I think there is even a tool or command in dotnet to include those source packages in your own projects.
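For comparison, the expression-tree flavor of the same idea, roughly what ObjectMethodExecutor does internally, heavily simplified here and assuming instance methods on reference types with no by-ref parameters:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

static class ExpressionInvoker
{
    // Compiles (target, args) => (object)((T)target).Method((T0)args[0], ...)
    public static Func<object, object[], object> Create(MethodInfo method)
    {
        ParameterExpression target = Expression.Parameter(typeof(object), "target");
        ParameterExpression args = Expression.Parameter(typeof(object[]), "args");

        // Cast each element of the object[] to the declared parameter type.
        Expression[] callArgs = method.GetParameters()
            .Select((p, i) => (Expression)Expression.Convert(
                Expression.ArrayIndex(args, Expression.Constant(i)),
                p.ParameterType))
            .ToArray();

        Expression call = Expression.Call(
            Expression.Convert(target, method.DeclaringType),
            method,
            callArgs);

        // Box the result, or return null for void methods.
        Expression body = method.ReturnType == typeof(void)
            ? (Expression)Expression.Block(call, Expression.Constant(null, typeof(object)))
            : Expression.Convert(call, typeof(object));

        return Expression.Lambda<Func<object, object[], object>>(body, target, args).Compile();
    }
}
```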
There's a lot to unpack here, so I'll do my best to try. :) First, if you know the signature of the method you're trying to invoke, the easiest codegen-free way to get a thunk to it would be via MethodInfo.CreateDelegate.

The runtime offers multiple features now that weren't available when aspnet first started using ref emit back in the MVC 1.0 days. Back in MVC 1.0, one of our primary reasons for using ref emit was to avoid having exceptions wrapped in TargetInvocationException.

For environments without access to ref emit or where ref emit is interpreted (thus potentially slow), the APIs described at #25959 can be used to check for this condition. This allows callers to know ahead of time whether using ref emit at all would result in reduced performance. There are other outstanding issues to improve the performance of MethodInfo.Invoke itself.

Finally, it appears the thunk mechanism described here is opinionated. At an initial glance it seems the desire is for this thunk not to provide support for in / ref / out parameters or type coercion. It's also not really defined how overload resolution would work. This would ultimately have the effect of creating a parallel reflection stack whose behaviors don't necessarily match the existing stack's behaviors. We would not be able to guarantee that the behaviors of this parallel stack will match the behaviors people will want 5 years down the road as new programming paradigms are introduced. I don't think this is a direction we want to go in the runtime.

My recommendation would be to continue to use ref emit if you're in a suitable environment and if it meets your performance needs. If ref emit is unavailable, try using the standard reflection APIs and passing whatever flags are appropriate for your scenario. If you need different behaviors than typical reflection allows, that's a good candidate for creating a standalone package which implements your desired custom behaviors.
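A small example of that codegen-free option, assuming the signature is known at compile time; the Calculator type here is just a stand-in:

```csharp
using System;
using System.Reflection;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public static class Program
{
    public static void Main()
    {
        MethodInfo add = typeof(Calculator).GetMethod(nameof(Calculator.Add));
        var calc = new Calculator();

        // Open instance delegate: the receiver is passed as the first argument.
        var open = (Func<Calculator, int, int, int>)
            add.CreateDelegate(typeof(Func<Calculator, int, int, int>));
        Console.WriteLine(open(calc, 1, 2)); // 3

        // Closed delegate: bound to a specific instance up front.
        var closed = (Func<int, int, int>)
            add.CreateDelegate(typeof(Func<int, int, int>), calc);
        Console.WriteLine(closed(3, 4)); // 7
    }
}
```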
I mean, we may as well close this issue if the answer is to keep using ref emit 😄.
Well, ref emit isn't the only option. If you're performing AOT compilation, you could emit the thunk directly into the compilation unit.

```csharp
public class MyController
{
    public int Add(int a, int b)
    {
        /* user-written code */
    }

    internal static object <>k_CompilerGeneratedThunk_Add(object @this, object[] parameters)
    {
        return ((MyController)@this).Add((int)parameters[0], (int)parameters[1]);
    }
}
```

The runtime could use standard reflection to discover these thunks and link to them.
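Purely as a sketch of what the discovery side of that might look like; the naming convention is taken from the example above and is not an actual runtime contract:

```csharp
using System;
using System.Reflection;

static class ThunkLocator
{
    // Hypothetical discovery scheme: look for a static, compiler-generated thunk
    // whose name is derived from the user method's name, then bind it as a delegate.
    public static Func<object, object[], object> FindThunk(Type type, string methodName)
    {
        MethodInfo thunk = type.GetMethod(
            "<>k_CompilerGeneratedThunk_" + methodName,
            BindingFlags.Static | BindingFlags.NonPublic);

        return thunk is null
            ? null
            : (Func<object, object[], object>)thunk.CreateDelegate(
                typeof(Func<object, object[], object>));
    }
}

// Usage, assuming a thunk like the one shown above was generated:
//   var invoke = ThunkLocator.FindThunk(typeof(MyController), "Add");
//   object result = invoke(new MyController(), new object[] { 1, 2 });
```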
FWIW, .NET Native/CoreRT does pretty much that. Last time I was looking at it, reflection invoke in .NET Native was about 4x faster than CoreCLR. The AOT-generated stubs are a bit more complex than what's suggested here because they handle the annoying things like automatic widening, Type.Missing, and the like. It can be faster if we can avoid these conveniences.
I think the general idea behind this, but I could be wrong, was that MVC's MethodExecutor isn't using Ref Emit; it's using expressions. I replaced my use of it with Ref Emit and it is faster (on my machine, on my benchmarks, etc.). The question is whether it's worth applying. My feeling is yes, because it's an area where it's not really possible to replace it externally without reinventing the wheel on a large portion of the model binding, and because benefits here apply to everyone who uses MVC. It's just as valid to say no, because it's fast enough already.
@MichalStrehovsky how are those stubs discovered at runtime?
Via the CoreRT-specific native AOT metadata.
Assuming #45152 supersedes this. |
When writing a framework that dispatches to user methods it's quite common to reflect over all methods of a particular shape on an object and store a MethodInfo for later use. The scan and store is typically a one-time cost (done up front), but the invocation of these methods happens lots of times (one or many times per HTTP request, for example). Historically, to make invocation fast, you would generate a DynamicMethod using reflection emit, or generate a compiled expression tree, to invoke the method instead of using methodInfo.Invoke. ASP.NET does this all the time, so much so that we now have a shared component to do this (as some of the code can be a bit tricky).

With the advent of newer runtimes like Project N and CoreRT, it would be great if there was first-class support for creating a thunk to a method. This is what I think it could look like:
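The original snippet is not preserved in this excerpt; purely as an illustration of the kind of shape being described, and not the author's actual proposal, it could be something along these lines:

```csharp
using System;
using System.Reflection;

// Hypothetical shape only; the names and signature here are illustrative.
public static class MethodThunkFactory
{
    // One-time, explicit preparation step that hands back a delegate
    // optimized for repeated invocation of the given method.
    public static Func<object, object[], object> CreateThunk(MethodInfo method)
    {
        // Trivial fallback so the sketch is runnable; a real runtime-backed
        // implementation would emit or bind to an optimized stub instead.
        return (target, args) => method.Invoke(target, args);
    }
}
```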