Compiled Lambda Expressions and AOT #17973
cc @VSadov That is correct: for UWP, the expressions are interpreted and not compiled at runtime.
@JamesNK I know you'll hate this answer, but cross-compiling can ensure that each platform gets the most optimized version for it. With …
This needs to be a runtime light-up API that is available in netstandard. The Project N toolchain and the JIT can treat it as an intrinsic and eliminate it at runtime.
+1 This is a HUGE performance issue for us!
That would require knowing every platform that doesn't support compiling expressions and having a different version in the package for each of those platforms: UWP, iOS, Android, Unity, etc. And as new platforms come out, they would get the slow netstandard binary until someone explicitly adds a different version to the package. Or we could have a simple property that library authors can use to adapt the netstandard assembly to the platform at runtime. One assembly that works for now and forever.
Agree - hardcoding the list of platforms is fragile. It does not work well for new platforms. Also, the answer can differ for a given platform over time, or both interpreter-only and compiled System.Linq.Expressions can be available for some platforms. Having an API that returns whether expressions are compiled sounds pretty reasonable.
Since any expression can now be asked to be compiled for interpretation, the API would need to take a compiled expression. Would something like the following work for you?

```csharp
/// returns true if d is a delegate that represents an interpreted expression
bool IsInterpretedExpression(Delegate d);
```
I read this as more an environment query; code being able to know whether calling `Compile(false)` is going to result in interpretation or not.
Yes, I see it as an environment query.
Yes. It needs to be queryable before compiling, not after. Framework code would likely switch logic altogether and use reflection invoke instead of expression tree compilation.
Man, this really burned me - I simply ported some code that worked really well from an "old"-style codebase to UWP, and this absolutely hammered me... slowed down by roughly 20x. +1 to fixing this... please.
Another person being hurt by this - JamesNK/Newtonsoft.Json#968 (comment)
Should such a property live in S.L.Expressions or perhaps elsewhere? Am I right in thinking that a complementary …
I think so.
It would be simple enough to add:

```csharp
namespace System.Linq.Expressions
{
    public abstract class LambdaExpression : Expression
    {
        public static bool CanCompileToIL { get; }
    }
}
```

(Assuming I'm correct above that there couldn't be a case where …) But could a similar question, which boils down to "is …" …
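(For illustration only, not part of the original discussion: a minimal sketch of how a library might consume such a property if it existed as proposed. The `canCompileToIL` flag stands in for the proposed `LambdaExpression.CanCompileToIL`, and `PropertyAccessorFactory`/`CreateGetter` are hypothetical names.)

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

static class PropertyAccessorFactory
{
    // Chooses an accessor strategy based on whether expression trees can be
    // compiled to IL on this platform. The flag would come from the proposed
    // LambdaExpression.CanCompileToIL property.
    public static Func<object, object> CreateGetter(PropertyInfo property, bool canCompileToIL)
    {
        if (canCompileToIL)
        {
            // Build (object o) => (object)((TDeclaring)o).Property and compile it to IL.
            ParameterExpression obj = Expression.Parameter(typeof(object), "o");
            Expression body = Expression.Convert(
                Expression.Property(Expression.Convert(obj, property.DeclaringType), property),
                typeof(object));
            return Expression.Lambda<Func<object, object>>(body, obj).Compile();
        }

        // On interpreter-only platforms, plain reflection is typically faster
        // than an interpreted expression tree.
        return o => property.GetValue(o);
    }
}
```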
This is not the same question. Consider some of the following situations: …
Cool. That's enough variance to argue well that the property should live in S.L.E. Shall I open the above as a separate API proposal?
Yes, this needs to live in S.L.E.

```csharp
/// returns true if d is a delegate that represents an interpreted expression
bool IsInterpretedExpression(Delegate d);
```

We already need to detect interpreted lambdas inside the interpreter itself. I would not be surprised if someone else needs this. Interestingly, `CanCompileToIL` could actually be implemented in terms of `IsInterpretedExpression`: compile and cache a trivial delegate, then just ask whether it is interpreted or not.
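(As an illustration of that approach, not code from the thread: a sketch assuming the proposed `IsInterpretedExpression` API existed; the placeholder method below is not a real BCL member.)

```csharp
using System;
using System.Linq.Expressions;

static class ExpressionCapabilities
{
    // Computed once: compile a trivial lambda and ask whether the resulting
    // delegate is interpreted. If it is, this platform cannot compile to IL.
    private static readonly Lazy<bool> s_canCompileToIL = new Lazy<bool>(() =>
    {
        Func<int> trivial = Expression.Lambda<Func<int>>(Expression.Constant(42)).Compile();
        return !IsInterpretedExpression(trivial);
    });

    public static bool CanCompileToIL => s_canCompileToIL.Value;

    // Stand-in for the API proposed in this thread; not an existing BCL method.
    private static bool IsInterpretedExpression(Delegate d) =>
        throw new NotImplementedException("proposed API, see discussion above");
}
```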
We have … I'm inclined to think of …
Are we going to get this API anytime soon?
@weshaggard, this is along the same lines as the "capabilities" API we've talked about in the past. Can you share your thoughts?
Yes, this is a specific instance of a capability API, which I think is best solved with the suggestions provided by @VSadov and @JonHanna in this issue: exposing an API directly in System.Linq.Expressions.
It needs a formal API proposal and the owner, @VSadov, to drive it through the process. Given that this will be a new API, it will initially only be added to netcoreapp and uap, as it cannot be added to netstandard yet, so its usage will be limited for a while. For that reason, to solve this in the short term, the best answer is to cross-compile as @onovotny suggested.
You can't cross-compile for CoreRT.
Why not? Let's not derail this issue, but it is my understanding that CoreRT would be represented with its own RID. See dotnet/corefx#14142.
We don't want upstack libraries that wouldn't otherwise have to cross-compile to have to cross-compile just for this, do we?
Ideally no, but if they need to optimize for the CoreRT platform specifically then they would have to. For this case, assuming they target netcoreapp and we expose this new API there, they wouldn't need to cross-compile for CoreRT; they could just have a netcoreapp asset and do runtime detection based on this new API.
The standard way to avoid cross-compilation in these cases is to do light-up using reflection. Look for the (capability) API and use it if it exists. Otherwise, use fallback logic that is not as good.
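(A minimal sketch of that light-up pattern, for illustration; the type and property probed here match the proposal above but do not exist in any shipped framework, so the reflection lookup simply fails and the fallback default is used.)

```csharp
using System;
using System.Reflection;

static class ExpressionCompilerLightUp
{
    // Probe for the capability API via reflection so a single netstandard
    // assembly works both on frameworks that have it and on ones that don't.
    private static readonly Lazy<bool> s_canCompileToIL = new Lazy<bool>(() =>
    {
        PropertyInfo capability = typeof(System.Linq.Expressions.LambdaExpression)
            .GetProperty("CanCompileToIL", BindingFlags.Public | BindingFlags.Static);

        if (capability != null)
            return (bool)capability.GetValue(null);

        // Fallback when the API is absent. Assuming 'true' favors existing
        // JIT-based frameworks; interpreter-only platforms without the API
        // would still end up on the slower path, which is the "not as good"
        // fallback logic mentioned above.
        return true;
    });

    public static bool CanCompileToIL => s_canCompileToIL.Value;
}
```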
Perfect! What do we look for? This new API that's going to be added?
@weshaggard, do we have an issue tracking an API to detect whether the capability to emit IL is available or not?
No, and there aren't currently any plans to do that; as @jkotas called out, there isn't a great way to answer that question. The API to look for would be whatever API we add for this issue.
@VSadov, can we take this to the next level?
Hi, we are still being burned heavily by this, but I fear any fix is now too late for us. Our UWP code is running on (enterprise) W10M devices, so any new API is likely to require an OS release we will never get (for example, we cannot get FCU because it was never released for mobile). Please correct me if I'm wrong and this might be shippable in a new framework version that works on older OS versions?

I suspect our only option now is to make a private fork of Json.Net and modify it to force an alternate strategy for the reflection/expression-based code. We'll have to decide internally whether that pain would be worth it or not, given we are looking at moving to Xamarin.Android since WM10 is basically dead. Unfortunately, we have a large number of deployed devices we need to continue supporting for several years, so it's not simple either way.

I understand limited resources and other priorities, but for us personally it is a shame this didn't get resolved quicker. Despite the probability that a fix would do us no good, I think it would still be a good idea to resolve this in both .NET/UWP and Json.Net for other platforms. Thanks to everyone who has participated in trying to get this sorted.
System.Linq.Expressions currently offers multiple build-time definitions to build different flavors of the library:

* `FEATURE_COMPILE` (defined everywhere except iOS/tvOS/Catalyst and NativeAOT) - enables the Reflection.Emit-based expression tree compiler.
* `FEATURE_INTERPRET` (always defined and not actually possible to build without) - enables the expression interpreter.
* `FEATURE_DLG_INVOKE` (defined everywhere except NativeAOT, but we likely need to be able to run without it on iOS too because there are uninvestigated bugs around it ActiveIssue'd in #54970) - in the interpreter, use a delegate to invoke `CallInstructions` instead of `MethodInfo.Invoke`. The delegate might have to be Reflection.Emitted if it's not supportable with `Func`/`Action` (that's the uninvestigated iOS/tvOS/Catalyst bug).

For #61231, we need to be able to build a single System.Linq.Expressions library that can switch between these build-time configurations _at publish time_, since we don't want to build a separate S.L.Expressions library for NativeAOT. There are advantages in having this setup for non-NativeAOT scenarios too.

This pull request accomplishes that by mechanically changing the `#define`s into feature switches. The feature switch is placed in the last proposed location for #17973. I expect we'll want such an API to be public at some point; now that Xamarin and NativeAOT use this formerly .NET Native-only thing, the API request became relevant again.

This pull request is focused on the mechanical replacement of `#define`s with feature switches, and it's already a lot bigger than I'm comfortable with. There's some obvious "`!FEATURE_COMPILE` means this is .NET Native with everything that meant" that I did not touch, because this is meant to be a mechanical change. Some cleanup will be needed at some point. Right now this mostly just means we're running fewer tests than we could.

Validation:

* Verified that we're still running the same number of tests with CoreCLR as we previously were and they're all passing.
* Verified we're getting mostly the same size of the S.L.Expressions library on iOS (433 kB grew to 436 kB; the diffs are expected).
* Verified things work on the NativeAOT side as well.
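(For context, not from the PR itself: a .NET feature switch of the kind described here is usually just a static bool property backed by an `AppContext` switch that the trimmer or NativeAOT compiler can substitute with a constant at publish time. A generic, hypothetical sketch of that shape follows; the switch name is illustrative and not necessarily the one System.Linq.Expressions actually uses.)

```csharp
using System;

internal static class LinqExpressionsFeatureSwitches
{
    // Hypothetical switch name, for illustration only. At publish time the
    // trimmer/NativeAOT compiler can hard-code this property to true or false
    // (e.g. via an ILLink substitution), letting the unused code path be trimmed.
    public static bool CanCompileToIL =>
        AppContext.TryGetSwitch("System.Linq.Expressions.CanCompileToIL", out bool enabled)
            ? enabled
            : true; // default when the switch is not set
}
```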
Is this still an issue with the .NET 7 AOT stuff? Are compiled lambda expressions in a code generator still interpreted?
Yes, lambda expressions are still interpreted with .NET 7 Native AOT. Native AOT does not have a JIT, so this is by design. Libraries that want to run great on Native AOT should avoid lambda expressions.
Thanks!
Is the fallback interpreter really so slow that switching to reflection (which isn't the fastest either) would speed things up significantly in the majority of cases?
The expression tree interpreter uses reflection to invoke methods. That means an interpreted expression tree is always slower than plain reflection, all else being equal. The slowdown is proportional to the amount of interpreted code.
I just wanted to point out, as referenced in the Newtonsoft issue linked in the OP, that it's not just that 'compiled expressions' fall back to reflection, but that the internal reflection mechanisms in the runtime used by AOT UWP appear to be "worse" for performance than, say, the old Xamarin ones, due to less internal caching. So you don't get the perf benefit of 'compiled expressions', but you also get worse performance than .NET on desktop/Xamarin even when using reflection (an apples-to-apples comparison). Or at least that was my experience, and I was told the lack of internal caching of reflection types/data was to blame.
Thanks for clarifying this. I guess I have to do some tests myself, as I'm mainly interested in .NET 7 AOT and not so much in the Mono or Xamarin stacks. I'm not quite sure what to do when the results are unpleasant. Code generation is not a proper alternative in many situations.
I was expecting an exception when …
Is it correct that compiled lambda expressions on UWP aren't really compiled and are instead interpreted? If so, does that mean that on UWP and other AOT platforms like Xamarin/Unity with no support for IL emit, it is faster to use `MemberInfo` and `ConstructorInfo`?
Related: JamesNK/Newtonsoft.Json#968
If normal reflection is faster on AOT platforms, I would like to know whether there is a way at runtime to discover that IL emit isn't supported.
A way to discover this capability at runtime would allow libraries to use compiled lambda expressions for a performance boost on platforms where they are available, and to fall back to traditional reflection on platforms where they aren't, while still shipping a single netstandard assembly.
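(For reference: this thread predates it and it was not available on UWP at the time, but later .NET versions expose a closely related check, `System.Runtime.CompilerServices.RuntimeFeature.IsDynamicCodeSupported` and `IsDynamicCodeCompiled`, which report whether dynamically generated code can run and whether it is actually compiled rather than interpreted. A minimal sketch of using them; the `ExpressionStrategy` type is a hypothetical helper, not a library API.)

```csharp
using System.Runtime.CompilerServices;

static class ExpressionStrategy
{
    // True when Expression.Compile() can be expected to produce real compiled
    // code; false on interpreter-only environments such as Native AOT or iOS,
    // where Compile() falls back to the expression interpreter.
    public static bool PreferCompiledExpressions =>
        RuntimeFeature.IsDynamicCodeSupported && RuntimeFeature.IsDynamicCodeCompiled;
}
```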