Lightweight and dynamic driving of P/Invoke #10759
I agree that a capability like this would be good to add. It is something we have been thinking about. There are two possible designs:
My observation is that the resolve event seems to be more flexible: one can implement the registration APIs on top of the resolve event, but one cannot implement the resolve event using the registration APIs. Also, the resolve event may have better performance characteristics because it is lazy. @migueldeicaza Do you have an opinion about registration API vs. resolve event?
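A hedged sketch of the two designs being compared. DllMap.Register is an invented name for illustration; ResolvingUnmanagedDll is the lazy-event shape that later shipped on AssemblyLoadContext in .NET Core 3.0:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Runtime.Loader;

static class Bootstrap
{
    static void ConfigureNativeResolution()
    {
        // Shape 1 (hypothetical): eager registration API.
        // "DllMap.Register" is invented here for illustration only.
        // DllMap.Register(typeof(Bootstrap).Assembly, "nativelib",
        //     Environment.Is64BitProcess ? "nativelib64" : "nativelib32");

        // Shape 2: lazy resolve event. Nothing runs until the first
        // P/Invoke against "nativelib" actually needs resolution.
        AssemblyLoadContext.Default.ResolvingUnmanagedDll += (assembly, name) =>
        {
            if (name == "nativelib")
                return NativeLibrary.Load(
                    Environment.Is64BitProcess ? "nativelib64" : "nativelib32");
            return IntPtr.Zero; // fall through to default probing
        };
    }
}
```

The lazy event is strictly more general: a registration table can be built on top of it by keeping a dictionary and consulting it in the handler.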
We are pulling together a design for this feature right now.
I am much more inclined to have an event mechanism. I have always loathed DB connection strings, and having a DSL for this kind of thing quickly gets out of control. An event API for loading avoids these issues, and users can perform any logic they desire without waiting for an update to the DSL.
If an event system is used similar to ResolveAssembly (ResolveDllImport, perhaps?), what is it going to return to the caller? ResolveAssembly expects a loaded and resolved assembly, but without some sort of wrapper around a library, are we limited to returning the path? If so, do we want to just hand off a path to the runtime, or would some ability to track the lifetime of what we pass back be useful? For example, if you know that your library is only going to be needed for a couple of calls at setup, you might want to allow that library to be unloaded. I'm not aware of any way to do that currently, and with a path it wouldn't be possible in future.
The event-resolving system is nice, but also too cumbersome for most uses. Perhaps we can provide a simple API on top of it?
+1 to @migueldeicaza's approach. Our current design enables eventing in the runtime, and we plan to provide a higher-level API for ease of use.
@migueldeicaza What part of an eventing mechanism is too cumbersome? Eventing is a well-known mechanism in C#, and as @Wraith2 pointed out the …

@Wraith2 The event signature itself would need to be iterated on, but I can foresee an approach that has the handler set the path and/or the name of the function to use. Given these arguments the runtime would respect those and just go with it.
I think the handler should return the native library handle (…).
If this kind of API is going to be added, are there any thoughts on adding a slightly more flexible API that allows the resolver to return the actual address of a function instead? For example, on Windows this could be accomplished by …
@jkotas The whole native library thing just doesn't seem to be the real problem, at least how it has been explained to me. The issue is library path discovery, which is really "find this library path using CLR lookup logic". That issue could be solved with a path API that offers up a path computed by the system and is similar in spirit to …

Overall I see little value in providing a native library loading API when all that users appear to be after is "find this like the CLR does". It is lower level and I appreciate that, but the path API does provide a solid v1, and IF we get a large contingent of users that want a …
@migueldeicaza As @jkotas said in his first reply, there are two ways to do it, and the declarative approach can be built on the eventing approach, so I assumed it would be done that way, providing both. My only problem with the extended declarative syntax is that I don't like magic interpreted strings; I'd rather that logic be split out in some way. Perhaps something like …

I thought that being able to unload a native library once it was no longer needed would be a desirable possibility, which isn't needed for this issue but could be developed in future. A NativeLibrary type which wraps a library, with a default implementation in the PAL for each supported platform, and which exposes the module handle as an IntPtr or similar, would be a step towards enabling this.
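A minimal sketch of that wrapper idea, expressed with the NativeLibrary Load/GetExport/Free methods that eventually shipped in .NET Core 3.0 (the wrapper type itself is hypothetical):

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper type: it exposes the module handle and makes
// explicit unloading possible via Dispose.
sealed class NativeModule : IDisposable
{
    public IntPtr Handle { get; private set; }

    public NativeModule(string path) => Handle = NativeLibrary.Load(path);

    // Look up an exported symbol in the loaded module.
    public IntPtr GetExport(string name) => NativeLibrary.GetExport(Handle, name);

    public void Dispose()
    {
        if (Handle != IntPtr.Zero)
        {
            NativeLibrary.Free(Handle); // lets the OS unload the module
            Handle = IntPtr.Zero;
        }
    }
}
```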
There are the simple cases that are just about path discovery, and then there are the more complex cases. I have seen the more complex cases a number of times. Here are a few examples:
We do have this API already as part of AssemblyLoadContext. We should consider how to reconcile what we have already with this design. |
@jkotas I do remember commenting on dotnet/coreclr#18628, and that issue should definitely be addressed. The probing issue could be addressed by the path API without much trouble. Fully agree with reconciling with …
Oh please, yes. Really the only problem that needs to be solved (IMO) is that the DllImport path is constant, and I want to create that string at runtime. Sometimes I want to select between a 32-bit and a 64-bit lib based on the current runtime. Other times, I want to use different library names on different OSes / distros. I don't care if you use callbacks or Miguel's suggestion, but please just let me generate that string at runtime.
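A sketch of the kind of runtime string construction being asked for here; all library names are placeholders:

```csharp
using System;
using System.Runtime.InteropServices;

static class LibraryName
{
    // Pick the native library name at runtime instead of hard-coding
    // it in the [DllImport] attribute. The names here are examples.
    public static string Pick()
    {
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
            return Environment.Is64BitProcess ? "mylib64.dll" : "mylib32.dll";
        if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
            return "libmylib.dylib";
        return "libmylib.so"; // Linux and other Unix-like systems
    }
}
```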
While I used dllmap with Mono in the past, I must say I haven't looked back since netcoreapp/net471 SDK-based projects added support for NuGet packages with native/managed dependencies per RID (runtimes/), as those supersede the functionality dllmap ever had to offer... What would be the motivation for using this over the current NuGet native package approach?
On Linux it is more appropriate and customary to use system-provided shared libraries. Resolving the correct one though often has to happen at runtime, and can’t be hard coded into a dllimport.
In Windows, I have a managed lib that selects either a 32-bit or 64-bit DLL at runtime. Currently I’m setting the PATH variable and putting them in separate folders - a hack that only works on Windows and isn’t solved by nuget.
But it IS solved by (newer) NuGet, all of it...
That is a compile-time solution which doesn't enable single-package, multiple-environment deployments. It's a solution to a different problem.
I am not sure I follow: you can use this with …

It even works with dotnet global tools, as they pack all RIDs and resolution happens at runtime (similar to an FDD publish). Are you referring to the ability of the end user to edit the XML to fiddle with DLL mapping in case an app is used on a new, not-yet-supported OS with a need for custom mapping? I'm having a little trouble seeing the use case for it under netcoreapp (unlike .NET Framework, where it could help quite a lot), but to each their own, I guess...
@damageboy Have you even read the first comment? If a native library has a different name in different Unix/Linux distributions, and that native library is part of the OS and can't be added to a NuGet package, how would your solution even work?
@damageboy Consider packaging scenarios. I create an app and compile a self-contained or native version for distribution. In this scenario the end user running the application isn't a developer and shouldn't be doing NuGet package restores. If we package all the various flavours of native dependency (native.dll, native.so, native.dylib) and then have the program use what we've discussed above to determine the correct one to load, then there is a single binary distribution with no complex setup to go wrong.
@wanton7 Yes, I did read it, and I'm glad you asked, because the answer to that is a resounding yes: not only does it help, it also supersedes the …

In the case of architecture-specific folders you can essentially employ a bait-and-switch tactic where your code is compiled against a certain assembly and you get a different managed assembly at …

But it doesn't just stop there... What happens if you need different …
@Wraith2 Unless I'm making a complete ass of myself, you are describing what … I've just packaged one of my own tools as a dotnet global tool inside our company, which has a complex dependency exactly like the one you've just described (on liblzma.{dll,so,dylib}), and that single packaged nupkg can be installed and run on Windows/Ubuntu right now (I haven't tested OSX personally, so I won't comment on it).
@wanton7 Also, as a side note: I know you didn't mean it that way, but still, this is NOT MY solution; it is a solution that Microsoft did a great job of implementing and shipping in production at least since .NET Core 2.0. What they've done with less stellar success is documenting / advocating / talking about it, which is why both you and @Wraith2 think that this cannot be done today without …
@damageboy Architecture is not OS. You don't understand this problem at all. Let's say you create a game for Steam on Linux using .NET Core. Tell me, how would your approach support all those different Linux distros from one install? This can't be done properly at build time; it just can't.
@damageboy Let me still try to break this down for you a little bit. The example you are describing with liblzma.{dll,so,dylib} is a situation where you, as the developer, are in control of the names of those libraries. The actual problem is very different: you can't include these native libraries in a NuGet package because they are part of the operating system and you can't control their names. Their names can differ across OS distros/versions, like liblzma_6.so, liblzma_7.so, and so on.
@wanton7 So I must be a magician, since I managed to support the exact use case you described with the current tools.

My internal compression nupkg (sorry, this is not public yet) achieves exactly this; here's the unzipped directory listing of its nupkg: …
Note the different versions of …

When applications consuming THIS nupkg are published with …

I personally use this for packaging dotnet global tools and doing SCD-styled deployments (…).
@damageboy Yes, you must be a magician, because having to build your own assembly for every supported OS sounds magical.
@wanton7 The reason I am writing all of this is that, if you scroll back to my first post, I actually ended it with:
Note the use of a question mark, which implies that I understand the context of where I am and that I'm actually trying to learn what this specific feature offers to netcore developers. I don't feel I've heard a compelling answer to my question, and I'm not trying to dissuade anyone, especially not the creator of Mono, out of anything; I am genuinely curious to find out the answer.
I think I see the disconnect. You're talking about making an OS-agnostic NuGet package that can be consumed by any OS-specific app and it'll work. We're talking about making an OS-agnostic app.
@jherby2k maybe that is the disconnect, but I am using this technique to publish OS agnostic apps in the form of:
It's true that the arch-specific NuGet approach doesn't work for a single assembly that is both the app and the assembly doing the P/Invoke... then again, if you separate the P/Invokes into their own assembly, everything does fall into place. Is this what …

Note that I do understand the magic that is …
@damageboy I explained and showed the scenario in the original issue. But some additional detail:
Again, by no means exhaustive; just a harsh reminder that there are workarounds available which are painful to use (again, as mentioned in the original issue, the gRPC library achieves this).
This scenario was addressed by NativeLibrary APIs added in .NET Core 3.0. |
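For readers landing here later, the shipped .NET Core 3.0 shape looks roughly like this; sqlite3 is used only as an example of a library whose file name differs per distro:

```csharp
using System;
using System.Reflection;
using System.Runtime.InteropServices;

static class Program
{
    [DllImport("sqlite3")] // logical name; the resolver maps it per platform
    static extern IntPtr sqlite3_libversion();

    static void Main()
    {
        NativeLibrary.SetDllImportResolver(typeof(Program).Assembly,
            (libraryName, assembly, searchPath) =>
            {
                // Map the logical name to a distro-specific file name.
                if (libraryName == "sqlite3" &&
                    RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
                    return NativeLibrary.Load("libsqlite3.so.0", assembly, searchPath);
                return IntPtr.Zero; // IntPtr.Zero => default resolution
            });
    }
}
```

Returning IntPtr.Zero from the resolver tells the runtime to fall back to its normal probing logic, so the resolver only has to handle the special cases.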
While Mono has <dllmap>, and there is a long-standing discussion going on in https://github.com/dotnet/coreclr/issues/930 and efforts like https://github.com/dotnet/corefx/issues/17135 to work around the limitations of P/Invoke, and even NativeLibrary was introduced, I feel that we could come up with a simple solution that leverages the existing P/Invoke capabilities, without the overhead and ugly machinery that comes from variations of GetDelegateForFunctionPointer and similar hacks that have been attempted to work around the limitations of P/Invoke.

The proposal is to add an API to inform the runtime how and where we want a particular file referenced in the DllImport attribute to be loaded from. Developers would then annotate their DllImport attributes with a custom name, and at startup, their own logic would determine which library to load. For example:
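The example code from the original post is missing in this copy of the thread; a hypothetical reconstruction of the proposed shape (Runtime.MapLibrary is an invented name, not a shipped API) might be:

```csharp
using System;
using System.Runtime.InteropServices;

static class TensorFlowNative
{
    [DllImport("tensorflow")] // custom logical name, resolved at startup
    public static extern IntPtr TF_Version();

    public static void Setup(bool useGpu)
    {
        // Hypothetical registration API; "Runtime.MapLibrary" does not exist.
        // At startup, app logic decides what "tensorflow" maps to.
        // Runtime.MapLibrary("tensorflow",
        //     useGpu ? "libtensorflow-gpu.so" : "libtensorflow-cpu.so");
    }
}
```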
Working around this today requires ugly hacks, from Gui.cs having doubled definitions, to Grpc generating proxies, entire class hierarchies, and tons of delegates to achieve the desired effect. And the result produces more junk than the current P/Invoke does.
Bonus points: we could make it so that the string passed to DllImport could have parameters, similar in spirit to, say, a ConnectionString in SQL, so we could provide defaults, or even simple inline switching that is evaluated at resolution time. For example:
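The example for this idea is also missing here; a hypothetical illustration of the parameterized-string DSL (no such syntax exists in any shipped runtime) could look like:

```csharp
using System;
using System.Runtime.InteropServices;

static class Example
{
    // Invented DSL, connection-string style: a default logical name plus
    // per-configuration overrides evaluated at resolution time.
    [DllImport("name=tensorflow;cpu=libtensorflow-cpu.so;gpu=libtensorflow-gpu.so")]
    static extern IntPtr TF_Version();
}
```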
While I can certainly add a bag of hacks to gui.cs (for forked, differently-named versions) and TensorFlowSharp (for CPU vs GPU, vs various SIMD operation builds), none of those libraries are particularly affected by the transition speed. But it would be a shame if we did not implement something for every other user that needs to cope with different bits of native code, but does not want to pay the performance price of the GetDelegateFrom... approach.