I would also like to raise some previously reported issues related to the above.
Will there be an improvement in how TorchSharp consumes Torch models created in Python? Currently we need an additional Python script to create a TorchSharp-compatible model. Is there a plan to remove this extra step soon, perhaps as part of the ML.NET integration planning process?
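For readers who haven't hit this yet, the extra step currently looks roughly like the following. This is a minimal sketch, assuming the `exportsd.py` helper script from the TorchSharp repository is importable next to the training code; the model and file name are illustrative.

```python
# Sketch of the extra Python export step, assuming TorchSharp's exportsd.py
# helper script (from the TorchSharp repository) is on the import path.
import torch.nn as nn
import exportsd  # helper script copied from the TorchSharp repo

# Stand-in for whatever model was actually trained in Python.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

# Write the weights in TorchSharp's binary format; the C# side rebuilds
# an identical module structure and calls module.load("weights.dat").
with open("weights.dat", "wb") as f:
    exportsd.save_state_dict(model.to("cpu").state_dict(), f)
```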
There are challenges in dealing with ONNX, especially for NLP use cases (e.g. ref 1, ref 2). Is there a plan to consider incorporating ONNX save and load features in TorchSharp? Having ONNX save/load in TorchSharp would provide an alternative path to the existing option of importing ONNX into ML.NET.
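For comparison, the export half of that existing path is done in Python today. A minimal sketch, where the model, tensor shapes, and input/output names are all illustrative:

```python
# Illustrative export of a toy model to ONNX; the names and shapes here
# are assumptions for the sketch, not a prescribed convention.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

torch.onnx.export(
    model,
    torch.randn(1, 10),  # dummy input that fixes the graph's shapes
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```

The resulting `model.onnx` can then be consumed on the .NET side via ML.NET's `ApplyOnnxModel` transform (Microsoft.ML.OnnxTransformer), which is the existing import option mentioned above.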
cc @michaelgsharp @briacht
At the moment we're still trying to understand what this work would look like. Thanks for raising these concerns; we'll make sure to stay engaged in the discussion.
TorchSharp integration into ML.NET is currently being planned (supervised by @ericstj), as recently shared/discussed by @NiklasGustafsson.
I suggest that TorchSharp users provide feedback on the integration process here.
As part of the integration, the question of how ML.NET will consume exported Torch models is already part of the plan.