Is there a way to call other methods rather than "forward"? #5209
@Tabrizian Is there a common ask for this feature request?
Hi @davidmartinrius, we do have a common ask (although it is not on the roadmap right now). @tanmayv25 should we add an enhancement label here? I think implementing this would mean changing how all the backends behave, right?
I think this feature request makes sense. @jbkyang-nvi can you file a ticket for this?
This wouldn't change how all backends behave. It would only be an option for the PyTorch backend to choose the function used for inference, similar to TF signature defs.
Thank you everyone for collaborating with my request 😄 I am sure this enhancement will help many other developers.
@davidmartinrius If you are interested in contributing, then please look at the code here: https://github.com/triton-inference-server/pytorch_backend/blob/main/src/libtorch.cc#L1325 See how the TensorFlow backend does this for signature defs here: https://github.com/triton-inference-server/tensorflow_backend/blob/main/src/tensorflow.cc#L1173
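One way a PyTorch-backend implementation of this could surface to users is through the model's `config.pbtxt`, the same place the TensorFlow backend reads its signature-def choice. The sketch below is purely hypothetical: the parameter key `INFERENCE_METHOD` does not exist in any backend today and is only an illustration of what such an option might look like:

```
parameters: {
  key: "INFERENCE_METHOD"  # hypothetical key, not an existing backend option
  value: { string_value: "update_embeddings" }
}
```

Fixing the method at model load time this way would be simple to implement, at the cost of requiring one model instance per method.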
…ton-inference-server/server#5209) Signed-off-by: Christian Bruckdorfer <12550267+iceychris@users.noreply.github.com>
…ton-inference-server/server#5209) Signed-off-by: Christian Bruckdorfer <christiansvde@freenet.de>
…ixes triton-inference-server/server#5209) (#127)" This reverts commit 7b63f0f.
…ixes triton-inference-server/server#5209) (#127)" (#128) This reverts commit 7b63f0f.
@tanmayv25 Could we re-open this, as the changes were reverted? I'm also interested in the TorchScript backend supporting method names as part of the inference request. It would be great if the same were supported for the TensorFlow backend.
@tanmayv25 This is less useful than accepting a 'method name' as a runtime parameter during inference. That way, we could call multiple methods on the same model instance, such as "forward", "update_embeddings", or "update_weights", rather than fixing the method at model load time. Any pointers on how this feature could be implemented in Triton? The current behavior is very limiting, since TF Serving and TorchScript both support this functionality.
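Inside the backend, dispatching on a request-supplied method name could boil down to an attribute lookup on the loaded ScriptModule. A minimal Python sketch of that idea, where the model, its `update_embeddings` method, and the `method_name` parameter are all made up for illustration:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0

    @torch.jit.export
    def update_embeddings(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1.0

scripted = torch.jit.script(Model())

def run(module, method_name: str, x: torch.Tensor) -> torch.Tensor:
    # Look up the requested method on the scripted module;
    # fall back to forward() if the name is unknown.
    fn = getattr(module, method_name, module.forward)
    return fn(x)

out = run(scripted, "update_embeddings", torch.ones(3))  # x + 1
```

A real backend change would also need to validate the name against the exported methods and surface a proper error, but the core dispatch is this small.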
Hello!
I am currently working with TorchScript. I exported a model's weights from .pth to .pt, and I also exported the model's methods with the @torch.jit.export decorator.
The thing is that I did not find any explanation in the documentation about how to call methods other than the "forward" method. I want to use forward and also call other methods on the model. I also don't want to split the model into multiple models just to use the forward method of each one.
If there is a way to do that, could you please tell me how?
In case it is not implemented yet, I assume you have a roadmap and maybe this is not a priority. But if you give me a hint on how to implement it, I could create a pull request.
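For context, the export setup described above looks roughly like this. It is a sketch (the model and the `scale` method name are made up); locally, both methods are callable on the loaded ScriptModule, and the open question in this issue is how to reach the non-forward ones through Triton:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0

    @torch.jit.export  # makes the method available on the ScriptModule
    def scale(self, x: torch.Tensor) -> torch.Tensor:
        return x * 10.0

# Script and save the model, as described above (.pt file)
scripted = torch.jit.script(MyModel())
scripted.save("model.pt")

# After loading, both forward and the exported method are callable
loaded = torch.jit.load("model.pt")
y = loaded(torch.ones(2))        # forward
z = loaded.scale(torch.ones(2))  # exported method
```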
By the way, I am currently working with Triton Server 22.08-py3
I have already read ticket #4513, but it did not offer a solution either.
Thank you,
David Martin Rius