
Is there a way to call other methods rather than "forward"? #5209

Closed

davidmartinrius opened this issue Dec 30, 2022 · 7 comments

Labels
enhancement New feature or request

Comments

@davidmartinrius

davidmartinrius commented Dec 30, 2022

Hello!

I am currently working with TorchScript. I exported a model's weights from .pth to .pt, and I also exported the model's methods with the @torch.jit.export decorator.

The thing is that I did not find any explanation in the documentation about how to call methods other than forward(). I want to use forward() and also call other methods on the model, and I don't want to split the model into multiple models just so that each method becomes a separate forward().
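For illustration, here is a minimal sketch of the kind of model I mean (the class and method names are just examples):

```python
import torch
import torch.nn as nn


class MyModel(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

    @torch.jit.export
    def preprocess(self, x: torch.Tensor) -> torch.Tensor:
        # Exported alongside forward() when the module is scripted
        return x + 1


scripted = torch.jit.script(MyModel())
scripted.save("model.pt")

# Outside Triton, both methods are callable on the loaded module:
loaded = torch.jit.load("model.pt")
loaded.forward(torch.ones(2))
loaded.preprocess(torch.ones(2))  # this is the call I don't know how to reach through Triton
```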

Please, if there is a way to do that, could you tell me how?

In case it is not implemented yet, I assume you have a roadmap and that this may not be a priority. But if you give me a hint on how to implement it, I could create a pull request.

By the way, I am currently working with Triton Server 22.08-py3

I have already read ticket #4513, but I did not find a solution there either.

Thank you,

David Martin Rius

@krishung5
Contributor

@Tabrizian Is there a common ask for this feature request?

@jbkyang-nvi
Contributor

Hi @davidmartinrius, we do have a common ask for this (although it is not on the roadmap right now). @tanmayv25 should we add an enhancement label here?

I think implementing this would mean changing how all the backends behave, right?
Alternatively, have you tried using the Python backend instead? You can write your own execute function, which means you don't have to call forward().
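For example, a rough model.py sketch for the Python backend (assuming a scripted model file, an input named INPUT0, an output named OUTPUT0, and an exported method called preprocess; adjust these to your actual model and config):

```python
# model.py for the Triton Python backend (illustrative sketch only)
import torch
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Path handling and device placement simplified for illustration
        self.model = torch.jit.load("/models/my_model/1/model.pt")

    def execute(self, requests):
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            x = torch.from_numpy(in0.as_numpy())
            # Call any exported method here, not just forward()
            y = self.model.preprocess(x)
            out = pb_utils.Tensor("OUTPUT0", y.numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```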

@Tabrizian
Member

Tabrizian commented Feb 1, 2023

I think this feature request makes sense. @jbkyang-nvi can you file a ticket for this?

I think implementing this would mean changing how all the backends behave, right?

This wouldn't change how all backends behave. It would only be an option for the PyTorch backend to choose which function is used for inference, similar to TF signature defs.

Tabrizian added the enhancement (New feature or request) label Feb 1, 2023
@davidmartinrius
Author

Thank you everyone for helping with my request 😄

I am sure this enhancement will help many other developers.

@tanmayv25
Contributor

tanmayv25 commented Feb 3, 2023

@davidmartinrius If you are interested in contributing, then please look at the code here: https://github.com/triton-inference-server/pytorch_backend/blob/main/src/libtorch.cc#L1325
You might have to make this function name user-configurable via backend parameters.

See how the TensorFlow backend does this for signature defs here: https://github.com/triton-inference-server/tensorflow_backend/blob/main/src/tensorflow.cc#L1173
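If this were exposed as a backend parameter, the model's config.pbtxt might look something like the snippet below. The parameter key INFERENCE_METHOD is purely hypothetical for illustration; the actual key name would be decided as part of the change.

```
parameters: {
  key: "INFERENCE_METHOD"  # hypothetical key, for illustration only
  value: {
    string_value: "preprocess"
  }
}
```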

iceychris added a commit to iceychris/pytorch_backend that referenced this issue Jan 29, 2024
iceychris added a commit to iceychris/pytorch_backend that referenced this issue Jan 29, 2024
tanmayv25 added a commit to triton-inference-server/pytorch_backend that referenced this issue May 3, 2024
tanmayv25 added a commit to triton-inference-server/pytorch_backend that referenced this issue May 4, 2024
@asamadiya

asamadiya commented Jun 25, 2024

@tanmayv25 Could we re-open this, since the changes were reverted? I'm also interested in the TorchScript backend supporting method names as part of the inference request. It would be great if the same were supported for the TensorFlow backend.

@asamadiya

@tanmayv25 This is less useful than accepting the method name as a runtime parameter during inference. That way, we could call multiple methods such as "forward", "update_embeddings", and "update_weights" on the same model instance, instead of fixing the method at model load time. Any pointers on how this feature could be implemented in Triton? The current behavior is quite limiting, since TF Serving and TorchScript both support this functionality.
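For comparison, the client side could look roughly like this. Note that the "method_name" request parameter is hypothetical and does not exist in Triton today, and per-request parameters assume a tritonclient version whose infer() accepts a parameters argument:

```python
# Hypothetical client call: the "method_name" parameter is NOT supported by Triton today
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

inp = httpclient.InferInput("INPUT0", [1, 4], "FP32")
inp.set_data_from_numpy(np.ones((1, 4), dtype=np.float32))

result = client.infer(
    "my_model",
    inputs=[inp],
    # Assumes a client build whose infer() accepts per-request parameters
    parameters={"method_name": "update_embeddings"},
)
```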
