
TorchServe grpc server side streaming #2180

Closed
lxning opened this issue Mar 16, 2023 · 0 comments

lxning (Collaborator) commented Mar 16, 2023

🚀 The feature

TorchServe gRPC server-side streaming support (a handler-side sketch follows the list):

  • the backend worker continuously sends intermediate prediction responses to the frontend
  • the frontend gRPC endpoint continuously streams the intermediate prediction responses from the backend to the client

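A minimal sketch of how a backend handler could emit intermediate responses, assuming a streaming helper such as send_intermediate_predict_response is exposed to custom handlers (the helper name, signature, and import path are assumptions for illustration, not the final TorchServe API):

```python
# Hypothetical custom handler: the streaming helper and its import path are
# assumptions for illustration.
from ts.protocol.otf_message_handler import send_intermediate_predict_response


def handle(data, context):
    """Generate a long response in chunks and push each chunk to the frontend."""
    chunks = ["first ", "intermediate ", "tokens ", "final result"]
    for chunk in chunks[:-1]:
        # Push an intermediate prediction response as soon as it is ready;
        # the frontend relays it to the gRPC client without waiting for the rest.
        send_intermediate_predict_response(
            [chunk], context.request_ids, "Intermediate Prediction success", 200, context
        )
    # The return value is sent as the final message of the stream.
    return [chunks[-1]]
```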
Motivation, pitch

Prediction latency for large-model inference is usually high (e.g. 5 seconds). Some models can generate intermediate prediction results (e.g. generative AI). This feature sends intermediate prediction results to the user as soon as they are ready, so the user gradually receives the entire response. For example, the user may get the first intermediate response within 1 second and incrementally receive the complete result by 5 seconds. This feature improves the user's prediction experience; a client-side sketch follows.
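On the client side, a server-side streaming RPC would let the caller consume partial results as they arrive instead of blocking for the full latency. A sketch, assuming a StreamPredictions RPC is added to the existing inference proto (the RPC name, response field, stub modules, and port are assumptions here):

```python
import grpc

# inference_pb2 / inference_pb2_grpc would be generated from TorchServe's
# inference proto; StreamPredictions is an assumed server-streaming RPC.
import inference_pb2
import inference_pb2_grpc


def stream_prediction(model_name: str, payload: bytes) -> str:
    channel = grpc.insecure_channel("localhost:7070")
    stub = inference_pb2_grpc.InferenceAPIsServiceStub(channel)
    request = inference_pb2.PredictionsRequest(
        model_name=model_name, input={"data": payload}
    )
    result = ""
    # Each response arrives as soon as the backend produces it, so the first
    # chunk can be shown to the user well before the full prediction finishes.
    for response in stub.StreamPredictions(request):
        result += response.prediction.decode("utf-8")
    return result
```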

Alternatives

No response

Additional context

No response

lxning self-assigned this Mar 16, 2023
lxning added the enhancement (New feature or request) label Mar 16, 2023
lxning added this to the v0.8.0 milestone Mar 16, 2023
lxning mentioned this issue Mar 20, 2023
lxning closed this as completed Apr 5, 2023