🚀 The feature
TorchServe supports streaming responses for both the HTTP and gRPC endpoints.
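On the gRPC side, this maps naturally onto a server-streaming RPC, where the worker emits each intermediate result as a separate message. A hypothetical client sketch, assuming stubs generated from a proto that declares such an RPC (the module, service, and message names below are illustrative, not a confirmed TorchServe API):

```python
import grpc

# Assumed to be generated from a proto declaring a server-streaming RPC, e.g.:
#   rpc StreamPredictions(PredictionsRequest) returns (stream PredictionResponse);
import inference_pb2
import inference_pb2_grpc

channel = grpc.insecure_channel("localhost:7070")
stub = inference_pb2_grpc.InferenceAPIsServiceStub(channel)

request = inference_pb2.PredictionsRequest(
    model_name="my_model",  # placeholder model name
    input={"data": b"example input"},
)

# A server-streaming call returns an iterator; each iteration yields one
# intermediate PredictionResponse as soon as the worker emits it, instead
# of blocking until the complete result is ready.
for response in stub.StreamPredictions(request):
    print(response.prediction)
```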
Motivation, pitch
Prediction latency is usually high (e.g., 5 s) for large-model inference. Some models can produce intermediate prediction results (e.g., generative AI). This feature would send each intermediate result to the user as soon as it is ready, so the user gradually receives the entire response: for example, the first intermediate response might arrive within 1 s, with the rest streaming in until the full result completes at 5 s. This improves the user's prediction experience.
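A minimal client-side sketch of how this could look over HTTP, assuming the endpoint flushes intermediate results as chunks of a single chunked-transfer response (the URL and model name are placeholders):

```python
import requests

# Placeholder endpoint and model name.
url = "http://localhost:8080/predictions/my_model"

# stream=True tells requests not to buffer the whole body; iter_content
# then yields each chunk as the server flushes it, so the first
# intermediate result is available well before the full response at ~5 s.
with requests.post(url, data="example input", stream=True) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None):
        print(chunk.decode("utf-8"), flush=True)  # handle each partial result
```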
Alternatives
No response
Additional context
No response
Is there a way to ping TorchServe with the request_id to check whether a request has completed? I don't have streaming inference, but I do have long-running requests (1 min+); it would be nice to have a way to check whether request processing has ended.