Allow specifying the model version as shown here: https://www.tensorflow.org/tfx/serving/api_rest#predict_api. From looking at the code, it doesn't currently seem possible: https://github.com/SeldonIO/seldon-core/blob/master/integrations/tfserving/TfServingProxy.py#L45
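For reference, this is what the versioned Predict call looks like against TensorFlow Serving's REST API directly, per the page linked above. A minimal sketch in Python, assuming a hypothetical server on localhost:8501, a model named `mymodel`, version 2, and a toy payload:

```python
import requests

# Hypothetical endpoint and model; adjust host, model name, version, and payload.
url = "http://localhost:8501/v1/models/mymodel/versions/2:predict"
payload = {"instances": [[1.0, 2.0, 3.0]]}

resp = requests.post(url, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # expected shape: {"predictions": [...]}
```

The ask is to be able to pass the `/versions/<N>` part of that path through the Seldon TFServing proxy.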
If you use Seldon 1.1.0 you can use the TensorFlow protocol directly: https://docs.seldon.io/projects/seldon-core/en/latest/graph/protocols.html#rest-and-grpc-tensorflow-protocol
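With `protocol: tensorflow` the deployment speaks the TensorFlow Serving API itself, so REST requests use the `/v1/models/<name>:predict` path. A rough sketch, assuming a hypothetical ingress at localhost:8003, namespace `seldon`, and a deployment/model named `mymodel` (the exact path layout and ingress prefix are described in the protocols doc linked above):

```python
import requests

# Hypothetical ingress address and names; the /seldon/<namespace>/<deployment>
# prefix depends on how your ingress is configured.
url = "http://localhost:8003/seldon/seldon/mymodel/v1/models/mymodel:predict"
payload = {"instances": [[1.0, 2.0, 3.0]]}

resp = requests.post(url, json=payload, timeout=10)
print(resp.status_code, resp.json())
```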
It's true we don't expose the version path in REST (see seldon-core/executor/api/rest/server.go, line 150 at commit 33d66ae).

In gRPC you should be able to use it at present.
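For the gRPC route, the TensorFlow Serving `PredictRequest` proto carries the version in `model_spec.version`, so something along these lines should work. A minimal sketch, assuming a hypothetical gRPC endpoint on localhost:5000, a model named `mymodel`, and an input tensor named `inputs` (the input name and any required metadata headers depend on your deployment):

```python
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Hypothetical address; point this at the deployment's gRPC endpoint.
channel = grpc.insecure_channel("localhost:5000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "mymodel"
request.model_spec.version.value = 2  # explicit model version
# "inputs" is a placeholder; use the input name from your model's signature.
request.inputs["inputs"].CopyFrom(tf.make_tensor_proto([[1.0, 2.0, 3.0]]))

response = stub.Predict(request, 10.0)  # 10 second timeout
print(response.outputs)
```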
Thanks, I will look into it.
Will close this assuming the TensorFlow protocol is the best solution, as suggested. Please reopen if this is still an issue.