Allow non-model specific predict for Tensorflow protocol #1684
Conversation
Just checking I've understood the change at https://github.com/SeldonIO/seldon-core/pull/1684/files#diff-a047655e38112373418dfdc7e8b71227R292 correctly. It means the model_name parameter will be optional? So if it's not set, you can still make a request to the graph-level endpoint by hitting /v1/models/:predict?
Yes, the model_name is needed only for the Seldon protocol, as it has to run the TensorFlow proxy server, which needs to know the model_name. For the Tensorflow protocol, the name of the graph component will need to match the loaded model, as that name will be used.
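To make the routing behaviour concrete, here is a minimal sketch of how a client might build the predict URL for the Tensorflow protocol after this change. The helper name and host are hypothetical, not part of the Seldon Core API; the endpoint shapes follow the TensorFlow Serving REST convention discussed in this thread.

```python
from typing import Optional


def tf_predict_url(host: str, model_name: Optional[str] = None) -> str:
    """Build a Tensorflow-protocol predict URL (hypothetical helper).

    With model_name set, the request targets that specific model. With it
    omitted, the request hits the graph-level endpoint /v1/models/:predict,
    and the name of the graph component must match the loaded model.
    """
    if model_name:
        return f"{host}/v1/models/{model_name}:predict"
    # model_name optional after this PR: fall back to the graph-level endpoint
    return f"{host}/v1/models/:predict"


print(tf_predict_url("http://localhost:8000"))
print(tf_predict_url("http://localhost:8000", "mymodel"))
```

So a deployment using the Tensorflow protocol can be queried without naming the model, as long as the graph component name matches the model that was loaded.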
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ryandawsonuk. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
failed to trigger Pull Request pipeline
Fixes #1611
/v1/models/:predict