Adding the capability of writing custom endpoints #4093
I have an identical use case to solve. How have you solved this in your custom server?
The simplest way is probably to extend the code we provide to add the endpoints you want. As @MattiaGallegati suggested, there is also undocumented code in MLServer for custom endpoints that could be investigated.
@MattiaGallegati Thank you for your feedback. I basically extended the microservice.py file and added an extra endpoint for my work; however, that required copying my patch of Seldon Core into the containers. The MLServer one looks interesting, I'll check that. Thanks for suggesting the best practice. This is actually research work, and right now I'm mostly concerned with getting it up and running before applying best practices, but I will definitely take note of your comment for our later stages. If you are interested, a related paper that uses the exact same on-the-fly model switching is "Model-Switching: Dealing with Fluctuating Workloads in Machine-Learning-as-a-Service Systems". @cliveseldon Thank you! I'll try MLServer, and I think I am already using the first solution you mentioned.
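The on-the-fly model switching discussed above can be sketched as a small in-memory registry that dispatches `predict` to whichever model is currently active. Everything here is illustrative (the `ModelRegistry` class and model names are hypothetical, not part of Seldon Core or MLServer):

```python
# Hypothetical sketch of on-the-fly model switching inside a custom server.
# `ModelRegistry` and the model names are illustrative only.

class ModelRegistry:
    """Holds several pre-loaded models and tracks which one is active."""

    def __init__(self, models):
        self._models = dict(models)          # name -> loaded model (a callable here)
        self._active = next(iter(models))    # default to the first registered model

    def switch(self, name):
        # The operation a custom /switchmodel endpoint would trigger.
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        self._active = name

    def predict(self, x):
        # Dispatch to whichever model is currently active.
        return self._models[self._active](x)


registry = ModelRegistry({
    "small": lambda x: x * 2,   # stand-in for a lightweight model
    "large": lambda x: x * 10,  # stand-in for a heavier, more accurate model
})

print(registry.predict(3))   # 6  (uses "small")
registry.switch("large")
print(registry.predict(3))   # 30 (uses "large")
```

Pre-loading all candidate models keeps the switch itself cheap, which matches the fluctuating-workload scenario in the paper cited above.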
At present we support only the v2 protocol.
Hello Seldon team,

As part of a special use case I need to access my custom server from an endpoint. In other words, I want to add some custom logic to my custom Seldon server, other than the existing logic, that could be triggered by an endpoint like the existing ones (`POST /api/v1.0/predictions` or `POST /api/v1.0/feedback`). E.g. I want to change the loaded model from a custom endpoint like `POST /api/v1.0/switchmodel` (just as an example use case). So far I have come up with a hacky way to do that via some hardcoding in my custom server. But I think it would be nice to add the capability of declaring custom endpoints in a predefined format in the CRDs, so that Seldon Core automatically generates the endpoints for the new functions defined in the custom server. I think there might be other cases in production systems where Seldon Core users need to add their own functions to the server.

I had a quick look at microservice.py, which is called at the entry point of custom servers, and then at the imported wrapper.py file, and it seems the routes for both the KServe and Seldon protocols are defined in the wrapper file. I think the feature I mentioned should be some way to make Seldon capable of adding these routes dynamically (within a standard format) to the wrapper file from the CRDs, with the user defining the function in the custom Python SDK. E.g.

and the `change_mult` function would be accessible via `POST /api/v1.0/change_mult`.
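The example referred to above (the code itself did not survive in this copy of the issue) might look roughly like the following sketch. Note that `MultiplierModel`, `change_mult`, and any automatic route generation are hypothetical; the Seldon python wrapper does not currently generate such a route, which is exactly what this issue proposes:

```python
# Hypothetical sketch of the kind of model class the proposal describes.
# `change_mult` is the user-defined function that the proposed CRD-driven
# mechanism would expose as POST /api/v1.0/change_mult.

class MultiplierModel:
    def __init__(self, factor=1.0):
        self.factor = factor

    def predict(self, X, features_names=None):
        # Seldon-style predict signature: multiply inputs by the current factor.
        return [x * self.factor for x in X]

    def change_mult(self, factor):
        # The custom logic the issue wants reachable on its own endpoint.
        self.factor = factor
        return {"factor": self.factor}


model = MultiplierModel(factor=2.0)
print(model.predict([1, 2, 3]))   # [2.0, 4.0, 6.0]
model.change_mult(5.0)            # would be triggered by POST /api/v1.0/change_mult
print(model.predict([1, 2, 3]))   # [5.0, 10.0, 15.0]
```

Under the proposal, the wrapper would discover `change_mult` (declared in the CRD) and register the route for it, instead of the user patching wrapper.py by hand.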