Custom model endpoints #96
Comments
Can you provide some examples to show the use cases? I'll add in @Maximophone, who wrote a lot of the API and core engine that processes requests/responses in the runtime graph, for comment.
Yes, it would be good to understand what you need the additional endpoint for. Our wrappers provide a simplified interface to Seldon, but in your case you may want to write your own wrapper and implement our API directly. At the moment you can implement one of the six services defined in this file. They all come with a specific set of methods, but Generic is a service that groups all the possible methods together and allows you to be more flexible.
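For concreteness, the simplest of those contracts (the MODEL service) boils down to a plain Python class; a minimal sketch, assuming the wrapper's documented `predict(X, feature_names)` signature:

```python
import numpy as np

class MyModel:
    """Minimal MODEL service: the wrapper calls predict() on each request."""

    def __init__(self):
        # Load weights/artifacts here; this runs once at container start.
        pass

    def predict(self, X, feature_names):
        # X is a numpy array of features; return an array of predictions.
        # Identity model shown here as a placeholder.
        return np.asarray(X)
```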
Yeah, that might be the way to go. The use case right now is a MAB model (not a router) with endpoints for managing the arms, but I can see lots of cases where we might need custom APIs for our model's front end. So to clarify: we can pick one of those six services and implement it, similar to this. I was looking for a way to allow for that in the current build framework.
At the moment you can use the core-python-wrapper to create any of the first four (model, router, transformer and output transformer) by specifying the service-type argument. For example, to create a router you would use `docker run seldonio/core-python-wrapper:0.7 ... --service-type=ROUTER`. Your Python class then needs to implement a `route` method, which takes the features as its argument and returns the id of the child in the graph where the features will be sent next. It would be interesting for us to understand how your MAB differs from our implementation (see the epsilon-greedy example here and here).
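As a sketch of that contract, here is an epsilon-greedy arm chooser; only `route` is the wrapper-facing method described above, and the feedback plumbing is left as a plain helper since the exact send-feedback signature isn't spelled out here:

```python
import random

class EpsilonGreedyRouter:
    """Routes each request to one of n children (arms) of the router node."""

    def __init__(self, n_branches=2, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_branches
        self.rewards = [0.0] * n_branches

    def route(self, features, feature_names):
        # Explore a random arm with probability epsilon,
        # otherwise exploit the arm with the best mean reward so far.
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        means = [r / c if c else 0.0
                 for r, c in zip(self.rewards, self.counts)]
        return means.index(max(means))

    def update(self, arm, reward):
        # Feedback bookkeeping (wiring to the wrapper's send-feedback
        # endpoint not shown).
        self.counts[arm] += 1
        self.rewards[arm] += reward
```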
I see that I could wrap a router, but that still only provides two endpoints, `route` and `send-feedback`.
I think one way to do this would be for you to simply expose ports from your containers and create services that talk to them; you could implement your own APIs for those ports. For Python models you would need to see if you could share the Flask app we wrap around your model with new endpoints. With Ambassador you could expose those ports to the outside world. This, I think, could work right now. The only thing to be careful of is that the operator attaches ports named "http" and "grpc" to each container image, so you mustn't use these names for your new ports; see line 192 in 3ee8f63.
We have not tried the above, so ask me for more details, and we'd be very interested in an example if you get it working.
Right, so we would need to share the Flask app into the model class. That's why I was thinking of maybe some method like `extend_microservice`.
I am thinking the simplest way is to extend model_microservice.py with your custom methods. Something along these lines:

```python
from flask import Flask, jsonify
import numpy as np

# extract_message, sanity_check_request, rest_datadef_to_array,
# array_to_rest_datadef, get_class_names, predict and
# SeldonMicroserviceException are helpers from the wrapper's
# microservice module.

def get_rest_microservice(user_model, debug=False):
    app = Flask(__name__)

    @app.errorhandler(SeldonMicroserviceException)
    def handle_invalid_usage(error):
        response = jsonify(error.to_dict())
        print("ERROR:")
        print(error.to_dict())
        response.status_code = 400
        return response

    @app.route("/predict", methods=["GET", "POST"])
    def Predict():
        request = extract_message()
        sanity_check_request(request)
        datadef = request.get("data")
        features = rest_datadef_to_array(datadef)
        predictions = np.array(predict(user_model, features, datadef.get("names")))
        # TODO: check that predictions is 2 dimensional
        class_names = get_class_names(user_model, predictions.shape[1])
        data = array_to_rest_datadef(predictions, class_names, datadef)
        return jsonify({"data": data})

    @app.route("/custom-method", methods=["GET", "POST"])
    def CustomMethod():
        request = extract_message()
        # YOUR CODE HERE
        return jsonify({})  # return whatever your custom method produces

    return app
```

Is this what you meant? Another way is to write your own file custom_model_microservice.py, following the template of model_microservice.py, and reference it in microservice.py as a possible choice for service-type.
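A sketch of that second option's wiring in microservice.py, assuming it selects the service module from the service-type argument (the "CUSTOM" value and the custom module name are hypothetical):

```python
def create_app(service_type, user_model):
    # Pick the service module based on --service-type;
    # "CUSTOM" is an assumed new value routed to the user's own file.
    if service_type == "CUSTOM":
        from custom_model_microservice import get_rest_microservice
    else:
        from model_microservice import get_rest_microservice
    return get_rest_microservice(user_model)

# e.g. create_app("CUSTOM", MyModel()).run(host="0.0.0.0", port=5000)
```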
That is what I mean, but that would require modifying `model_microservice.py` in the wrapper itself. Then in the user model definition you could have:
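Roughly along these lines; `extend_microservice` is the hook name floated above, and the wrapper-side wiring is assumed rather than something that exists today:

```python
from flask import jsonify

class MABModel:
    def predict(self, X, feature_names):
        return X

# Module-level hook: a modified wrapper would call this with its Flask
# app so extra endpoints can be registered alongside /predict.
def extend_microservice(app):
    @app.route("/arms", methods=["GET"])
    def list_arms():
        # Hypothetical endpoint for inspecting/managing MAB arms
        return jsonify({"arms": []})
```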
I know that this can be done. UPDATE: I tried this; the code above is updated, but I get an error.
UPDATE 2: As I suspected, this just can't live inside the class, but it works as a module-level function in that file, alongside my model class.
Not sure if you guys would be interested in supporting this. As it's written, if the user doesn't define `extend_microservice`, nothing changes. Also, I've been testing locally by just running the Docker image; would this work in a full Seldon deployment?
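The wrapper-side check can be as small as this (a sketch; discovering the hook via the module that defines the user's model is just one way to do it):

```python
import inspect

def attach_custom_endpoints(app, user_model):
    # Call a module-level extend_microservice(app) if the module that
    # defines the user's model provides one; otherwise do nothing.
    module = inspect.getmodule(type(user_model))
    hook = getattr(module, "extend_microservice", None)
    if callable(hook):
        hook(app)
```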
I think the first consideration is whether we want custom methods to be exposed to the external world via seldon-core, adding extra functionality to the engine and other components so that custom API requests use the seldon /api endpoint and get routed through the running graph. My feeling is that we do not, as that would add a lot of complex functionality to allow custom/dynamic arbitrary API endpoints.

Allowing a user, on a per-container basis, to add extra endpoints which they expose to the outside world themselves, say by adding endpoints to Ambassador or their own custom ingress, might be valid.

However, a final point: for many use cases I think it's better for auditing and versioning that the user doesn't have the ability to modify their running graph without updating the version and doing a rolling update. So, personally, I would say any custom endpoints which change the model being run are not a good idea, though I understand this gets blurred with components like multi-armed bandits, which change over the course of a particular model's run.
I think that would fit our needs, even if we don't give direct control to the model (which my answer above did not). How would you see that fitting into this framework?
I think it would follow what you have done: extend the wrappers to allow a user to add custom endpoints (the Flask app for Python). Then the question is how the container is exposed to the outside world. The SeldonDeployment CRD could be extended so that individual model components have their APIs exposed via Ambassador if set per component. The model would then be available at: http://<Ambassador_IP>/<deployment_name>/<model_name>. So, a new setting in the graph definition: expose: true. Need to discuss with @Maximophone.
Custom endpoints are now available; further discussion on how best to allow external access to them is in #319.
Hello everyone,
There is no direct support for this. If you use the Python wrapper, the easiest approach would be to extend it yourself.
Hey @MattiaGallegati, there's some early support for this feature in MLServer. It's not a documented feature yet, as it has some caveats (e.g. multi-model serving is not supported), but you can see the current implementation of the MLflow runtime as an example. If you can have a look at the above, it would be great to get your thoughts!
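For a flavour of the API, based on how the MLflow runtime registers its /invocations route at the moment (signatures may shift while the feature is undocumented, so treat this as a sketch):

```python
from mlserver import MLModel
from mlserver.handlers import custom_handler
from starlette.requests import Request
from starlette.responses import Response

class MyRuntime(MLModel):
    async def load(self) -> bool:
        self.ready = True
        return self.ready

    # Registers an extra REST route alongside the standard V2 endpoints;
    # the path and handler body here are hypothetical.
    @custom_handler(rest_path="/my-endpoint")
    async def my_endpoint(self, request: Request) -> Response:
        body = await request.body()
        return Response(content=body, media_type="application/octet-stream")
```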
Hi @adriangonz, that could be interesting; I will give it a try. Are you planning to reconcile V2 and V1 features in some way in the near future?
Hey @MattiaGallegati, that's correct. The plan is that V2 completely supersedes the V1 protocol (and the V1 wrapper), so all the things currently missing in V2 are already part of the roadmap.
We have come across the use case of adding, for example, additional endpoints to a model besides `predict` and `send-feedback`. I'm still thinking of the best way to implement this into the framework. Perhaps within the model class we could have an `extend_microservice` method which gets called with `app` from inside `MODEL_microservice.py`, so that we could add our own Flask endpoints?