GRPC API for javascript models with Nodejs s2i wrapper #224
Conversation
The creation of the grpc components should be part of the nodejs wrapper so the user does not need to create them when wrapping.
Suggest you create a 0.2 of the nodejs wrapper that includes the gRPC JS components.
Are you suggesting we create a separate wrapper at 0.2? Also, the user doesn't need to create gRPC components, only a gRPC client to connect to the server created by the wrapper. I have tested it with the current version.
Sorry, my mistake. OK, so we should still eventually release this as 0.2, as we can't change 0.1 now that it's officially published. It would be great to update the mnist example to test using gRPC. You can look at https://github.com/SeldonIO/seldon-core/blob/master/examples/models/sklearn_iris/sklearn_iris.ipynb as an example that tests both REST and gRPC.
Thanks for the clarification @cliveseldon. I have tested and added the gRPC API test to both JavaScript model examples using a tagged 0.2-SNAPSHOT nodejs wrapper image. Please review.
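For reference, the request the gRPC test client sends can be sketched in plain JavaScript. This is a hedged sketch: `buildSeldonMessage` is a hypothetical helper, and the field names (`data`, `names`, `tensor`, `shape`, `values`) follow the `SeldonMessage` tensor layout in `prediction.proto`.

```javascript
// Hypothetical helper: build a SeldonMessage-shaped payload for the
// gRPC Predict call, following prediction.proto's tensor layout.
function buildSeldonMessage(names, rows) {
  const shape = [rows.length, rows[0].length];
  const values = [].concat(...rows); // flatten the batch row-major
  return { data: { names: names, tensor: { shape: shape, values: values } } };
}

// Example: a 2x2 batch of two features
const msg = buildSeldonMessage(["x1", "x2"], [[1, 2], [3, 4]]);
```

The same object shape is what a generated gRPC stub would serialize when calling the wrapper's Predict endpoint.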
@cliveseldon Also, can you point me to a model example in which the send_feedback API is implemented? I saw the Python implementation but need more understanding of how the feedback ties back to other Seldon components like the router and Seldon analytics. Any pointers to documentation would be appreciated.
A router example would be the e-greedy MAB. The general internal API is defined here. Yes, we do need docs to help people build routers. @jklaise
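Until router docs exist, the routing logic of an e-greedy multi-armed bandit can be sketched in a few lines. This is illustrative only: the function name and counter arrays are assumptions, not the actual Seldon router interface.

```javascript
// Hedged sketch of epsilon-greedy routing: with probability epsilon pick a
// random child branch (explore), otherwise pick the branch with the best
// observed success rate (exploit). `rand` is injectable for determinism.
function epsilonGreedyRoute(successCounts, totalCounts, epsilon, rand = Math.random) {
  if (rand() < epsilon) {
    return Math.floor(rand() * successCounts.length); // explore
  }
  let best = 0;
  let bestRate = -1;
  for (let i = 0; i < successCounts.length; i++) {
    const rate = totalCounts[i] > 0 ? successCounts[i] / totalCounts[i] : 0;
    if (rate > bestRate) {
      bestRate = rate;
      best = i;
    }
  }
  return best; // exploit
}
```

A router component would feed its send-feedback rewards back into `successCounts`/`totalCounts`, which is how the feedback cycle closes.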
kubernetes/minikube#2798 (comment) suggests a change of API for starting minikube
Update kubectl_demo_minikube_rbac.ipynb
Fixed a typo introduced by editing the ipynb source directly in the last commit.
Update kubectl_demo_minikube_rbac.ipynb
@cliveseldon Thanks for sharing the links. My doubt was about the send-feedback API implemented as part of the model microservice in the Python Flask app. It seems to be another function definition a model developer could provide in order to tap into the deployment's feedback cycle via a router, since it is not mentioned in the internal API doc for the model component. I think we can review the models' gRPC API implementation for now, and next I shall take up the send-feedback implementation for models as a separate request along with a router implementation. Let me know your thoughts.
I think the "model" wrapper should implement the sendFeedback method, as it is an optional method for a Model. We need documentation for this, but the core code is in seldon-core/engine/src/main/java/io/seldon/engine/predictors/PredictorConfigBean.java, lines 42 to 71 at 0d589a9.
So by default a model just implements transform-input, which is converted to a /predict call for MODELs, but it can optionally implement send-feedback if it wishes. This would be key to reinforcement learning, so if it's easy to add, I would. More importantly, a full wrapper needs to cover all Predictive Unit types, so the NodeJS code needs to handle Routers, Transformers, Combiners, etc. to be a fully fledged wrapper.
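A user model with the optional feedback hook could look roughly like this. The method names are assumptions mirroring the Python wrapper's predict/send_feedback convention, not a confirmed NodeJS wrapper contract.

```javascript
// Hedged sketch: a user-supplied model exposing the required predict hook
// plus the optional sendFeedback hook that a MODEL may implement.
class MyModel {
  constructor() {
    this.rewardTotal = 0; // accumulated reward from feedback calls
  }

  // Required: called for /predict; identity model for illustration.
  predict(X, featureNames) {
    return X;
  }

  // Optional: called with the original request/response plus a reward,
  // which is what makes reinforcement-learning-style updates possible.
  sendFeedback(request, response, reward, truth) {
    this.rewardTotal += reward;
  }
}
```

The wrapper would route /send-feedback calls to `sendFeedback` only when the user's model defines it.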
Agreed. As you have mentioned, the optional send_feedback endpoint would be key for reinforcement learning. I will create an implementation to accommodate it in the wrapper, keeping the Python implementation as an example. Further, I can work on covering all the other Predictive Unit types step by step. Also, is there a contract test available for the send_feedback API, similar to the one for the predict API?
Hi @cliveseldon, I started the implementation of the gRPC SendFeedback API for the nodejs model component, closely following the Python implementation. The Python server adds the predict gRPC handler as a Model service, but to accommodate the SendFeedback API it should have used the Seldon service instead (as mentioned in the prediction.proto file). Please clarify. Also, I have added the Transformer Predictive Unit functionality to the nodejs wrapper and included it in the wrapper test script, along with a transformer template in the test folder. Plus, the mnist example notebook is updated to test using gRPC. Please review the implementation.
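A TRANSFORMER template can be sketched as a single transform function applied before the downstream model. The function name mirrors the transform-input concept discussed above; the normalisation itself is an illustrative assumption.

```javascript
// Hedged sketch of a TRANSFORMER component's transform-input hook:
// rescale raw pixel values into [0, 1] before they reach the model,
// e.g. for the mnist example.
function transformInput(X, featureNames) {
  return X.map((row) => row.map((v) => v / 255.0));
}
```

The wrapper would expose this over both REST and gRPC exactly as it does the predict hook, just under the transform endpoint.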
Update epsilon-greedy example to Python 3
Looks good. Small thing: the notebooks should use 0.2-SNAPSHOT throughout to ensure everything still works.
Nvidia Inference Server and Tensorflow Serving Model Proxies
Fix for defaulting changing Custom Resource
This is done. The notebook examples are using the new 0.2-SNAPSHOT image now.
@cliveseldon Please let me know if any more changes are required. Also, you may review and close this pull request.
One thing I saw was that the Docker container running the model does not respond to SIGTERM signals. If possible, it would be good to change this so it shuts down on receiving a signal.
@cliveseldon I have made updates so that the server now shuts down on receiving a SIGTERM or SIGINT signal. This has been done for both MODEL and TRANSFORMER predictive units with REST and gRPC APIs.
Creating a pull request for the implementation of GRPC API in the Nodejs s2i wrapper for JavaScript models as per issue #216.
Updated the node js microservice and its requirements for the grpc api
Updated the test model-template-app environment for the API_TYPE
Created a grpc_client.js to test the model-template-app API