Address MLServer flakiness in CI tests #3754
Conversation
/test integration
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
It seems both the MLflow v1 and MLflow v2 tests failed.
/test integration
/test integration
/test integration
/test integration
The integration errors seem to be coming now from the rolling updates tests, which means the MLflow and V2 tests haven't failed in this run!
/test integration
/retest
/retest
@adriangonz: The following test failed, say /retest to rerun it.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository. I understand the commands that are listed here.
@axsaucedo it seems that the integration errors are now mainly coming from the rolling updates tests. Haven't seen any MLflow / V2 errors on the last 4 runs, so I think this one is ready to go.
* Update MLServer probes to match the executor's
* Ensure liveness and readiness probes always use the v2 protocol paths
* Retry for 503 and 504 as well
* Remove Conda cache before using Conda again
* Dummy change to trigger re-build of pre-packaged servers
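The "retry for 503 and 504" item can be sketched as follows. This is a minimal illustration of the retry pattern, not the PR's actual implementation; `retry_request` and its parameters are hypothetical names:

```python
import time

# 503 (Service Unavailable) and 504 (Gateway Timeout) are transient:
# the server may simply still be starting up, so they are worth retrying.
RETRIABLE_STATUSES = {503, 504}

def retry_request(send, max_attempts=4, backoff=0.0):
    """Call `send()` until it returns a non-retriable status code,
    or until `max_attempts` calls have been made.

    `send` is a zero-argument callable returning an HTTP status code;
    `backoff` adds a linearly growing delay between attempts.
    """
    for attempt in range(1, max_attempts + 1):
        status = send()
        if status not in RETRIABLE_STATUSES:
            return status
        if attempt < max_attempts:
            time.sleep(backoff * attempt)
    # All attempts exhausted; surface the last retriable status.
    return status
```

For example, a request that first hits a 503 and a 504 while the container warms up, then succeeds, would return 200 after three attempts.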
What this PR does / why we need it:
Ensures that the readiness and liveness probes of the MLServer containers are always set as HTTP probes querying the health endpoints of the V2 protocol. Previously, this was being overridden to a plain TCP ping. Hopefully, this should address the flakiness that has been appearing recently in the tests leveraging MLServer containers.
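As a sketch, a container configured this way might carry probes like the following Kubernetes snippet. The `/v2/health/live` and `/v2/health/ready` paths are the V2 inference protocol's health endpoints; the port and container name shown here are illustrative, not taken from this PR:

```yaml
# Hypothetical container spec fragment: HTTP probes against the
# V2 protocol health endpoints, instead of a plain TCP ping.
containers:
  - name: mlserver            # illustrative name
    livenessProbe:
      httpGet:
        path: /v2/health/live
        port: 9000            # illustrative port
      initialDelaySeconds: 20
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /v2/health/ready
        port: 9000
      initialDelaySeconds: 20
      periodSeconds: 5
```

An HTTP probe only passes once the server actually reports itself live/ready, whereas a TCP probe succeeds as soon as the port accepts connections, even if the model is not yet loaded.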
Which issue(s) this PR fixes:
Fixes #3639
Special notes for your reviewer:
Does this PR introduce a user-facing change?: