
Address MLServer flakiness in CI tests #3754

Merged 5 commits into SeldonIO:master on Dec 3, 2021

Conversation

adriangonz (Contributor):

What this PR does / why we need it:

Ensure that the readiness and liveness probes of the MLServer containers are always set as HTTP probes querying the health endpoints of the V2 protocol. Previously, these were being overridden to a plain TCP ping. This should hopefully address the flakiness that has been appearing recently in the tests leveraging MLServer containers.
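As a minimal sketch of the change described above: the probes move from a plain TCP ping (which only checks that the port accepts connections) to HTTP GETs against the V2 protocol health endpoints. The paths `/v2/health/live` and `/v2/health/ready` come from the V2 inference protocol; the port and timing values below are illustrative assumptions, not taken from this PR's diff.

```python
# Hedged sketch: build Kubernetes-style HTTP liveness/readiness probe
# specs for an MLServer container, querying the V2 protocol health
# endpoints. Port and timing values are assumptions for illustration.

def v2_http_probe(path: str, port: int = 9000) -> dict:
    """Return a probe spec that issues an HTTP GET against `path`."""
    return {
        "httpGet": {"path": path, "port": port},
        "initialDelaySeconds": 20,
        "periodSeconds": 5,
        "failureThreshold": 3,
    }

liveness = v2_http_probe("/v2/health/live")
readiness = v2_http_probe("/v2/health/ready")

# The previous behaviour amounted to a TCP probe like
# {"tcpSocket": {"port": 9000}}, which passes as soon as the port
# accepts connections; the HTTP probes only pass once the server
# reports itself live/ready at the application level.
```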

Which issue(s) this PR fixes:

Fixes #3639

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE

adriangonz (Contributor Author):

/test integration

seldondev (Collaborator):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by:
To complete the pull request process, please assign adriangonz
You can assign the PR to them by writing /assign @adriangonz in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

axsaucedo (Contributor):

It seems both MLflow v1 and MLflow v2 tests failed.
/test integration

adriangonz (Contributor Author):

/test integration

1 similar comment
adriangonz (Contributor Author):

/test integration

adriangonz (Contributor Author):

/test integration

adriangonz (Contributor Author):

/test integration

adriangonz (Contributor Author):

The integration errors now seem to be coming from the rolling-updates tests, which means the MLflow and V2 tests haven't failed in this run!

adriangonz (Contributor Author):

/test integration

adriangonz (Contributor Author):

/retest

1 similar comment
adriangonz (Contributor Author):

/retest

seldondev (Collaborator):

@adriangonz: The following test failed, say /retest to rerun them all:

Test name: integration
Commit: 7f06dc2
Rerun command: /test integration

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository. I understand the commands that are listed here.

adriangonz (Contributor Author):

@axsaucedo it seems that the integration errors are now mainly coming from the rolling-updates tests. I haven't seen any MLflow / V2 errors in the last 4 runs, so I think this one is ready to go.

@adriangonz adriangonz requested a review from axsaucedo December 1, 2021 09:29
@adriangonz adriangonz marked this pull request as ready for review December 1, 2021 09:29
@axsaucedo axsaucedo merged commit bd0d4fe into SeldonIO:master Dec 3, 2021
stephen37 pushed a commit to stephen37/seldon-core that referenced this pull request Dec 21, 2021
* Update MLServer probes to match the executor's

* Ensure liveness and readiness probes always use the v2 protocol paths

* Retry for 503 and 504 as well

* Remove Conda cache before using Conda again

* Dummy change to trigger re-build of pre-packaged servers
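The "Retry for 503 and 504 as well" commit above can be sketched roughly as follows. The status-code set mirrors the commit message; the helper name, signature, and retry parameters are hypothetical illustrations, not the PR's actual code.

```python
# Hedged sketch of retrying on transient gateway errors (503, 504):
# a hypothetical helper that re-invokes a request callable while the
# server keeps returning a retriable status code.
import time

RETRIABLE_STATUSES = {503, 504}

def with_retries(send, max_attempts=5, backoff_s=0.0):
    """Call send() until it returns a non-retriable status code,
    or until max_attempts is exhausted; return the last status."""
    status = None
    for _ in range(max_attempts):
        status = send()
        if status not in RETRIABLE_STATUSES:
            return status
        time.sleep(backoff_s)
    return status

# Example: a fake endpoint that returns 503, then 504, then 200.
responses = iter([503, 504, 200])
assert with_retries(lambda: next(responses)) == 200
```

Retrying on 503/504 specifically targets the window during a rollout when a container is up but its backend is not yet serving, which matches the flakiness this PR addresses.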
Successfully merging this pull request may close these issues.

Validate and address recent apparent flakiness in v2 server tests