Update e2e tests to use k8s v1.22.1 #1588
Conversation
It looks like Windows deployments are failing consistently in CI. /cc @jsturtevant
Looks like the pod isn't starting up correctly. Kubelet reports running, but logs for kubelet are missing. I can take a look tomorrow but will likely need to spin it up locally to diagnose further.
The failure messages were in the log files (https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api-provider-azure/1588/pull-cluster-api-provider-azure-e2e-windows/1423694792830750720/artifacts/clusters/capz-e2e-jnjsf2-win-ha/kube-system/kube-flannel-ds-amd64-626xq/kube-flannel.log). Basically, flannel was in a crash loop backoff.
In 1.22 a lot of the APIs got upgraded. I found this and fixed it as part of #1388 with commit dcc9efa. Should I open a separate PR, or do you want to include it in this one?
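For context, this is the kind of manifest change involved: Kubernetes 1.22 removed several long-deprecated beta APIs (for example rbac.authorization.k8s.io/v1beta1), and older flannel manifests still referenced them, which can leave the DaemonSet crash-looping. The snippet below is only an illustrative sketch of that kind of API bump, not the actual diff from #1388 / dcc9efa:

```yaml
# Illustrative sketch only (not the actual fix): older flannel manifests declared
# their RBAC objects against rbac.authorization.k8s.io/v1beta1, which Kubernetes 1.22
# removed; without working RBAC the flannel pods cannot list nodes and crash-loop.
apiVersion: rbac.authorization.k8s.io/v1   # was: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
```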
/test pull-cluster-api-provider-azure-e2e
1 similar comment
/test pull-cluster-api-provider-azure-e2e
/retest
Investigating the failure; it seems the private cluster bastion host gets stuck or takes a long time and the test times out.
/test pull-cluster-api-provider-azure-e2e
3 similar comments
/test pull-cluster-api-provider-azure-e2e
/test pull-cluster-api-provider-azure-e2e
/test pull-cluster-api-provider-azure-e2e
I'll try the e2e again after the Calico PR has merged.
/test pull-cluster-api-provider-azure-e2e
@nader-ziada any chance we need newer CAPI changes to allow k8s 1.22 management clusters? We could try to update to CAPI v0.4.1 in case it helps.
It's one specific test that keeps failing, but I can't repro it locally.
@nader-ziada could you try 1.21.4 to see if the same issue repros? 1.21.4 also has the fix for kubernetes-sigs/cloud-provider-azure#706.
Force-pushed from 19ac7ee to ecf3eed.
Did that and rebased to get the update to CAPI v0.4.1 (which finally merged).
/test pull-cluster-api-provider-azure-e2e-full
@CecileRobertMichon all the tests passed in the last run except for the GPU one, which I don't think is because of this change, but I will try again now.
/test pull-cluster-api-provider-azure-e2e-full
/test pull-cluster-api-provider-azure-capi-e2e
test/e2e/config/azure-dev.yaml (outdated)

```diff
@@ -96,22 +96,22 @@ providers:
   targetName: "cluster-template-custom-vnet.yaml"

 variables:
-  KUBERNETES_VERSION: "${KUBERNETES_VERSION:-v1.21.2}"
+  KUBERNETES_VERSION: "${KUBERNETES_VERSION:-v1.22.0}"
```
1.22.1 is out now, should we rev all the 1.22.0 versions to 1.22.1?
Updated to 1.22.1.
@nader-ziada can you please update the PR title, release note, and commit message to reflect 1.22.1? Let's get this merged asap; a lot of other PRs are getting hit by LB flakes, which will hopefully be fixed by this PR.
Done with the updates for title, release notes, and commit message.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: CecileRobertMichon
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
```diff
   # using a different version for windows because of an issue on azure cloud provider
   # that only affects windows and external load balancer
   # https://github.com/kubernetes-sigs/cloud-provider-azure/issues/706
-  WINDOWS_KUBERNETES_VERSION: "${WINDOWS_KUBERNETES_VERSION:-v1.19.11}"
+  WINDOWS_KUBERNETES_VERSION: "${WINDOWS_KUBERNETES_VERSION:-v1.22.1}"
```
We can remove this now; let's follow up on that after this merges, though.
I'll do a follow-up PR after this is done.
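A sketch of what that follow-up might look like, assuming the now-stale workaround comment is simply dropped so the Windows pools track the same default as Linux (this is a guess at the cleanup, not the actual follow-up PR):

```yaml
variables:
  KUBERNETES_VERSION: "${KUBERNETES_VERSION:-v1.22.1}"
  # Hypothetical cleanup: the cloud-provider-azure #706 workaround comment is removed;
  # if a separate Windows variable is still needed by the templates, it just mirrors
  # the default above instead of pinning an older version.
  WINDOWS_KUBERNETES_VERSION: "${WINDOWS_KUBERNETES_VERSION:-v1.22.1}"
```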
@nader-ziada: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
VMSS flake :sigh: which is NOT an ELB flake, so that's good.
/retest
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in fixes #<issue_number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
TODOs:
Release note: