Use kubectl for kube-proxy upgrader calls #5609
Conversation
Codecov Report
@@ Coverage Diff @@
## main #5609 +/- ##
==========================================
+ Coverage 73.22% 73.30% +0.08%
==========================================
Files 447 449 +2
Lines 37029 37244 +215
==========================================
+ Hits 27113 27301 +188
- Misses 8291 8311 +20
- Partials 1625 1632 +7
Force-pushed from 2713785 to ef77f76
@g-gaston: The following tests failed.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Force-pushed from ef77f76 to a8681e0
/hold while we run some manual tests
return c.kubectl.Get(ctx, resourceType, kubeconfig, list)
}

// Create saves the object obj in the Kubernetes cluster.
Suggested change:
- // Create saves the object obj in the Kubernetes cluster.
+ // Create creates the object obj in the Kubernetes cluster.
I would prefer to keep "save", since it avoids repeating the word already in the method name. Otherwise the description becomes a bit redundant.
return nil, err
}
return kubeadmControlPlane, nil
}

// CAPIKubeadmControlPlaneKey generates an ObjectKey for the CAPI Kubeadm control plane owned by
Shouldn't this method belong to the clusterapi package?
That's not a bad idea. Fine if I take a follow-up?
return apiObj.GetName()
}

func newNotFoundErrorForTypeAndName(resourceType, name string) error {
Does this work for x.y.apiversion?
Yeah. In fact, that's the case where it works best.
Force-pushed from 9ba81c7 to e4900c6
Before the kube-proxy upgrader (aws#5345), all API calls to the management cluster came either from kubectl or clusterctl, which both happened to be run in a docker container in the admin machine. This was the first piece of code that introduced the use of the Kubernetes Go client directly from the CLI binary. This means that if a user was relying on this internal implementation (explicit interface vs implicit interface), their system could break if it wasn't set up to give the CLI network connectivity to the kind cluster. This PR "reverts" the addition of that new paradigm by changing the underlying client implementation to use kubectl commands.
Force-pushed from e4900c6 to ffc3ec0
/unhold
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: g-gaston The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/cherry-pick release-0.15
@g-gaston: new pull request created: #5631 In response to this:
@g-gaston: new pull request could not be created: failed to create pull request against aws/eks-anywhere#release-0.15 from head eks-distro-pr-bot:cherry-pick-5609-to-release-0.15: status code 422 not one of [201], body: {"message":"Validation Failed","errors":[{"resource":"PullRequest","code":"custom","message":"A pull request already exists for eks-distro-pr-bot:cherry-pick-5609-to-release-0.15."}],"documentation_url":"https://docs.github.com/rest/reference/pulls#create-a-pull-request"} In response to this:
Description of changes
Before the kube-proxy upgrader (#5345), all API calls to the management cluster came either from kubectl or clusterctl, which both happened to be run in a docker container in the admin machine. This was the first piece of code that introduced the use of the Kubernetes Go client directly from the CLI binary. This means that if a user was relying on this internal implementation (explicit interface vs implicit interface), their system could break if it wasn't set up to give the CLI network connectivity to the kind cluster.
This PR "reverts" the addition of that new paradigm by changing the underlying client implementation to use kubectl commands.
Testing
Ran TestCloudStackKubernetes123ManagementClusterUpgradeFromLatest manually, upgrading from v0.14.6, to make sure the kube-proxy upgrader still works as expected. (in progress)

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.