Fix missing resource version when updating the scale subresource of custom resource #80572
Conversation
Hi @knight42. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @apelisse
I don't think that we should be doing a GET before the request. Scale is an imperative command, and unless you specify the resource version it should just be executed as specified. This comment from #80515 in particular makes me think that there is something wrong:
I'd like to understand why that is the case; I suspect it may be a bug or behavior we don't understand in the CRD handler.
@apelisse That makes sense, I'll try to dig into it.
@apelisse I think I have found the root cause. When updating an object, the apiserver checks whether the object can be updated unconditionally. If not, the validation fails. And custom resources currently cannot be updated unconditionally. @apelisse Do you have a clue why a custom resource cannot do an unconditional update?
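The check described above can be sketched in miniature. This is not the real `k8s.io/apiserver` code; the names `crScaleStrategy` and `checkUpdate` are illustrative stand-ins for the strategy's `AllowUnconditionalUpdate` and the resourceVersion validation in the generic registry store:

```go
package main

import (
	"errors"
	"fmt"
)

// strategy models the one method of the update strategy that matters here.
type strategy interface {
	AllowUnconditionalUpdate() bool
}

// crScaleStrategy models the custom-resource scale strategy, which does
// not allow unconditional updates.
type crScaleStrategy struct{}

func (crScaleStrategy) AllowUnconditionalUpdate() bool { return false }

var errMissingRV = errors.New("resourceVersion must be specified for an update (unconditional update not allowed)")

// checkUpdate sketches the validation: an empty resourceVersion is only
// acceptable when the strategy allows unconditional updates.
func checkUpdate(s strategy, newRV string) error {
	if newRV == "" && !s.AllowUnconditionalUpdate() {
		return errMissingRV
	}
	return nil
}

func main() {
	// The scale handler sent the update with an empty resourceVersion,
	// so it hit exactly this rejection.
	fmt.Println(checkUpdate(crScaleStrategy{}, ""))
	fmt.Println(checkUpdate(crScaleStrategy{}, "42"))
}
```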
Isn't this object supposed to handle scale resources for CRs? https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go#L202
@apelisse Yeah it is, but the problem is still related to https://github.com/kubernetes/apiserver/blob/781c3cd1b3dc5b6f79c68ab0d16fe544600421ef/pkg/registry/generic/registry/store.go#L531-L537. Here is how it goes: the scale handler clears the resourceVersion, and since an unconditional update is not allowed, the generic store then rejects the write. So I guess the fix may simply be removing this line: kubernetes/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go Line 276 in c1d2ac4
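A sketch of the kind of change being proposed, assuming the offending line cleared the resourceVersion during `PrepareForUpdate`. The type `objectMeta` and the function `prepareScaleForUpdate` are illustrative, not the real apiextensions-apiserver code:

```go
package main

import "fmt"

// objectMeta is a tiny stand-in for metav1.ObjectMeta.
type objectMeta struct {
	ResourceVersion string
	Generation      int64
}

// prepareScaleForUpdate sketches the handler's update preparation.
// Before the fix, a line equivalent to `newMeta.ResourceVersion = ""`
// dropped the version carried by the incoming object, so the generic
// store later rejected the write. The fix is simply not to clear it.
func prepareScaleForUpdate(newMeta, oldMeta *objectMeta) {
	// A subresource update must not bump fields it does not own.
	newMeta.Generation = oldMeta.Generation
	// newMeta.ResourceVersion = "" // removed: keep the client's version
}

func main() {
	old := &objectMeta{ResourceVersion: "7", Generation: 3}
	updated := &objectMeta{ResourceVersion: "7", Generation: 4}
	prepareScaleForUpdate(updated, old)
	fmt.Println(updated.ResourceVersion, updated.Generation)
}
```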
@apelisse PTAL
/assign @lavalamp
one comment update, and one additional test, then this LGTM
@liggitt I found that the "retry on conflicts" mechanism may not be optimal. As shown in https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/80572/pull-kubernetes-bazel-test/1193803978668773382, in extreme cases, such as frequent concurrent patches, we may keep retrying until the timeout and then return the last error. So I decided to switch to patching the
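The starvation described above is a property of any bounded conflict-retry loop. A minimal sketch (the names `retryOnConflict` and `errConflict` are illustrative, modeled loosely on client-go's retry helper rather than taken from it):

```go
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// retryOnConflict sketches a bounded retry loop. Under sustained
// concurrent writes, every attempt can observe a stale resourceVersion,
// so the loop exhausts its budget and surfaces the last conflict error,
// which is the timeout-then-fail behavior seen in the linked CI run.
func retryOnConflict(attempts int, update func() error) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		lastErr = update()
		if lastErr == nil || !errors.Is(lastErr, errConflict) {
			return lastErr
		}
	}
	return lastErr
}

func main() {
	// A writer that always loses the race never succeeds.
	err := retryOnConflict(5, func() error { return errConflict })
	fmt.Println(err != nil)
}
```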
two comments on the test, then lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: knight42, liggitt. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/hold cancel
Any chance for this to be backported to 1.15/1.16? @liggitt
This is a more invasive change than is typically backported. Note that #81342 made it into 1.16 and modifies kubectl to use patch when scaling, which works with custom resources on 1.15/1.16 servers.
@liggitt I see, thanks!
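The patch-based scaling mentioned above sidesteps the resourceVersion problem entirely: a merge patch only states the desired replica count, so no read-modify-write of the version is needed. A hedged sketch of such a patch body (the helper `scalePatch` is illustrative, not kubectl's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// scalePatch builds a minimal JSON merge-patch body that sets only
// spec.replicas, leaving every other field (including the
// resourceVersion precondition) out of the request.
func scalePatch(replicas int32) ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{"replicas": replicas},
	})
}

func main() {
	b, err := scalePatch(3)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"spec":{"replicas":3}}
}
```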
What type of PR is this?
/kind bug
What this PR does / why we need it:
Do not clear the resource version of a custom resource when saving it to etcd.
Which issue(s) this PR fixes:
Fixes #80515
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: