Support object count quota for custom resources #64201
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: nikhita. Needs approval from an approver in each of these files.
In action with kubectl:

# create the object count resource quota
$ kubectl create quota test --hard=count/crontabs.example.com=2
resourcequota/test created

# the usage will not be reflected until the CRD is created
$ kubectl get quota test -oyaml
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: 2018-05-23T13:39:25Z
  name: test
  namespace: default
  resourceVersion: "319"
  selfLink: /api/v1/namespaces/default/resourcequotas/test
  uid: b5826694-5e8e-11e8-8ec1-54e1ad6c2d05
spec:
  hard:
    count/crontabs.example.com: "2"
status:
  hard:
    count/crontabs.example.com: "2"

# create the CRD and wait for some time (the CRD should become available via discovery, and the quota controller needs to sync)
$ kubectl create -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/crontabs.example.com created

# the quota now contains the usage
$ kubectl get quota test -oyaml
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: 2018-05-23T13:39:37Z
  name: test
  namespace: default
  resourceVersion: "651"
  selfLink: /api/v1/namespaces/default/resourcequotas/test
  uid: bc8d1a09-5e8e-11e8-8ec1-54e1ad6c2d05
spec:
  hard:
    count/crontabs.example.com: "2"
status:
  hard:
    count/crontabs.example.com: "2"
  used:
    count/crontabs.example.com: "0"

# create custom resources
$ kubectl create -f cr1.yaml
crontab.example.com/my-new-cron-object-1 created
$ kubectl create -f cr2.yaml
crontab.example.com/my-new-cron-object-2 created
$ kubectl create -f cr3.yaml
Error from server (Forbidden): error when creating "cr3.yaml": crontabs.example.com "my-new-cron-object-3" is forbidden: exceeded quota: test, requested: count/crontabs.example.com=1, used: count/crontabs.example.com=2, limited: count/crontabs.example.com=2
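The crd.yaml and cr1.yaml–cr3.yaml files are not shown in the session above. A sketch of what they might contain, based on the standard CronTab example from the Kubernetes docs (all fields here are assumptions, not taken from this PR):

```yaml
# crd.yaml (sketch; apiextensions.k8s.io/v1beta1 was current at the time of this PR)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced   # must be namespaced to be counted by the quota controller
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
---
# cr1.yaml (sketch) — one of the custom resources being counted
apiVersion: example.com/v1
kind: CronTab
metadata:
  name: my-new-cron-object-1
spec:
  cronSpec: "* * * * */5"
```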
/sig api-machinery
(force-pushed from 0756a8e to 3b592a8)
The resource quota controller now uses a dynamic client and a REST mapper to understand custom resources created via CRDs and aggregated apiservers. It now uses shared informers: the existing shared informer for known resources, and a new shared informer created for custom resources. To calculate the usage in the quota, it needs to create a new object count evaluator for the custom resource. An evaluator contains a lister, so a new lister is created (by creating an indexer) for the custom resource; this lister is then used to create the evaluator. Please note that the resource quota controller syncs every 30 seconds, so it might take some time for the quota status to be updated after the CRD is created. Until the quota status is updated and synced, we cannot create any custom resources.
// like those created via CRDs or aggregated apiservers.
DynamicClient dynamic.Interface
// RESTMapper can reset itself from discovery.
RESTMapper resettableRESTMapper
Is the underlying impl for this threadsafe? We use a generally dynamic restmapper in the controller context, but I don't know if it is designed to tolerate concurrent resets.
Is the underlying impl for this threadsafe?
Yes.
kubernetes/staging/src/k8s.io/client-go/restmapper/discovery.go, lines 210 to 220 at 28f171b:
// Reset resets the internally cached Discovery information and will
// cause the next mapping request to re-discover.
func (d *DeferredDiscoveryRESTMapper) Reset() {
	glog.V(5).Info("Invalidating discovery information")
	d.initMu.Lock()
	defer d.initMu.Unlock()
	d.cl.Invalidate()
	d.delegate = nil
}
replenishmentFunc: rq.replenishQuota,
registry: rq.registry,
informersStarted: options.InformersStarted,
ignoredResources: options.IgnoredResourcesFunc(),
This becomes interesting.
// TODO we either find a way to discover this or find a way to provide it via config
Done
Created #64310 to allow it to be provided via config (same as GC)
// Resetting the REST mapper will also invalidate the underlying discovery
// client. This is a leaky abstraction and assumes behavior about the REST
// mapper, but we'll deal with it for now.
rq.restMapper.Reset()
I thought we passed a restmapper that was already being periodically refreshed. Why do we need to punch this out?
I think we do, outside of the consuming controllers.
"already being periodically refreshed"

The Sync function is supposed to periodically refresh the restmapper with new discovery information. The reset occurs within this Sync function.
newResources, err := GetQuotableResources(discoveryFunc)
if err != nil {
	utilruntime.HandleError(err)
newResources := GetQuotableResources(discoveryClient)
Yeah, it seems like you really want to flush your discovery cache and it doesn't have to do with the restmapper
}

// ServeHTTP logs the action that occurred and always returns the associated status code
func (f *fakeActionHandler) ServeHTTP(response http.ResponseWriter, request *http.Request) {
this is a lot of mock I didn't expect to see.
the listWatch test doesn't add anything here. That certainly can go, removing some of the mock code.
Removed the listWatch test, along with some others; they weren't really adding much. This also removed most of the mock code. Please see this commit: 9247d03. If removing those tests seems fine, I'll squash.
cc @yliaog
cc @mbohlool
}
}

type fakeServerResources struct {
fakeDiscoveryClient
removed the mock code from the quota test but fixed this in the GC test
// TestQuotaControllerSync ensures that a discovery client error
// will not cause the quota controller to block infinitely.
nit: no newline
@nikhita: The following tests failed.
For the record: we are not going to target this for 1.11. Since this ends up creating a new informer and lister for custom resources, we are increasing the number of times we cache resources (GC does the same as well). This is solvable if we have something like dynamic informers and listers. We will first focus on creating them and then refocus on the quota and GC controllers. See #64310 (comment) and #64319 for more details.
@nikhita: PR needs rebase.
/cc @yliaog
I'm going to close this PR and reopen a proper fix once @p0lyn0mial's PR (#69308) gets in.
@nikhita done :)
Is this feature now supported? I want to limit the number of instances of a CRD created in a namespace, e.g. the number of Knative services created in a namespace.
Support is merged in master and will be part of 1.15.
@liggitt can you point me to a sample/doc about how to use it for CRD objects?
The docs at https://kubernetes.io/docs/concepts/policy/resource-quotas/#object-count-quota are accurate in 1.15+. To quota a custom resource "widgets" in the "example.com" API group, you would use count/widgets.example.com.
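Following the count/<plural>.<group> pattern used throughout this thread, a ResourceQuota for such a resource might look like this (the quota name and limit here are made up for illustration):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: widget-quota
  namespace: default
spec:
  hard:
    count/widgets.example.com: "5"
```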
Opened doc PR for the 1.15 branch in kubernetes/website#14492.
Fixes #53777
Fixes #59826
Draws a lot of inspiration from #47665
The resource quota controller now uses a dynamic client and a REST mapper to understand custom resources created via CRDs and aggregated apiservers.
To calculate the quota usage, the quota controller needs informers and listers.
Please note that the resource quota controller syncs every 30 seconds, so it might take some time for the quota status to be updated after the CRD is created. Until the quota status is updated and synced, we cannot create any custom resources.
Also, the custom resources created need to be namespaced to work with the quota controller.
Release note: