core-services/prow/02_config: Drop GCP Boskos limit to 70 #16256
Conversation
4705f26 (core-services/prow/02_config: Drop GCP Boskos leases to 80, 2020-12-02, openshift#14032) lowered from 120 to 80 to stay under the policy-member quota. But we're still seeing some rate-limiting at 80:

```console
$ curl -s 'https://search.ci.openshift.org/search?maxAge=96h&search=googleapi:+Error+403:+Quota+exceeded' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | grep -o 'googleapi: .*' | sort | uniq -c | sort -n | tail -n5
      9 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded",
     14 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded
     14 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded"
     32 googleapi: Error 403: Quota exceeded for quota group 'ListGroup' and limit 'List requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded"
    276 googleapi: Error 403: Quota exceeded for quota group 'ListGroup' and limit 'List requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded
```

Digging into the ListGroups:

```console
$ curl -s 'https://search.ci.openshift.org/search?maxAge=96h&search=googleapi:+Error+403:+Quota+exceeded' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | sed -n 's/.*Error: \(.*\): googleapi: .*/\1/p' | sort | uniq -c | sort -n | tail -n5
      1 Error when reading or editing Instance Group \"ci-op-yjzzp382-918f6-gl5gh-master-us-east1-b\"
      1 Error when reading or editing Instance Group \"ci-op-yjzzp382-918f6-gl5gh-master-us-east1-c\"
      1 Error when reading or editing Target Pool "ci-op-tc4f5483-23c6b-cxw97-api"
      1 Error when reading or editing Target Pool "ci-op-vvc0yx0q-822a1-pxgqq-api"
    160 Error reading InstanceGroup Members
```

I haven't dug in to find the difference between 160 and 276+32, but clearly the InstanceGroup list is a key player, and reducing that failure mode is going to be at least one of:

a. Track down the noisy requestor and calm them down.
b. Make the cluster components more robust in the face of provider throttling.
c. Lower the Boskos quota so we don't have so many noisy clusters competing for limited ListGroup quota.

(a) and (b) are better, but (c) is easy, so we're going with (c) in this commit as a temporary stopgap.
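The change itself is just a lease-count reduction in the Boskos resource configuration under core-services/prow/02_config. As a minimal sketch of what such an entry looks like, assuming Boskos's standard `resources` list format and a hypothetical `gcp-quota-slice` resource type (the real file name, type name, and surrounding entries are not shown on this page):

```yaml
# Hypothetical Boskos resource pool entry (names assumed, not taken from this PR).
# Dropping the count from 80 to 70 caps the pool at 70 leases, and therefore at
# most 70 concurrent ci-op-* clusters competing for the shared per-project
# compute.googleapis.com List/Read quota.
resources:
- type: "gcp-quota-slice"
  state: free
  min-count: 70
  max-count: 70
```

Fewer concurrent leases directly lowers the aggregate ListGroup request rate, which is why option (c) works as a stopgap even though it does nothing about the underlying noisy requestor.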
Force-pushed from 0f78e47 to 35b42ba
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alvaroaleman, wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@wking: Updated the following 2 configmaps:
In response to this:
> 4705f26 (#14032) lowered from 120 to 80 to stay under the policy-member quota. But we're still seeing some rate-limiting at 80:
>
> Digging into the ListGroups:
>
> I haven't dug in to find the difference between 160 and 276+32, but clearly the InstanceGroup list is a key player, and reducing that failure mode is going to be at least one of:
>
> a. Track down the noisy requestor and calm them down.
> b. Make the cluster components more robust in the face of provider throttling.
> c. Lower the Boskos quota so we don't have so many noisy clusters competing for limited ListGroup quota.
>
> (a) and (b) are better, but (c) is easy, so we're going with (c) in this PR as a temporary stopgap.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.