4705f26 (core-services/prow/02_config: Drop GCP Boskos leases to
80, 2020-12-02, openshift#14032) lowered the lease count from 120 to
80 to stay under the policy-member quota. But we're still seeing some
rate-limiting at 80:
$ curl -s 'https://search.ci.openshift.org/search?maxAge=96h&search=googleapi:+Error+403:+Quota+exceeded' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | grep -o 'googleapi: .*' | sort | uniq -c | sort -n | tail -n5
9 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded",
14 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded
14 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded"
32 googleapi: Error 403: Quota exceeded for quota group 'ListGroup' and limit 'List requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded"
276 googleapi: Error 403: Quota exceeded for quota group 'ListGroup' and limit 'List requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded
Digging into the ListGroups:
$ curl -s 'https://search.ci.openshift.org/search?maxAge=96h&search=googleapi:+Error+403:+Quota+exceeded' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | sed -n 's/.*Error: \(.*\): googleapi: .*/\1/p' | sort | uniq -c | sort -n | tail -n5
1 Error when reading or editing Instance Group \"ci-op-yjzzp382-918f6-gl5gh-master-us-east1-b\"
1 Error when reading or editing Instance Group \"ci-op-yjzzp382-918f6-gl5gh-master-us-east1-c\"
1 Error when reading or editing Target Pool "ci-op-tc4f5483-23c6b-cxw97-api"
1 Error when reading or editing Target Pool "ci-op-vvc0yx0q-822a1-pxgqq-api"
160 Error reading InstanceGroup Members
I haven't dug into the difference between the 160 here and the 276+32
above, but clearly InstanceGroup listing is the key player, and
reducing that failure mode will take at least one of:
a. Track down the noisy requestor and calm them down.
b. Make the cluster components more robust in the face of provider
throttling.
c. Lower the Boskos quota so we don't have so many noisy clusters
competing for limited ListGroup quota.
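For (a), a per-job tally of quota hits could be built from the same
search.ci response the queries above consume (an object keyed by job
URL). The sketch below runs that jq logic against a made-up stand-in
payload; the job URLs and counts are hypothetical, and the real input
would come from the `curl` invocations above.

```shell
# Stand-in for the search.ci JSON: job URL -> search string -> matches,
# each match carrying a "context" array (the shape the jq pipelines
# above already assume). All names and counts here are invented.
cat <<'EOF' >/tmp/quota-hits.json
{
  "https://prow.ci.openshift.org/view/gcs/bucket/logs/job-a/100": {
    "googleapi: Error 403: Quota exceeded": [
      {"context": ["... rateLimitExceeded"]},
      {"context": ["... rateLimitExceeded"]}
    ]
  },
  "https://prow.ci.openshift.org/view/gcs/bucket/logs/job-b/200": {
    "googleapi: Error 403: Quota exceeded": [
      {"context": ["... rateLimitExceeded"]}
    ]
  }
}
EOF
# Count context hits per job URL, noisiest job last.
jq -r 'to_entries[] | "\([.value | to_entries[].value[].context[]] | length) \(.key)"' \
  /tmp/quota-hits.json | sort -n
```

With the real payload, the tail of that output would point at the
noisiest requestors worth calming down.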
(a) and (b) are the better long-term fixes, but (c) is easy, so we're
going with (c) in this commit as a stopgap.
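In Boskos terms, (c) amounts to lowering the lease count in the
resource config. A rough sketch of the shape of the change (the
resource-type name, field names, and new count below are illustrative
assumptions, not this commit's actual diff):

```yaml
# core-services/prow/02_config/_boskos.yaml (illustrative fragment)
resources:
- type: gcp-quota-slice   # hypothetical type name
  state: free
  min-count: 72           # hypothetical; the point is only that it
  max-count: 72           # drops below the current 80
```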