
core-services/prow/02_config: Drop GCP Boskos leases to 80 #14032

Merged 1 commit into openshift:master on Dec 2, 2020

Conversation

@wking (Member) commented Dec 2, 2020

We'd raised the GCP Boskos leases from 80 to 120 in 1fb9779 (GCP account is provisioned for 150 networks, bump to 120 clusters, 2020-07-02, #10050), after (unspecified?) limit bumps. But recently we've been hitting [1]:

level=error msg="Error: Request \"Create IAM Members roles/compute.viewer serviceAccount:ci-ln-4bw2v62-f76d1-685n4-w@openshift-gce-devel-ci.iam.gserviceaccount.com for \\\"project \\\\\\\"openshift-gce-devel-ci\\\\\\\"\\\"\" returned error: Batch request and retried single request \"Create IAM Members roles/compute.viewer serviceAccount:ci-ln-4bw2v62-f76d1-685n4-w@openshift-gce-devel-ci.iam.gserviceaccount.com for \\\"project \\\\\\\"openshift-gce-devel-ci\\\\\\\"\\\"\" both failed. Final error: Error applying IAM policy for project \"openshift-gce-devel-ci\": Error setting IAM policy for project \"openshift-gce-devel-ci\": googleapi: Error 400: The number of members in the policy (1,501) is larger than the maximum allowed size 1,500., badRequest"

@patrickdillon counts six installer-created bindings and 12 additional cloud-cred-operator-created bindings per cluster, which gives space for 83 clusters. Dropping the Boskos cap to 80 leaves 60 bindings free for long-lived IAM users (e.g. the user we use to create clusters and users associated with human admins). If, in the future, we transition more of our CI to passthrough-mode credentials (instead of the current mint-mode credentials), we would have space for more CI clusters under our current policy-member quota.
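
For reference, the arithmetic behind those numbers (6 + 12 = 18 policy members per cluster against the 1,500-member limit from the error above):

  $ echo $(( 1500 / (6 + 12) ))        # clusters that fit under the policy-member limit
  83
  $ echo $(( 1500 - 80 * (6 + 12) ))   # members left for long-lived users at an 80-cluster cap
  60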

Generated by editing generate-boskos.py and then running:

$ hack/validate-boskos.sh
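
One rough way to gauge how close the project is to the 1,500-member policy limit (a sketch; it assumes gcloud and jq are installed and you have viewer access to the openshift-gce-devel-ci project, and exactly how GCP counts members toward the limit may differ from this tally):

  $ gcloud projects get-iam-policy openshift-gce-devel-ci --format=json \
      | jq '[.bindings[].members[]] | length'   # member entries across all bindings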

[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-origin-installer-launch-gcp/1333696073952137216
@openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Dec 2, 2020
@stevekuznetsov (Contributor):

/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Dec 2, 2020
@openshift-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: stevekuznetsov, wking


@openshift-merge-robot merged commit 77d3cfb into openshift:master on Dec 2, 2020
@openshift-ci-robot (Contributor):

@wking: Updated the following 2 configmaps:

  • resources configmap in namespace ci at cluster api.ci using the following files:
    • key boskos.yaml using file core-services/prow/02_config/_boskos.yaml
  • resources configmap in namespace ci at cluster app.ci using the following files:
    • key boskos.yaml using file core-services/prow/02_config/_boskos.yaml
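
For anyone who wants to confirm the lowered lease counts in the deployed config, something along these lines should work (a sketch; it assumes you are logged in to the app.ci cluster with oc under a kubeconfig context named app.ci):

  $ oc --context app.ci -n ci get configmap resources -o yaml | less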


wking added a commit to wking/openshift-release that referenced this pull request Dec 10, 2020
…e, and GCP by region""

This reverts commit 8a22fc4, openshift#12842.

Boskos' config reloading on dynamic -> static pivots has been fixed by
kubernetes-sigs/boskos@3834f37d8a (Config sync: Avoid deadlock when
static -> dynamic -> static, 2020-12-03, kubernetes-sigs/boskos#54),
so we can take another run at static leases for these platforms.  Not
a clean re-revert, because 4705f26 (core-services/prow/02_config:
Drop GCP Boskos leases to 80, 2020-12-02, openshift#14032) landed in the
meantime, but it was easy to update from 120 to 80 here.
wking added a commit to wking/openshift-release that referenced this pull request Feb 24, 2021
4705f26 (core-services/prow/02_config: Drop GCP Boskos leases to
80, 2020-12-02, openshift#14032) lowered from 120 to 80 to stay under the
policy-member quota.  But we're still seeing some rate-limiting at 80:

  $ curl -s 'https://search.ci.openshift.org/search?maxAge=96h&search=googleapi:+Error+403:+Quota+exceeded' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | grep -o 'googleapi: .*' | sort | uniq -c | sort -n | tail -n5
        9 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded",
       14 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded
       14 googleapi: Error 403: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded"
       32 googleapi: Error 403: Quota exceeded for quota group 'ListGroup' and limit 'List requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded"
      276 googleapi: Error 403: Quota exceeded for quota group 'ListGroup' and limit 'List requests per 100 seconds' of service 'compute.googleapis.com' for consumer 'project_number:1053217076791'., rateLimitExceeded

Digging into the ListGroups:

  $ curl -s 'https://search.ci.openshift.org/search?maxAge=96h&search=googleapi:+Error+403:+Quota+exceeded' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | sed -n 's/.*Error: \(.*\): googleapi: .*/\1/p' | sort | uniq -c | sort -n | tail -n5
        1 Error when reading or editing Instance Group \"ci-op-yjzzp382-918f6-gl5gh-master-us-east1-b\"
        1 Error when reading or editing Instance Group \"ci-op-yjzzp382-918f6-gl5gh-master-us-east1-c\"
        1 Error when reading or editing Target Pool "ci-op-tc4f5483-23c6b-cxw97-api"
        1 Error when reading or editing Target Pool "ci-op-vvc0yx0q-822a1-pxgqq-api"
      160 Error reading InstanceGroup Members
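
To see which jobs are generating most of these throttled calls, a query along these lines may help (a sketch; it assumes the search.ci response is a JSON object keyed by job-run URL, as the jq expressions above suggest):

  $ curl -s 'https://search.ci.openshift.org/search?maxAge=96h&search=googleapi:+Error+403:+Quota+exceeded' \
      | jq -r 'keys[]' | awk -F/ '{print $(NF-1)}' | sort | uniq -c | sort -n | tail -n5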

I haven't dug in to find the difference between 160 and 276+32, but
clearly the InstanceGroup list is a key player, and reducing that
failure mode is going to be at least one of:

a. Track down the noisy requestor and calm them down.
b. Make the cluster components more robust in the face of provider
   throttling.
c. Lower the Boskos quota so we don't have so many noisy clusters
   competing for limited ListGroup quota.

(a) and (b) are better, but (c) is easy, so we're going with (c) in
this commit as a temporary stopgap.
@wking deleted the 80-gcp-boskos-cap branch on March 29, 2021 20:20