core-services/prow/02_config: Drop AWS Boskos down to 155 leases #14832
Conversation
We bumped this from 150 total to 200 total in a9735b5 (Revert "Revert "core-services/prow/02_config/_boskos: Shard AWS, Azure, and GCP by region"", 2020-12-10, openshift#14262). But recently we have been hitting:

    level=error msg=Error: Error creating VPC: VpcLimitExceeded: The maximum number of VPCs has been reached.
    level=error msg= status code: 400, request id: ...

in CI. The cause seems to be VPCs leaking out of CI jobs. And the cause of those leaks seems to be stuck teardowns, for example [1]:

    Deprovision failed on the following clusters:
    ci-op-3kfz2j4c
    ci-op-b1qptwsl
    ci-op-btpryb6k
    ...

And the cause of those seems to be AWS throttling making it take a long time to list IAM roles [2]:

    time="2021-01-13T19:09:34Z" level=debug msg="search for IAM roles"
    time="2021-01-13T19:16:35Z" level=debug msg="search for IAM users"

and thereafter not having enough time to actually clean up the cluster resources before we time out our teardown attempts. By reducing the overall capacity to 155, near our previous 150, we will hopefully reduce AWS IAM API traffic sufficiently to get back under AWS's undocumented throttling cap.

I'm weighting us-east-1 more heavily, because the current VPC limits are 150 for us-east-1 and 55 for each of our other three AWS regions. I haven't looked into the other AWS limits vs. our expected consumption recently, so this is still no attempt at rational limits. And if the limits are really "undocumented AWS throttling", maybe rational limits for AWS are not possible.

[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ipi-deprovision/1349426249335836672#1:build-log.txt%3A1385
[2]: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ipi-deprovision/1349426249335836672/artifacts/deprovision/ci-op-btpryb6k/.openshift_install.log
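For context, a minimal sketch of what per-region AWS lease pools summing to 155 could look like in a Boskos resource config, assuming the upstream Boskos dynamic-resource schema (`type`, `state`, `min-count`, `max-count`). The type names, the choice of regions, and the exact per-region split below are illustrative assumptions, not values taken from this PR; the only constraints from the description are the 155 total and us-east-1 being weighted most heavily.

```yaml
# Hypothetical sketch: per-region AWS lease pools summing to 155 total,
# with us-east-1 weighted more heavily than the other three regions.
# Type names and per-region counts are illustrative, not copied from this PR.
resources:
- type: aws-us-east-1-quota-slice
  state: free
  min-count: 65
  max-count: 65
- type: aws-us-east-2-quota-slice
  state: free
  min-count: 30
  max-count: 30
- type: aws-us-west-1-quota-slice
  state: free
  min-count: 30
  max-count: 30
- type: aws-us-west-2-quota-slice
  state: free
  min-count: 30
  max-count: 30
```

Pinning `min-count` equal to `max-count` keeps each pool at a fixed size, so Boskos never hands out more than 155 AWS leases at once and peak IAM/VPC API traffic stays near the pre-bump level.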
LGTM
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dobbymoodge, wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@wking: Updated the following 2 configmaps:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.