Confirm occupied IP ranges are released in Cleanup of GKE cluster #485

Closed
1 of 2 tasks
randmonkey opened this issue Jan 5, 2023 · 0 comments · Fixed by #491
Labels
area/feature New feature or request

Comments

@randmonkey
Contributor

randmonkey commented Jan 5, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Problem Statement

As described in Kong/kubernetes-ingress-controller#3326, GCP limits the number of IP ranges in a subnet. To prevent failures caused by this limit, we should confirm that the occupied IP ranges are released after cleanup of the cluster is finished.
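
One way to see how close a subnet is to that limit is to list the secondary ranges it currently holds. The sketch below is only an illustration (it is not part of this issue's code; the project, region, and subnet names are placeholders) and assumes the `google.golang.org/api/compute/v1` client:

```go
// Hypothetical helper: list the secondary IP ranges currently allocated on a
// subnet. GKE clusters typically occupy secondary ranges for pods and services.
package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials.
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// "my-project", "us-central1", and "default" are placeholder names.
	subnet, err := svc.Subnetworks.Get("my-project", "us-central1", "default").Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}

	for _, r := range subnet.SecondaryIpRanges {
		fmt.Printf("%s: %s\n", r.RangeName, r.IpCidrRange)
	}
}
```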

Proposed Solution

  • Verify whether occupied IP ranges are released after the GKE cluster has been successfully cleaned up.
  • If it is not guaranteed that occupied IP ranges are released after cluster cleanup, add a poll-and-wait step to cluster cleanup after calling the API to delete the cluster (see the sketch below).
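
A minimal sketch of such a poll-and-wait step, assuming the actual check (whether the cluster's secondary IP ranges still appear on the subnet) is supplied as a predicate; this is an illustration, not the repository's actual cleanup code:

```go
package cleanup

import (
	"context"
	"fmt"
	"time"
)

// waitForRelease polls `released` at the given interval until it reports true,
// returns an error, or ctx is done. In a real implementation the predicate
// would ask GCP whether the deleted cluster's secondary ranges are still
// present on the subnet.
func waitForRelease(ctx context.Context, interval time.Duration, released func(context.Context) (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		done, err := released(ctx)
		if err != nil {
			return fmt.Errorf("checking IP range release: %w", err)
		}
		if done {
			return nil
		}

		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for IP ranges to be released: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}
```

The cleanup path would call this right after the delete-cluster API call returns, with a context that carries an overall timeout.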

Additional information

No response

Acceptance Criteria

  • For GKE clusters, IP ranges (and possibly other resources with limits) are released after cleanup of the cluster succeeds.
@randmonkey randmonkey added the area/feature New feature or request label Jan 5, 2023
@randmonkey randmonkey changed the title Confirm occupied subnet resource is released in Cleanup of GKE cluster Confirm occupied IP ranges are released in Cleanup of GKE cluster Jan 5, 2023