[BUG] - AWS kubernetes resources not fully deleting properly (security group created by eks) #1110
Comments
I'm going to push this issue into 0.4.1 or later. I'll explain the rationale. Currently the AWS VPC does not cleanly delete with `qhub destroy`. There are two reasons for this:

- When you delete the EKS cluster, any existing load balancers are not cleaned up (hashicorp/terraform-provider-aws#21863). We have solved this by running all the other stages and cleaning up the Kubernetes service that was a load balancer. So this one is solved ... but really EKS should be cleaning up after itself!
- When you delete the EKS cluster, there is a stray security group that was associated with the cluster. I believe it is related to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster#cluster_security_group_id. Related issue: terraform-aws-modules/terraform-aws-eks#1606.

So this issue is a pain with no great solution for cleaning up properly until AWS fixes it. Realistically it should not cause any problems aside from a stray VPC existing (no additional cost). If you want to delete the VPC, simply go to the console and delete it; it will delete after a warning that a security group is still attached.
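For anyone who would rather script the console steps described above, here is a minimal boto3 sketch: find the non-default security groups still attached to the stray VPC, delete them, then delete the VPC. The region and VPC ID are placeholders you would substitute for your deployment; this is only a sketch of the manual cleanup, not something QHub does itself.

```python
# Hypothetical cleanup sketch for the stray VPC left behind after `qhub destroy`.
# Assumes the leftover VPC ID is known (e.g. from the AWS console or Terraform state)
# and that the load balancers/ENIs inside it have already been removed.
import boto3

REGION = "us-west-2"              # assumption: adjust to your deployment region
VPC_ID = "vpc-0123456789abcdef0"  # assumption: the stray VPC left by qhub destroy

ec2 = boto3.client("ec2", region_name=REGION)

# Find every non-default security group still attached to the VPC
# (the EKS-created cluster security group is the usual leftover).
groups = ec2.describe_security_groups(
    Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
)["SecurityGroups"]

for group in groups:
    if group["GroupName"] == "default":
        continue  # the default group is removed together with the VPC
    print(f"Deleting security group {group['GroupId']} ({group['GroupName']})")
    ec2.delete_security_group(GroupId=group["GroupId"])

# With the stray security group gone, the VPC itself can be deleted.
print(f"Deleting VPC {VPC_ID}")
ec2.delete_vpc(VpcId=VPC_ID)
```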
Sounds like a plan to me.
Hi @costrouc, I haven't run into this recently, but how odd would it be if we added an extra removal step during destroy that uses boto to check whether the most painful resources were actually deleted?
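To make that suggestion concrete, a post-destroy check could look roughly like the sketch below: query AWS for anything that should be gone and report leftovers. This is only an illustration of the idea; the `Project: qhub` tag filter and the resource types checked are assumptions, not something QHub currently implements.

```python
# Hypothetical post-destroy check: report AWS resources that should be gone.
# Assumption: deployment resources are tagged "Project=qhub"; adjust the filter
# to whatever tags your deployment actually applies.
import boto3

REGION = "us-west-2"  # assumption
TAG_FILTER = [{"Name": "tag:Project", "Values": ["qhub"]}]

ec2 = boto3.client("ec2", region_name=REGION)
elbv2 = boto3.client("elbv2", region_name=REGION)

leftovers = []

# VPCs that were not deleted.
for vpc in ec2.describe_vpcs(Filters=TAG_FILTER)["Vpcs"]:
    leftovers.append(("vpc", vpc["VpcId"]))

leftover_vpc_ids = {rid for kind, rid in leftovers if kind == "vpc"}

# Security groups still attached to the deployment.
for sg in ec2.describe_security_groups(Filters=TAG_FILTER)["SecurityGroups"]:
    leftovers.append(("security-group", sg["GroupId"]))

# Load balancers created by Kubernetes services that were not cleaned up.
for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
    if lb["VpcId"] in leftover_vpc_ids:
        leftovers.append(("load-balancer", lb["LoadBalancerArn"]))

if leftovers:
    print("Resources that were not cleaned up:")
    for kind, resource_id in leftovers:
        print(f"  {kind}: {resource_id}")
else:
    print("No leftover resources found.")
```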
@costrouc @viniciusdc 👋 I found this thread via your link back to the terraform-aws-eks issue I opened. You might want to have a look at this Terraform mini-module I released a while back and have been using internally for a couple of months. During `terraform destroy`, the module removes the load balancers that are stuck because of stray ENIs (which is what blocks deleting the subnets and security groups): https://github.com/webdog/terraform-kubernetes-delete-eni. At minimum, the shell script can be taken from the module if using the Terraform module itself doesn't make sense. Cheers!
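The general idea behind that approach (not the linked module's actual implementation, which you should take from the repository) is to find the network interfaces still attached in the cluster VPC and remove them so the subnets and security groups stop being "in use". A rough boto3 equivalent, with the region and VPC ID as assumptions:

```python
# Rough boto3 sketch of the "delete stray ENIs" idea; not the linked module's code.
# Assumption: VPC_ID identifies the cluster VPC left behind after the EKS cluster is gone.
import boto3

REGION = "us-west-2"              # assumption
VPC_ID = "vpc-0123456789abcdef0"  # assumption

ec2 = boto3.client("ec2", region_name=REGION)

enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
)["NetworkInterfaces"]

for eni in enis:
    attachment = eni.get("Attachment")
    if attachment and attachment.get("Status") == "attached":
        # Detach first; ENIs orphaned by deleted load balancers usually detach cleanly.
        ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"], Force=True)
    # Deletion can fail transiently right after detaching; a retry may be needed in practice.
    print(f"Deleting ENI {eni['NetworkInterfaceId']}")
    ec2.delete_network_interface(NetworkInterfaceId=eni["NetworkInterfaceId"])
```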
Worth trying: https://github.com/gruntwork-io/cloud-nuke
Operating system and architecture in which you are running QHub
Linux
Expected behavior
All qhub resources should cleanly delete.
See https://github.com/Quansight/qhub-integration-test/runs/5311056863?check_suite_focus=true#step:6:1250 for an example. This is not needed for 0.4.0 but should be resolved in 0.4.1.
Actual behavior
Not all resources are deleted properly.
How to Reproduce the problem?
Run qhub-integration-tests
Command output
No response
Versions and dependencies used.
No response
Compute environment
No response
Integrations
No response
Anything else?
No response