Can't delete IAM role because policies are attached #2761
Comments
I'm working on a fix for this, and I should be able to fix #2818 with the same logic... I think.
@graycoder thanks for volunteering! Are you still working on this? Apologies if you've already submitted it, I'm very behind 😦
I haven't gotten a chance to fully figure out how to do this yet. It seems like changing a user, role, or group name causes a deletion and recreation of that resource, but we'd need to trigger the detachment of the policy via that change.
Is there any update on this one? It's a bit of a blocker to use TF in a production system if one must delete attached policies from roles before updating them.
I'm also experiencing this problem. Guess I'd better be idiomatic with my resource names so I don't have to rename them for the time being.
+1
Still experiencing this with
See also the suggestion in this comment by @tamsky. (Since #2957 is now closed.)
Same here. Any updates?
Also seeing this.
It seems like there's a separate issue beyond the renaming case: even trying to delete both a role and its policy attachment in the same run will sometimes fail if Terraform decides to delete the role first. I don't quite understand this behavior, because my policy attachment references the role. During creation, Terraform always uses the dependency graph and creates the thing being referred to (the IAM role) before creating the thing that depends on it (the policy attachment). Why don't deletions use the same dependency-graph logic and delete things in reverse order of creation? (On top of this, to solve the specific renaming case above, it seems another feature would be needed to "propagate recreations" across some dependency relationships, such as policy attachment -> role, during a role -/+.)
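For context, a minimal sketch of the dependency shape being described, with hypothetical names (era-appropriate HCL, not the commenter's actual config). The attachment references the role, so creation orders the role first; the complaint is that deletion does not reliably reverse that order:

```hcl
resource "aws_iam_role" "example" {
  name = "example-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
}

# References the role, so the dependency graph creates the role first.
resource "aws_iam_policy_attachment" "example" {
  name       = "example-attachment"
  roles      = ["${aws_iam_role.example.name}"]
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```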
Are there any concrete plans to solve this and #2957? Without it, it is not possible to create an immutable-infrastructure workflow that works entirely within the Terraform toolset, and I note both remain unsolved after a year and a half.
Sadly, we were forced to stop using Terraform, because it had too many issues preventing us from creating immutable-infrastructure scripts.
This is a big problem for us with aws_ebs_volume and aws_volume_attachment, and it has been long-running with no updates regarding resolution. Can anyone from HashiCorp give a definitive plan? This was raised in 2015.
This issue is also causing me problems in the automatic maintenance process of the instance. 2 error(s) occurred:
The only workaround is to do it manually in the Amazon console. Is there any plan to resolve this critical conflict?
Faced the same issue with Terraform version 0.9.0.
The same in 0.9.3.
Try the newer resources:
@pgray
Hi folks. The root cause of the problem here is the ordering of the delete/create operations when the role is recreated: when a field is marked as forcing a new resource (for example, the role's name), Terraform deletes the existing resource before creating its replacement, and that delete fails while policies are still attached. The easiest and cleanest solution for now is just to add this block of code to the role:

```hcl
lifecycle {
  create_before_destroy = true
}
```

which reverses the ordering of the delete/create operations and avoids the problem described in this issue altogether. https://www.terraform.io/docs/configuration/resources.html#lifecycle

I think it would make sense in many cases (like here) to make this behaviour the default, but only under certain conditions - e.g. when recreation is triggered by a field like `name`.
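For illustration, a minimal sketch of a role configured this way; the resource name and assume-role policy are hypothetical, not from the thread:

```hcl
resource "aws_iam_role" "example" {
  name = "example-role-v2"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

  lifecycle {
    # Create the replacement role first; dependent attachments are updated
    # to point at it, so the old role is detached by the time it is deleted.
    create_before_destroy = true
  }
}
```

Note that `create_before_destroy` requires the old and new resources to coexist briefly, so the replacement must not collide on `name`; in the rename scenario discussed here, the rename itself guarantees that.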
I already use `create_before_destroy = true` in the lifecycle block of all the resources. My solution: manually run the detach commands, and then remove the resources from my terraform.tfstate file. Not a great solution, but it allowed me to destroy and rebuild these resources without having to tear down the complete script. Would love to know if there is a more elegant solution.
@jogster - at the very least, you can change your solution to use `terraform state rm`
Thanks for the quick response - just tried that and it works a treat; I have added it to my script. I think this is workable for me now.
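For reference, a sketch of that state-level cleanup with hypothetical resource addresses; this only stops Terraform from tracking the resources, it does not touch the real AWS objects:

```sh
# Remove the resources from state instead of hand-editing terraform.tfstate
terraform state rm aws_iam_policy_attachment.example
terraform state rm aws_iam_role.example
```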
I am seeing this too: if a role has an attached policy and needs to get recreated, it errors because Terraform doesn't remove the attached policies first.
I am new to Terraform and am getting this problem while destroying aws_volume_attachment.ebs_att:
Error waiting for Volume (vol-06b32591d86b6d95e) to detach from Instance: i-04528452d805001c5
Any help will be appreciated. Thanks.
@matikumra69 With
The workaround in #2957 doesn't work with the latest Terraform anymore. I am on v0.10.7.
There's a `force_detach_policies` argument on `aws_iam_role`; set it to `true`. Alternatively, from the AWS CLI you can look up the ARN of the offending attached policy, then detach it, and then delete the role.
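In configuration terms, a minimal sketch (resource and file names hypothetical; `force_detach_policies` is a real argument on `aws_iam_role` in the AWS provider):

```hcl
resource "aws_iam_role" "example" {
  name                  = "example-role"
  force_detach_policies = true  # detach managed policies before destroy
  assume_role_policy    = "${file("assume-role-policy.json")}"
}
```

And one plausible manual CLI sequence for the same cleanup (role and policy names invented, not from the thread):

```sh
# Find the ARN of the offending attached policy
aws iam list-attached-role-policies --role-name example-role

# Detach it; the role can then be deleted
aws iam detach-role-policy --role-name example-role \
  --policy-arn arn:aws:iam::123456789012:policy/example-policy
aws iam delete-role --role-name example-role
```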
Any update on when this will be resolved? From what I can see, this has been an issue since 2015. How can it take this long to resolve such a critical issue? Workarounds are not a solution, and most of the workarounds here don't even work.
This issue has been automatically migrated to hashicorp/terraform-provider-aws#5417 because it looks like an issue with that provider. If you believe this is not an issue with the provider, please reply to hashicorp/terraform-provider-aws#5417. |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I renamed a policy and, when attempting to apply it, got a failure:
I don't think there is anything special about my configuration; it looks something like the sketch below.
The error occurred after changing the role name.
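A hypothetical reconstruction of that shape (all names invented; the reporter's actual configuration and error output were not preserved in this copy of the thread):

```hcl
resource "aws_iam_role" "service" {
  # Changing this name forces delete/recreate of the role,
  # which fails while the policy below is still attached.
  name               = "service-role"
  assume_role_policy = "${file("assume-role-policy.json")}"
}

resource "aws_iam_policy" "service" {
  name   = "service-policy"
  policy = "${file("service-policy.json")}"
}

resource "aws_iam_policy_attachment" "service" {
  name       = "service-attachment"
  roles      = ["${aws_iam_role.service.name}"]
  policy_arn = "${aws_iam_policy.service.arn}"
}
```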