Possible loss of resources #8241

Closed
BogdanSorlea opened this issue Aug 16, 2016 · 7 comments

Comments

@BogdanSorlea

BogdanSorlea commented Aug 16, 2016

Version: 0.7

Unfortunately I can't post the state file for this example (sanitising it would take too much effort; if I get to run a smaller Terraform setup that I can sanitise, I will post it).

Basically, when I run terraform apply I get the message Apply complete! Resources: 1004 added, 0 changed, 0 destroyed.
Immediately afterwards I run terraform destroy on the same setup and get Apply complete! Resources: 0 added, 0 changed, 877 destroyed.

Note that I didn't run a terraform plan for either of the two runs, but 1004 is the correct value (also according to terraform <= v0.6.16).

If I run terraform destroy a second time, I get

Error creating plan: 411 error(s) occurred:

* variable "it-stack-node" is nil, but no error was reported
* variable "it-stack-node" is nil, but no error was reported
* variable "it-stack-node" is nil, but no error was reported
* variable "core-db" is nil, but no error was reported
* variable "core-db" is nil, but no error was reported
[...]

about which I posted a previous (potential?) issue: #8229

Note: most of the resources I have are aws_instance, aws_security_group and aws_security_group_rule, and searching the AWS console I don't seem to have any instances or SGs left over, so it might actually be in a good state and the message might just be wrong/misleading.

@radeksimko
Member

Hi @BogdanSorlea,
thanks for the report. A full repro case would be very helpful here.

Did you use count to duplicate all the resources? If you cannot share the full config, would you mind at least sharing the count per resource?
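
For reference, count-based duplication typically looks something like the hypothetical sketch below (resource names are borrowed from your error output, and all values are made up, not taken from your actual config):

```hcl
# Hypothetical example of count-based duplication (Terraform 0.7-era syntax).
resource "aws_instance" "it-stack-node" {
  count         = 3                 # tracked in state as it-stack-node.0 .. it-stack-node.2
  ami           = "ami-12345678"    # placeholder AMI
  instance_type = "t2.micro"

  tags {
    Name = "it-stack-node-${count.index}"
  }
}
```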

Also, there have been some changes to interpolation lately; have you tried upgrading to 0.7?

@BogdanSorlea
Author

@radeksimko Hey, I forgot to mention it when initially posting, but I amended the message: I am actually running v0.7.0. As for the counts: yes, I use a few, and I will try to look around and provide some estimates of them (they might not be very precise and I might miss some, unless there is some smart grep way to run over the state file). Also, I assume that the "desired" value of any autoscaling group I use is not considered to be a "count", correct?
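
(One way to get rough counts out of a local terraform.tfstate, assuming Terraform 0.7's terraform state subcommand and a local state file, might be a sketch like the one below; it is an approximation, not something prescribed in this thread.)

```sh
# Total number of resources currently tracked in the state
terraform state list | wc -l

# Approximate per-type breakdown (addresses look like aws_instance.name or aws_instance.name.0;
# resources inside modules are prefixed with module.NAME and would need extra handling)
terraform state list | cut -d. -f1 | sort | uniq -c | sort -rn
```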

@mitchellh
Contributor

Can you confirm whether there are any leftovers?

We've actually seen issues where the way we count destructions or additions is broken, rather than Terraform itself doing the wrong thing. That is still a bug, of course, but thankfully a lot less serious.

A repro case would indeed help a lot. I've heard of this happening before but I've never found a repro case.

@radeksimko
Member

Also, I assume that the "desired" value of any autoscaling group I use is not considered to be a "count", correct?

Not in this context, even though in practice it may have the same effect.

What we're most likely looking at is a core bug (based on the error message), and as such it would be triggered by Terraform creating multiple resources (count) rather than by anything AWS does internally. desired is just a field that's passed down to the API (i.e. it's less likely to cause such an issue).
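
To illustrate the distinction with a hypothetical snippet (names and values are made up): count makes Terraform itself track N separate resources in state, whereas desired_capacity is a single argument forwarded to the AWS API and leaves only one resource in state.

```hcl
# count: Terraform tracks two separate resources (core-db.0 and core-db.1) in state
resource "aws_instance" "core-db" {
  count         = 2
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# desired_capacity: still a single resource in state; AWS launches the instances itself
resource "aws_autoscaling_group" "it-stack" {
  availability_zones   = ["us-east-1a"]
  launch_configuration = "it-stack-lc"   # assumes a launch configuration with this name exists
  min_size             = 1
  max_size             = 10
  desired_capacity     = 4               # not reflected in Terraform's resource count
}
```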

Full repro would really help though.

@BogdanSorlea
Author

Hey,
Disregarding the "desired" value of the ASGs (see the comments above), I have counted the count values used in the .tf files and the referenced modules (multiplying accordingly) and I could only come up with 65 of them. So I don't know where the other 62 come from (1004 - 877 = 127 = 65 + 62).

Regarding the full repro, I still can't provide that.

@mitchellh
Contributor

I think this isn't Terraform losing resources, though please correct me if I'm wrong.

We just fixed another issue in 0.7.9 where the count was incorrect (in the case of destroying deposed resources). That may fix your count issue, but I strongly believe we're still just miscounting some types of actions. Until we get a more specific repro or example, it's hard to keep this issue open.

HOWEVER, if Terraform is in fact losing resources then it'd be a major, major issue. To date we've never done that (as far as I can remember), and if that's the case please do open a new issue and let us know.

@ghost

ghost commented Apr 20, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 20, 2020