Error: leftover module module.ap_southeast_1_vpc_2 in state that should have been removed; this is a bug in Terraform and should be reported #21313
Any updates on this?
What I do to resolve it manually for now is pull the state file, edit it by hand to remove all the branches of the leftover module, and push it back.
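The pull/edit/push surgery described here can be sketched with a small script. This is a sketch only: it assumes a v4 (0.12-style) state file obtained with `terraform state pull`, and the module address and resource entries below are illustrative, not taken from the original report.

```python
def module_resources(state, module_addr):
    """List state entries that belong to module_addr
    (exact match or a nested child module)."""
    out = []
    for r in state.get("resources", []):
        m = r.get("module", "")
        if m == module_addr or m.startswith(module_addr + "."):
            out.append(r)
    return out

# Illustrative minimal v4-shaped state, not a real pulled file:
state = {
    "version": 4,
    "serial": 12,
    "resources": [
        {"module": "module.ap_southeast_1_vpc_2", "mode": "managed",
         "type": "aws_vpc", "name": "main", "instances": []},
        {"mode": "managed", "type": "aws_s3_bucket",
         "name": "logs", "instances": []},
    ],
}

# One entry in this sample belongs to the leftover module:
print(len(module_resources(state, "module.ap_southeast_1_vpc_2")))  # 1
```

The prefix check (`module_addr + "."`) matters so that `module.vpc` does not accidentally match `module.vpc2`.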
This appears to be a duplicate of / related to #21529, which was closed because it couldn't be reproduced.
I'm hitting the same message but for a different module, and I can reliably reproduce this.
I've hit this issue also. It seems to happen when you have previously destroyed a module and then run a subsequent operation. Doing a no-op apply on the destroyed module seems to let you move past this error, i.e. run a targeted apply on the module named in the error.
After recently updating to 0.12.3, this issue is still happening on my end.
I'm seeing this happen on almost every edit, running 0.12.3.
Also getting this issue when removing a module with terraform.

There was no reference to any resource in my config. Based on @gabrielqs's comment I pulled the state from the remote GCS bucket and found 7 references to my module:

{
  "module": "module.manager",
  "mode": "managed",
  "type": "google_project_iam_custom_role",
  "name": "app_bucket",
  "each": "list",
  "provider": "provider.google",
  "instances": []
}

I manually deleted all the JSON objects referencing the removed module (and also increased the serial). I have just run a plan and apply and that has removed the error.
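The deletion-plus-serial-bump step described above can be sketched like this. It is a sketch assuming a v4 state file; in real usage it would sit between `terraform state pull` and `terraform state push`, and the sample data below is illustrative, merely shaped like the fragment in the comment above.

```python
def drop_module(state, module_addr):
    """Remove every state entry belonging to module_addr and bump the
    serial so that `terraform state push` accepts the edited file."""
    def belongs(r):
        m = r.get("module", "")
        return m == module_addr or m.startswith(module_addr + ".")
    before = state.get("resources", [])
    state["resources"] = [r for r in before if not belongs(r)]
    state["serial"] = state.get("serial", 0) + 1
    return len(before) - len(state["resources"])

# Illustrative state fragment (not a real pulled file):
state = {
    "version": 4,
    "serial": 7,
    "resources": [
        {"module": "module.manager", "mode": "managed",
         "type": "google_project_iam_custom_role",
         "name": "app_bucket", "each": "list",
         "provider": "provider.google", "instances": []},
        {"mode": "managed", "type": "google_storage_bucket",
         "name": "logs", "instances": []},
    ],
}

removed = drop_module(state, "module.manager")
print(removed, state["serial"])  # 1 8
```

Bumping the serial is what lets the push succeed, since terraform refuses to push a state whose serial is not newer than the remote one.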
Thanks for the detailed explanation @hawksight, that's exactly the process I have been following.
I'm not sure if this will help with re-creation, but I'm getting into a mess. #21346 is about trouble moving resources to a module that does not yet exist in 0.12. Following the best hack-around (#21346 (comment)), I solved my problem, but it seems I'm left with the situation described in this issue.
I think I've come up with a minimal reproduction case for this (using terraform v0.12.5). With the following config:
I'd expect terraform to ignore mod2 in the statefile and leave it untouched; this is the behaviour with Terraform 0.11.
Yeah, the reproduction case by @alext matches what I'm seeing in my case too, with TF 0.12.8.
Fixed by #22811; will be released with 0.12.11.
#22811 seems to fix the use-case where an orphaned module still exists in the code. What if the code has been removed as well, but we'd like the state to remain as-is in a targeted run? I'm trying to share the terraform state between a poly-repo setup of terraform projects, where each project sets up its own terraform modules and runs a targeted plan/apply. It works nicely other than the error/warning about leftover modules, which is technically correct, but in a poly-repo world almost inevitable. (If this use-case is considered different from the originally reported one, I can create a new issue.)
@axelthimm If you do have a current issue, absolutely open another one (in particular because closed issues are eventually locked)! But testing out what I think you're asking for, it seems like 0.12.12 (the current release) does what you're asking -- if you run with -target, the leftover module is only warned about rather than treated as an error.

So we'd also be interested in learning more about your workflow (if you can codify anything into a specific issue), such that you could do it without -target.
The use case is having split the modules into several repositories, but still needing to go through a common terraform state because the target infrastructure is the same. E.g. say an Azure setup with a separate repo for proxies, another for nexus, another for gitlab-runners, etc. They all depend on some common resources, which are reflected in base modules that are always present (via git submodules), and of course a common tfstate file or state backend.

Indeed the run does work on 0.12.12, but (!) it is not a warning, it is a failure. So all our IaC is currently blocked unless we generally ignore return codes. My plea would be to make it a warning.

On the topic of not supporting this workflow: of course I have read this, but I'm trying to fit terraform into a non-monorepo design. Maybe I'm using the wrong methodology, but by keeping all these tiny projects (tiny from a terraform PoV) separated with their own state, one introduces too many data sources, which then also need concurrent maintenance should the master resource require changes. Instead, by having access to the master resource in the terraform state, one can pretend to be in a monorepo and write proper terraform code (which then also works when merging all projects together, which is a method we also use). So there are use cases which benefit from a common terraform state, but which cannot see the module source directly anymore. I use -target just so terraform doesn't delete those undescribed states. Within such a project, -target actually addresses the whole project. But I do not want project X to wipe project Y's resources just because they share the state. I hope I made the use case somewhat clear.
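One way to sanity-check such a shared state before a targeted run is to compare `terraform state list` output against the modules a given repo actually owns. This is a hypothetical helper for the poly-repo setup sketched above; the addresses and module names are made up for illustration.

```python
def foreign_addresses(state_list_output, local_modules):
    """Return state addresses whose top-level module is not among the
    modules owned by this repo (candidates another repo's -target run
    should leave alone)."""
    foreign = []
    for addr in state_list_output.splitlines():
        addr = addr.strip()
        if not addr.startswith("module."):
            continue  # root-level resources are shared by convention here
        root = ".".join(addr.split(".")[:2])  # e.g. "module.proxies"
        if root not in local_modules:
            foreign.append(addr)
    return foreign

# Illustrative `terraform state list` output:
sample = """\
module.proxies.aws_instance.proxy[0]
module.nexus.aws_instance.repo
aws_s3_bucket.shared_logs
"""

# From the proxies repo's point of view, module.nexus is foreign:
print(foreign_addresses(sample, {"module.proxies"}))
```

Running such a check in CI before plan/apply would at least make the "leftover module" situation visible instead of surprising.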
Considering this, would you mind opening the comment above as a new issue?
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |