Provider Removed Attributes Causing "data could not be decoded from the state: unsupported attribute" Error #25752
Comments
Thanks @bflad, I located the problem with data sources in 0.13, but I'm not certain where the failure reported for managed resources would come from. I assume the failure with a managed resource would be similar for 0.13 and 0.12, but I haven't been able to replicate it with either version yet.
It looks like some of the reports aren't valid, as they may actually be referencing attributes that were removed in 3.0.0, e.g. hashicorp/terraform-provider-aws#14431 (comment). I think the incoming PR should cover the cases seen here, but we can re-evaluate if there is a reproduction with a managed resource too.
This probably doesn't help, but I get this failure with managed resources, specifically The
In the case of both resources, there are also
Terraform 0.12.29 with AWS provider 3.0.0 and 3.1.0 does not exhibit this behaviour; Terraform 0.13.0 (release and RC) with AWS provider 3.0.0 and 3.1.0 does. At this time, it looks like this completely blocks upgrading to Terraform 0.13.0 for users in this situation. Edit: I've just noticed that this also breaks
There is no more output shown after the
The same happens for me.
I, too, had an issue after upgrading to v0.13.0 with a data source for aws/aws_availability_zones. I solved it by removing references to the data source, then executing
I'm seeing this with data source aws/aws_iam_role on Terraform v0.13.0 and AWS provider v3.1.0.
That attribute is not referenced. my-module/main.tf line 158:
This is in a common module repository, so removing the reference to the data source and adding it back isn't an option.
The issue is with the state, not with the .tf files. I encountered a bunch of such messages today while working on a relatively big project, and the working solution was to manually remove all deprecated or removed attributes from the state file. Once they were all gone, terraform plan worked normally again.
We also ran into this yesterday. The combination of 0.13 + 3.0.0/3.1.0 aws provider gives us this (one of many we ran into)
We have not now, nor ever, used the 'request_parameters_in_json' attribute. We also tested deploying a new stack, upgrading the version, and then running a "terraform plan" against that new stack, and got the same error. I understand deprecated/removed attributes, but we can't remove ones we never used. Additionally, we have seen the same behavior @phyber noted with regard to 'terraform show' (ran into it while trying to troubleshoot).
You can, since they are in the state file ...
You shouldn't need to edit the state manually though...
I don't like the manual editing either, but it is the only thing that worked.
Of course, ideally, terraform itself should just be able to remove the deprecated or removed attributes, since they no longer carry any meaning, instead of throwing errors.
Editing state manually isn't a "solution". Not only is that overwhelmingly a "hack", but I would have to do it across potentially hundreds or even thousands of stacks. And rolling back to a previous version doesn't solve the problem, since the issue is caused by Terraform inserting null values for unused params in the state file. Now that those params have been deprecated/removed, it is complaining about them being in the state. The problem is that Terraform is the one that put them there on its own. There needs to be some way to gracefully handle these unknown values, maybe an override/ignore flag, so that we can upgrade and continue to carry current resources forward.
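For context, here is a hypothetical, heavily trimmed state-file fragment showing the kind of entry at issue: the provider wrote the attribute with a null value even though it was never configured, and once 3.0.0 dropped the attribute from the schema, state containing it no longer decodes. The attribute name comes from the errors quoted in this thread; the resource type and other fields are illustrative only:

```json
{
  "mode": "managed",
  "type": "aws_api_gateway_method",
  "instances": [
    {
      "attributes": {
        "http_method": "GET",
        "request_parameters_in_json": null
      }
    }
  ]
}
```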
I am not saying that manually editing the state file is the way it should be done in future, just a quick fix or hack if you want to be able to continue. Of course, it is not great if you have thousands of stacks, but then one could write a program or a script to do it. Or wait for Terraform itself to fix this.
Yeah, OK, so the only real solution other than rolling back is to do something like this:
then edit that JSON to remove the now-removed attributes from being managed. You will also have to manually increment the
Luckily there is some validation on the terraform state push command, so when you do a:
It shouldn't let you break your remote state. If your state is massive this can be very tedious; if you are running any regex find/replace, I would recommend saving a copy and doing a diff to verify the changes, so you also have a copy of the original state to fall back to. Luckily our Terraform repos make heavy use of terraform_remote_state, which is read-only, to break our state into small manageable pieces. So far it has not been an issue using terraform_remote_state with a 0.13 binary against a 0.12-managed state backend, so we can make fixes incrementally.
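The pull/edit/push workflow above can be scripted instead of hand-edited. Below is a minimal sketch of my own (not the exact approach anyone in this thread used) that assumes the standard 0.12/0.13 state layout of resources, instances, and attributes; the attribute name used in the example comment is just one taken from the errors reported here:

```python
import json

def strip_attribute(state: dict, attr: str) -> dict:
    """Drop a removed/deprecated attribute from every resource instance,
    and bump the serial so the edited state can be pushed back."""
    for resource in state.get("resources", []):
        for instance in resource.get("instances", []):
            instance.get("attributes", {}).pop(attr, None)
    state["serial"] = state.get("serial", 0) + 1
    return state

def clean_state_file(src: str, dst: str, attr: str) -> None:
    # Example: load the output of `terraform state pull`, strip the attribute,
    # and write a new file to review (diff against src) before `terraform state push`.
    with open(src) as f:
        state = json.load(f)
    with open(dst, "w") as f:
        json.dump(strip_attribute(state, attr), f, indent=2)
```

As with any manual state surgery, diff the output against the original and keep the original file around until a plan against the pushed state looks sane.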
That is quite similar to what I've done. We will see how many more downvotes the suggestions of editing the state will get from the purists here ...
I agree this should probably be handled more programmatically, via an option within the 0.13upgrade command or maybe some other, safer state manipulation/fix CLI commands that would allow attribute fixing. But at the end of the day, if you are upgrading, or began upgrading and going back is more of an unknown than going forward, then you gotta be pragmatic about the tools you have. Manual state manipulation on a large scale is definitely bad practice under normal operational conditions, but this is a bug.
Exactly my thoughts too.
I second this. I understand there are a few ways to Houdini my way around the issue, and I genuinely appreciate the suggestions. However, that said, I'm in a regulated industry with audited pipelines and workflows; editing the state in production is a non-starter for us. We ran into this error in dev pipelines that automatically test with the latest tools. For now, we were able to roll back and pin versions. But we need a better go-forward plan than what is currently available. As it stands, I would most certainly consider this a bug, and a total blocker to upgrading.
For those having issues who do not want to modify the state manually, follow these steps to roll back:
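The concrete rollback steps did not survive the page extraction here, but the gist of rolling back is pinning both the core and provider versions before running init again. A hedged sketch (the version numbers come from the reports above; everything else is illustrative):

```hcl
terraform {
  required_version = "= 0.12.29"
}

provider "aws" {
  # Pin back to the last 2.x release used before the upgrade.
  version = "= 2.70.0"
}
```

You would also need to reinstall the 0.12.29 binary itself, e.g. with tfswitch as suggested further down the thread.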
@brucedvgw this was closed by #25779 because a fix was merged to master, but it's not in a release yet. Subscribe to releases and watch the changelog to confirm when the bug fix lands in a release.
Thanks @notnmeyer, I will keep an eye out for the release. In the meantime I have some fixing up to do. 🤞👍
BTW @eduardopuente, you have a typo and a missing sudo in your command; it should be:
Just wanted to recommend the tfswitch / tgswitch (for Terragrunt users) tools; they do all the legwork for you.
This solved the issue for me: #25819 (comment)
Amazing, for me too! 🎉 Had been trying to fix that pesky
It doesn't work for me... anyone else? It returned
This is also happening with the for |
Maybe try #25819 (comment)
If anyone still has this issue while waiting for the fix to be released, I wrote a quick script to automate the state modification process. It pulls the state file, removes all usage of a specified attribute, and after you review, commits it back to your state: https://gist.github.com/AlienHoboken/60db4572f087f82446a5c64e617386d6

The script depends on jq and should be run from your Terraform workspace: terraform_remove_attrs.sh remove [attribute_name]

❯ ~/terraform_remove_attrs.sh remove request_parameters_in_json
Please review diff and run "terraform_remove_attrs.sh commit" to continue
4c4
< "serial": 14,
---
> "serial": 15,
42d41
< "request_parameters_in_json": null,
161d159
<

❯ ~/terraform_remove_attrs.sh commit
Commiting state.new.json to workspace

Really not a fan of the manual state modification, but this lets us use 0.13 while also taking care of this issue. Looking forward to the fix being released!
For anyone else coming here after seeing similar errors, this is now fixed in release v0.13.1 🎉
@stellirin this is still an issue when trying to import a resource using 0.13.1 and AWS provider 3.4...
This error was fixed automatically for me using v0.13.2 :)
FWIW it was also fixed for me using v0.13.1 (even though v0.13.2 is out)
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Version
Although it's also being reported with v0.12.29.
Terraform Configuration Files
main.tf
testmodule/main.tf:
Debug Output
Please ask if you need it.
Expected Behavior
Since there are no references to any removed attributes, there should be no errors after upgrading the provider.
Actual Behavior
Steps to Reproduce
1. terraform init
2. terraform apply
3. Change version = "2.70.0" to version = "3.0.0"
4. terraform init
5. terraform apply
References