
Provider Removed Attributes Causing "data could not be decoded from the state: unsupported attribute" Error #25752

Closed
bflad opened this issue Aug 5, 2020 · 37 comments · Fixed by #25779
Labels
bug; confirmed — a Terraform Core team member has reproduced this issue; v0.12 — Issues (primarily bugs) reported against v0.12 releases; v0.13 — Issues (primarily bugs) reported against v0.13 releases

Comments

@bflad (Contributor) commented Aug 5, 2020

Terraform Version

v0.13.0-rc1

Although it's also being reported with v0.12.29.

Terraform Configuration Files

main.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "2.70.0"
    }
  }

  required_version = "0.13.0"
}

provider "aws" {
  region = "us-east-2"
}

module "test" {
  source = "./testmodule"
}

output "region" {
  value = module.test.region_name
}

testmodule/main.tf:

data "aws_region" "current" {}

output "region_name" {
  value = data.aws_region.current.name
}

Debug Output

Please ask if you need it.

Expected Behavior

Since there are no references to any removed attributes, there should be no errors after upgrading the provider.

Actual Behavior

Error: Invalid resource instance data in state

  on testmodule/main.tf line 1:
   1: data "aws_region" "current" {}

Instance module.test.data.aws_region.current data could not be decoded from
the state: unsupported attribute "current".

Steps to Reproduce

  1. terraform init
  2. terraform apply
  3. Change version = "2.70.0" to version = "3.0.0"
  4. terraform init
  5. terraform apply
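
Condensed as a shell session, the reproduction looks like this (a sketch; the edit step is paraphrased and output is abbreviated):

terraform init
terraform apply
# edit main.tf: change version = "2.70.0" to version = "3.0.0"
terraform init
terraform apply
# fails with: Error: Invalid resource instance data in state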

References

@jbardin (Member) commented Aug 6, 2020

Thanks @bflad,

I located the problem with data sources in 0.13, but I'm not certain where the failure reported for managed resources would come from. I assume the failure with a managed resource would be similar for 0.13 and 0.12, but I haven't been able to replicate it with either version yet.

@danieldreier added the confirmed (a Terraform Core team member has reproduced this issue) and v0.12 (issues, primarily bugs, reported against v0.12 releases) labels, and removed the new (new issue not yet triaged) label Aug 7, 2020
@jbardin (Member) commented Aug 7, 2020

It looks like some of the reports aren't valid, as they may actually be referencing attributes that were removed in 3.0.0, e.g. hashicorp/terraform-provider-aws#14431 (comment)

I think the incoming PR should cover the cases seen here, but we can re-evaluate if there is a reproduction with a managed resource too.

@phyber commented Aug 10, 2020

> I'm not certain where the failure reported for managed resources would come from.

This probably doesn't help, but I get this failure with managed resources, specifically aws_iam_access_key and aws_route53_zone. I use both of these resources with a for_each loop to create multiple instances of them with varying parameters.

The aws_iam_access_key resource complains about the ses_smtp_password attribute, while the aws_route53_zone resource complains about the vpc_id attribute. I have never used these parameters on these resources. I don't use SES at all and the Route53 zones exist in an account that doesn't make use of VPCs (it's exclusively for use by Route53).

Error: Invalid resource instance data in state

  on modules/backups/iam_user.tf line 27:
  27: resource aws_iam_access_key access {

Instance module.backups.aws_iam_access_key.access["user-example-org"] data
could not be decoded from the state: unsupported attribute
"ses_smtp_password".

Error: Invalid resource instance data in state

  on modules/zones/zone.tf line 1:
   1: resource aws_route53_zone zone {

Instance module.zones.aws_route53_zone.zone["example.org"] data could not be
decoded from the state: unsupported attribute "vpc_id".

In the case of both resources, there are also outputs that depend on them; however, I do not reference either of the attributes that are causing trouble. Each of these outputs performs a for loop to access other attributes on the resources.

# For example
output "name_servers" {
  description = "name_servers of the created zones"
  value       = {
    for zone in aws_route53_zone.zone:
    zone.name => zone.name_servers
  }
}

Terraform 0.12.29 with AWS provider 3.0.0 and 3.1.0 does not exhibit this behaviour, while Terraform 0.13.0 (release and RC) with AWS provider 3.0.0 and 3.1.0 does. At this time, this appears to completely block upgrading to Terraform 0.13.0 for users in this situation.

Edit: I've just noticed that this also breaks terraform state show.

$ terraform state show 'module.zones.aws_route53_zone.zone["example.org"]'
unsupported attribute "vpc_id"
# module.zones.aws_route53_zone.zone["example.org"]:
resource "aws_route53_zone" "zone" {

No further output is shown after the {.
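
Until a fix lands, one way to inspect such an instance despite the broken state show is to pull the raw state and filter it (a sketch, assuming jq and the resource type from the example above):

terraform state pull | jq '.resources[] | select(.type == "aws_route53_zone")'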

@chathsuom commented Aug 10, 2020

The same happens here.

terraform state show data.aws_availability_zones.azs
unsupported attribute "blacklisted_names"
# data.aws_availability_zones.azs:
data "aws_availability_zones" "azs" {

@khaledavarteq

> terraform state show data.aws_availability_zones.azs
> unsupported attribute "blacklisted_names"
> # data.aws_availability_zones.azs:
> data "aws_availability_zones" "azs" {

I, too, had an issue after upgrading to v0.13.0, with the aws_availability_zones data source. I solved it by removing references to the data source, then executing terraform plan, and finally re-adding the reference to the data source. After that, I had no more issues using terraform plan or terraform show.

@rymancl commented Aug 11, 2020

I'm seeing this with the aws_iam_role data source on Terraform v0.13.0 and AWS provider v3.1.0.

Error: Invalid resource instance data in state

  on .terraform/modules/my-module/main.tf line 158:
 158: data "aws_iam_role" "my-role" {

Instance
module.my-module.data.aws_iam_role.my-role data
could not be decoded from the state: unsupported attribute
"assume_role_policy_document".

That attribute is not referenced. my-module/main.tf line 158:

data "aws_iam_role" "my-role" {
  name = var.my-role-name
}

This is in a common module repository, so removing the reference to the data source and adding it back isn't an option.

@konstl000

The issue is with the state, not with the .tf files. I encountered a bunch of these messages today while working on a relatively big project, and the working solution was to manually remove all deprecated or removed attributes from the state file. Once they were all gone, terraform plan worked normally again.
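
A minimal sketch of locating such leftover attributes in a pulled state file (assuming jq is available; ses_smtp_password is just an example attribute name from the reports above):

terraform state pull > state.json
# list every resource whose instances still carry the removed attribute
jq -r '.resources[]
  | select(any(.instances[]; .attributes | has("ses_smtp_password")))
  | .type + "." + .name' state.json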

@SJM-J2 commented Aug 11, 2020

We also ran into this yesterday. The combination of 0.13 + the 3.0.0/3.1.0 AWS provider gives us this (one of many we ran into):

Error: Invalid resource instance data in state

on ../../tf_modules/api_gateway_resource/api_gateway_resource.tf line 51:
51: resource "aws_api_gateway_method" "this_method" {

Instance module.api_resource_history.aws_api_gateway_method.this_method["ANY"]
data could not be decoded from the state: unsupported attribute
"request_parameters_in_json".

We have not now, nor ever, used the 'request_parameters_in_json' attribute. We also tested deploying a new stack, upgrading the version, and then running terraform plan against that new stack, and got the same error.

I understand deprecated/removed attributes, but we can't remove ones we never used.

Additionally, we have also seen the same behavior @phyber noted with regard to 'terraform show' (ran into it trying to troubleshoot).

@konstl000

> We also ran into this yesterday. The combination of 0.13 + the 3.0.0/3.1.0 AWS provider gives us this (one of many we ran into):
>
> Error: Invalid resource instance data in state
>
> on ../../tf_modules/api_gateway_resource/api_gateway_resource.tf line 51:
> 51: resource "aws_api_gateway_method" "this_method" {
>
> Instance module.api_resource_history.aws_api_gateway_method.this_method["ANY"]
> data could not be decoded from the state: unsupported attribute
> "request_parameters_in_json".
>
> We have not now, nor ever, used the 'request_parameters_in_json' attribute. We also tested deploying a new stack, upgrading the version, and then running terraform plan against that new stack, and got the same error.
>
> I understand deprecated/removed attributes, but we can't remove ones we never used.
>
> Additionally, we have also seen the same behavior @phyber noted with regard to 'terraform show' (ran into it trying to troubleshoot).

You can remove them, since they are in the state file ...

@mimozell

You shouldn't need to edit the state manually though...

@konstl000

> You shouldn't need to edit the state manually though...

I don't like the manual editing either, but it is the only thing that worked.
Also, if the state is in a versioned bucket, one can always roll back ...
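
For example, with an S3 backend on a versioned bucket, a previous state version can be retrieved like this (a sketch; the bucket name and key are placeholders):

# list prior versions of the state object
aws s3api list-object-versions --bucket my-state-bucket --prefix path/to/terraform.tfstate
# download a specific previous version to inspect or restore
aws s3api get-object --bucket my-state-bucket --key path/to/terraform.tfstate \
  --version-id <VERSION_ID> terraform.tfstate.rollback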

@konstl000

Of course, ideally, Terraform itself should just remove the deprecated or removed attributes instead of throwing errors, since they no longer provide any meaning.

@SJM-J2 commented Aug 11, 2020

Editing state manually isn't a "solution". Not only is that overwhelmingly a "hack", but I would have to do that across potentially hundreds or even thousands of stacks.

And rolling back to a previous version doesn't solve the problem, since the issue is caused by Terraform inserting null values for unused params in the state file. Now that those params have been deprecated/removed, it is complaining about them being in the state. The problem is that Terraform is the one that put them there on its own.

There needs to be some way to gracefully handle these unknown values, maybe an override/ignore flag, so that we can upgrade and continue to carry current resources forward.

@konstl000

I am not saying that manually editing the state file is the way it should be done in the future, just a quick fix or hack if you want to be able to continue. Of course, it is not great if you have thousands of stacks, but then one could write a program or a script to do it. Or wait for Terraform itself to fix this.

@Luwdo commented Aug 11, 2020

Yeah, OK, so the only real solution other than rolling back is to do something like this:

terraform state pull > state.json

then edit that JSON to remove the now-removed attributes.

You will have to manually increment the "serial": [number] field at the top of that JSON file so Terraform knows you are incrementing the state.

Luckily there is some validation on the terraform state push command, so when you do a:

terraform state push state.json

It shouldn't let you break your remote state.

If your state is massive it can be very tedious. If you are running any regex find/replace, I would recommend saving a copy and doing a diff to verify the changes; that way you also have a copy of the original state to fall back to.

Luckily our Terraform repos make heavy use of terraform_remote_state to break our state into small manageable pieces, and that access is read-only. So far it has not been an issue to read a 0.12-managed state backend via terraform_remote_state from a 0.13 binary, so we can make fixes incrementally.
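
For reference, a scripted version of that pull/edit/push loop might look like this (a sketch, assuming jq; request_parameters_in_json is the attribute from the report above):

terraform state pull > state.json
# bump the serial and drop the removed attribute from every instance
jq '.serial += 1
  | .resources[].instances[].attributes |= del(.request_parameters_in_json)' \
  state.json > state.new.json
diff state.json state.new.json   # review the changes before pushing
terraform state push state.new.json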

@konstl000

> Yeah, OK, so the only real solution other than rolling back is to do something like this:
>
> terraform state pull > state.json
>
> then edit that JSON to remove the now-removed attributes.
>
> You will have to manually increment the "serial": [number] field at the top of that JSON file so Terraform knows you are incrementing the state.
>
> Luckily there is some validation on the terraform state push command, so when you do a:
>
> terraform state push state.json
>
> It shouldn't let you break your remote state.
>
> If your state is massive it can be very tedious. If you are running any regex find/replace, I would recommend saving a copy and doing a diff to verify the changes; that way you also have a copy of the original state to fall back to.
>
> Luckily our Terraform repos make heavy use of terraform_remote_state to break our state into small manageable pieces, and that access is read-only. So far it has not been an issue to read a 0.12-managed state backend via terraform_remote_state from a 0.13 binary, so we can make fixes incrementally.

That is quite similar to what I've done. We will see how many more downvotes the suggestions of editing the state will get from the purists here ...

@Luwdo commented Aug 11, 2020

I agree this should probably be handled more programmatically, via an option within the 0.13upgrade command or maybe some other, safer state manipulation/fix CLI commands that would allow attribute fixing.

But at the end of the day, if you are upgrading (or began upgrading) and going back is more of an unknown than going forward, you have got to be pragmatic about the tools you have. Manual state manipulation on a large scale is definitely bad practice under normal operational conditions, but this is a bug.

@konstl000

> I agree this should probably be handled more programmatically, via an option within the 0.13upgrade command or maybe some other, safer state manipulation/fix CLI commands that would allow attribute fixing.
>
> But at the end of the day, if you are upgrading (or began upgrading) and going back is more of an unknown than going forward, you have got to be pragmatic about the tools you have. Manual state manipulation on a large scale is definitely bad practice under normal operational conditions, but this is a bug.

Exactly my thoughts too

@SJM-J2 commented Aug 11, 2020

> I agree this should probably be handled more programmatically, via an option within the 0.13upgrade command or maybe some other, safer state manipulation/fix CLI commands that would allow attribute fixing.
>
> But at the end of the day, if you are upgrading (or began upgrading) and going back is more of an unknown than going forward, you have got to be pragmatic about the tools you have. Manual state manipulation on a large scale is definitely bad practice under normal operational conditions, but this is a bug.
>
> Exactly my thoughts too

I second this.

I understand there are a few ways to Houdini my way around the issue, and I genuinely appreciate the suggestions.

However, that said, I'm in a regulated industry with audited pipelines and workflows; editing the state in production is a non-starter for us. We ran into this error in dev pipelines that automatically test with the latest tools. For now, we were able to roll back and pin versions. But we need a better go-forward plan than what is currently available. As it stands, I would most certainly consider this a bug, and a total blocker to upgrading.

@eduardopuente commented Aug 14, 2020

For those having issues who do not want to modify the state manually, follow these steps to roll back:

  • Downgrade to Terraform 0.12.29
    If using Homebrew:
    brew install terraform@0.12
    cp -R /usr/local/Cellar/terraform@0.12/0.12.29 /usr/local/Cellar/terraform/
    brew switch terraform 0.12.29

  • Check that your versions.tf file uses version 0.12. Example:
    terraform { required_version = ">= 0.12" }

  • Run terraform init -reconfigure
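
As a quick sanity check (not part of the original steps), confirm the downgrade took effect before re-running init:

terraform version   # should now report Terraform v0.12.29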

@notnmeyer commented Aug 14, 2020

@brucedvgw this was closed by #25779 because a fix was merged to master, but it's not in a release yet. Subscribe to releases and watch the changelog to confirm when the bug fix lands in a release.


@brucedvgw

Thanks @notnmeyer, I will keep an eye out for the release. In the meantime I have some fixing up to do. 🤞👍

@julian-alarcon

BTW @eduardopuente, you have a typo and a missing sudo in your command; it should be:
sudo cp -R /usr/local/Cellar/terraform@0.12/0.12.29 /usr/local/Cellar/terraform/

@0mnius commented Aug 17, 2020

> For those having issues who do not want to modify the state manually, follow these steps to roll back:
>
> • Downgrade to Terraform 0.12.29
>   If using Homebrew:
>   brew install terraform@0.12
>   cp -R /usr/local/Cellar/terraform@0.12/0.12.29 /usr/local/Cellar/terraform/
>   brew switch terraform 0.12.29
> • Check that your versions.tf file uses version 0.12. Example:
>   terraform { required_version = ">= 0.12" }
> • Run terraform init -reconfigure

Just wanted to recommend the tfswitch/tgswitch (for Terragrunt users) tools; they do all the legwork for you.

@mukta-puri

This solved the issue for me: #25819 (comment)

@akindo commented Aug 18, 2020

> This solved the issue for me: #25819 (comment)

Amazing, for me too! 🎉 I had been trying to fix that pesky Using previously-installed -/aws v3.2.0 output message. The replace-provider option did both that and fixed the "data could not be decoded from the state" error.
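
The linked comment isn't reproduced here, but based on the description it presumably points at terraform state replace-provider; per the 0.13 upgrade guide, migrating the legacy -/aws address looks like this:

terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws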

@gentksb commented Aug 19, 2020

> This solved the issue for me: #25819 (comment)

It doesn't work for me... anyone else? It returned No matching resources found.

@mohsinhijazee

This is also happening with the cache_behavior and active_trusted_signers attributes of the aws_cloudfront_distribution resource type. It seems like a great many attributes have been removed while the state isn't upgraded via terraform 0.13upgrade (which only rewrites the source code at the moment); perhaps this should be a subcommand of terraform state, such as terraform state upgrade v0.13.

@mohsinhijazee commented Aug 19, 2020

> This solved the issue for me: #25819 (comment)
>
> It doesn't work for me... anyone else? It returned No matching resources found.

Maybe try #25819 (comment)

@AlienHoboken commented Aug 19, 2020

If anyone still has this issue while waiting for the fix to be released, I wrote a quick script to automate the state modification process. It pulls the state file, removes all usage of a specified attribute, and, after you review, will commit it back to your state.

https://gist.github.com/AlienHoboken/60db4572f087f82446a5c64e617386d6

The script depends on jq and should be run from your terraform workspace.

terraform_remove_attrs.sh remove [attribute_name]
Pulls the state, removes all usage of the attribute, increments serial and then generates a diff for review. Example:

~/terraform_remove_attrs.sh remove request_parameters_in_json
Please review diff and run "terraform_remove_attrs.sh commit" to continue

4c4
<   "serial": 14,
---
>   "serial": 15,
42d41
<             "request_parameters_in_json": null,
161d159
< 

terraform_remove_attrs.sh commit
Pushes the changes back to your terraform state and removes the temporary files. Example:

~/terraform_remove_attrs.sh commit                                                                                                                                                    
Commiting state.new.json to workspace

Really not a fan of the manual state modification but this lets us use 0.13 while also taking care of this issue. Looking forward to the fix being released!

@stellirin

For anyone else coming here after seeing similar errors, this is now fixed in release v0.13.1 🎉

@dimisjim

@stellirin this is still an issue when trying to import a resource using 0.13.1 and aws provider 3.4...

@rahuljainmck

This error was fixed automatically for me using v0.13.2 :)

@TylerBrock

FWIW it was also fixed for me using v0.13.1 (even though v0.13.2 is out)

@ghost commented Sep 12, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited the conversation to collaborators Sep 12, 2020