
refresh > resource not found (can?) create an error #496

Closed
frntn opened this issue Oct 22, 2014 · 5 comments · Fixed by #1254
Comments


frntn commented Oct 22, 2014

When refreshing the state, the update behaviour seems to depend on the resource type.
For example, with the following repro.tf:

resource "aws_vpc" "internal" {
  cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "a" {
  vpc_id = "${aws_vpc.internal.id}"
  cidr_block = "10.0.0.0/17"
  availability_zone = "${var.region}a"
}

resource "aws_subnet" "b" {
  vpc_id = "${aws_vpc.internal.id}"
  cidr_block = "10.0.128.0/17"
  availability_zone = "${var.region}b"
}
resource "aws_db_subnet_group" "ha-rds" {
  name = "subnetgrp-ha-rds"
  description = "RDS subnet group"
  subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"]
}

If I manually delete a subnet, terraform refresh updates the state accordingly.
If I manually delete the db subnet group, terraform refresh outputs an error:

aws_vpc.internal: Refreshing state... (ID: vpc-c66daaa3)
aws_subnet.b: Refreshing state... (ID: subnet-9d48bdc4)
aws_subnet.a: Refreshing state... (ID: subnet-95e534e2)
aws_db_subnet_group.ha-rds: Refreshing state... (ID: subnetgrp-ha-rds)
Error refreshing state: DBSubnetGroupNotFoundFault: DB Subnet group 'subnetgrp-ha-rds' not found.
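
(To reproduce the out-of-band deletions above with the standard AWS CLI, using the identifiers from this sample, the commands would be along these lines:)

aws ec2 delete-subnet --subnet-id subnet-95e534e2
aws rds delete-db-subnet-group --db-subnet-group-name subnetgrp-ha-rds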

It doesn't crash with this sample repro.tf, but I know it can with a more complex use case, because it already has: https://gist.github.com/frntn/ef6c5e1b12a6f12fc99a

version: terraform v0.3.1
platform: linux 64 (debian wheezy)


catsby commented Mar 17, 2015

This still happens on master, though looking at the code I'm not sure it's a bug. The reported crash would be one, but I can't reproduce it on master as of yet.

When refreshing the state, the update behaviour seems to depend on the resource type.

This is true; each resource defines its own read function and so can behave differently. This is such a case: as you can see, aws_subnet removes itself from the state and returns nil when not found:
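
(A minimal sketch of that pattern rather than the verbatim source; AWSClient/ec2conn are provider-internal names and the import paths reflect the aws-sdk-go-based tree, both assumed here:)

package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).ec2conn // provider-internal client, assumed name

	resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsInput{
		SubnetIds: []*string{aws.String(d.Id())},
	})
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "InvalidSubnetID.NotFound" {
			// The subnet is gone: clear the ID so Terraform drops the
			// resource from state, and return no error. The next plan
			// then proposes re-creating it.
			d.SetId("")
			return nil
		}
		return err
	}

	// ... populate d from resp (attribute mapping omitted in this sketch) ...
	_ = resp
	return nil
}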

Whereas aws_db_subnet_group returns the error:
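
(Again a sketch with the same caveats about assumed names:)

package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rds"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAwsDbSubnetGroupRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).rdsconn // provider-internal client, assumed name

	resp, err := conn.DescribeDBSubnetGroups(&rds.DescribeDBSubnetGroupsInput{
		DBSubnetGroupName: aws.String(d.Id()),
	})
	if err != nil {
		// A DBSubnetGroupNotFoundFault is returned as-is, so refresh
		// aborts with "Error refreshing state: ..." instead of pruning
		// the missing resource from state.
		return err
	}

	// ... populate d from resp ...
	_ = resp
	return nil
}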

If I were to choose, I would side with the aws_subnet behavior, but there may be a specific reason for the aws_db_subnet_group behavior. I'll defer to @mitchellh or @phinze.


phinze commented Mar 17, 2015

If I were to choose, I would side with the aws_subnet behavior

Agreed. I wonder if this is a type of integration test we want to start a pattern for (see the sketch after this list):

  1. Create resource via TF config + apply
  2. Reach out via upstream API and delete resource
  3. Verify that plan+refresh (or just another apply) yields a creation and not an error.
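
(A sketch of what that pattern could look like with the helper/resource acceptance-test framework; testAccSubnetConfig and the testAccCheck* helpers are hypothetical names:)

package aws

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

func TestAccAWSSubnet_disappears(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckSubnetDestroy,
		Steps: []resource.TestStep{
			{
				// 1. Create the resource via TF config + apply.
				Config: testAccSubnetConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckSubnetExists("aws_subnet.a"),
					// 2. Reach out via the upstream API and delete it.
					testAccCheckSubnetDisappears("aws_subnet.a"),
				),
				// 3. The follow-up refresh/plan should propose a
				// re-creation rather than erroring out.
				ExpectNonEmptyPlan: true,
			},
		},
	})
}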


catsby commented Mar 20, 2015

Fixed in #1254


dharmapunk82 commented Aug 23, 2016

Cross-posting to #1254.

I'm hitting this on the latest 0.7.1 running on Ubuntu 14.04, for the aws_eip_association resource. I lost network connectivity during a destroy and was then unable to plan, apply, or taint the two resources in question.

I manually deleted the resources from the tfstate and was then able to destroy the rest of the infrastructure, but wanted to report the bug. Plan and apply both failed with the errors below prior to the manual deletion:

Error refreshing state: 2 error(s) occurred:

* aws_eip_association.ngomez_eip_assoc.1: Unable to find EIP Association: eipassoc-56135930
* aws_eip_association.ngomez_eip_assoc.0: Unable to find EIP Association: eipassoc-fd13599b

Attempting to taint:

terraform taint eipassoc-56135930
Failed to parse resource name: Malformed resource state key: eipassoc-56135930
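
(Note on the taint attempt: terraform taint expects the resource's state key rather than the provider-side ID, so the invocation here would presumably need to look like the line below, with the count index taken from the state listing above:)

terraform taint aws_eip_association.ngomez_eip_assoc.0

That addresses the "Malformed resource state key" parse error, though the refresh failure itself remains.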


ghost commented Apr 23, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators on Apr 23, 2020