
Outputs populated from deposed resources #11716

Closed
alext opened this issue Feb 6, 2017 · 5 comments · Fixed by #25419
Labels
bug · config · v0.11 (issues, primarily bugs, reported against v0.11 releases) · v0.12 (issues, primarily bugs, reported against v0.12 releases)

Comments

@alext

alext commented Feb 6, 2017

When updating an aws_iam_server_certificate that's in use (using create_before_destroy lifecycle), the output in the tfstate file continues to contain the value from the deposed version of the resource instead of the newly created version.

Terraform Version

$ terraform -v
Terraform v0.8.5

Affected Resource(s)

We're seeing this with an aws_iam_server_certificate resource, but I suspect this is not specific to this resource type.

Terraform Configuration Files

resource "aws_iam_server_certificate" "cert" {
  name_prefix = "tf-test-cert-"
  certificate_body = "${file("server.crt")}"
  private_key = "${file("server.key")}"
  lifecycle {
    create_before_destroy = true
  }
}

output "cert_id" {
  value = "${aws_iam_server_certificate.cert.id}"
}

Debug Output

Debug log from both update runs (described below) here: https://gist.github.com/alext/c7696a250f97a8ed6cebb4db6858b026

Actual Behavior

When running terraform to update the certificate, the new certificate is created, and the old one fails to be destroyed (this is expected because it's still in use), but the outputs in the tfstate file continue to contain the values from the old certificate.

A second run (which also fails to delete the deposed certificate) updates the outputs to contain the values for the new certificate.

Expected Behavior

The outputs should have been updated on the first terraform run when the new certificate was first created.

Steps to Reproduce

  • Generate an SSL cert
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout server.key -out server.crt
  • Run terraform apply with the example config above
$ terraform apply .
aws_iam_server_certificate.cert: Creating...
  arn:              "" => "<computed>"
  certificate_body: "" => "480b0641c7e75c3c8616b499300feb3954a14f6b"
  name:             "" => "<computed>"
  name_prefix:      "" => "tf-test-cert-"
  path:             "" => "/"
  private_key:      "" => "1e99a0e8e0c8cce133ee5514fb8e6b10795e2f97"
aws_iam_server_certificate.cert: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

cert_id = ASCAIBUB75J7JG3YJZNBK
  • Make an ELB that uses this cert (or update an existing one to use it).
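For illustration, a minimal classic ELB using the certificate could look something like the one below (the name matches the test-elb seen in the error output further down; the availability zone and ports are placeholders rather than part of the original setup):

resource "aws_elb" "test" {
  # Illustrative only: the availability zone and listener ports are placeholders.
  name               = "test-elb"
  availability_zones = ["eu-west-1a"]

  listener {
    instance_port      = 80
    instance_protocol  = "http"
    lb_port            = 443
    lb_protocol        = "https"
    ssl_certificate_id = "${aws_iam_server_certificate.cert.arn}"
  }
}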
  • Generate a new cert/key:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout server.key -out server.crt
  • Run terraform to replace the cert, which will create a new resource and error when deleting the old one (as expected because it is in use):
$ TF_LOG=DEBUG TF_LOG_PATH=update1.log terraform apply .
aws_iam_server_certificate.cert: Refreshing state... (ID: ASCAIBUB75J7JG3YJZNBK)
aws_iam_server_certificate.cert: Creating...
  arn:              "" => "<computed>"
  certificate_body: "" => "8afbe4092d5db6229075fffa514849279191133e"
  name:             "" => "<computed>"
  name_prefix:      "" => "tf-test-cert-"
  path:             "" => "/"
  private_key:      "" => "8544c0ee87c1e76e4fccc4b7b542b4588bae9fbb"
aws_iam_server_certificate.cert: Creation complete
aws_iam_server_certificate.cert (deposed #0): Destroying...
aws_iam_server_certificate.cert (deposed #0): Still destroying... (10s elapsed)
...SNIP...
aws_iam_server_certificate.cert (deposed #0): Still destroying... (2m40s elapsed)
aws_iam_server_certificate.cert (deposed #0): Still destroying... (2m50s elapsed)
Error applying plan:

1 error(s) occurred:

* aws_iam_server_certificate.cert (deposed #0): DeleteConflict: Certificate: ASCAIBUB75J7JG3YJZNBK is currently in use by arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/test-elb. Please remove it first before deleting it from IAM.
	status code: 409, request id: 45d3fe95-ec62-11e6-9897-4f1aba564e64

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

At this point the output in the tfstate file contains the ID of the deposed cert (ASCAIBUB75J7JG3YJZNBK in this example), and not the newly created one.

  • Run terraform apply a second time
$ TF_LOG=DEBUG TF_LOG_PATH=update2.log terraform apply .
aws_iam_server_certificate.cert: Refreshing state... (ID: ASCAIE47CWA2HHSZEYMK2)
aws_iam_server_certificate.cert (deposed #0): Destroying...
aws_iam_server_certificate.cert (deposed #0): Still destroying... (10s elapsed)
...SNIP...
aws_iam_server_certificate.cert (deposed #0): Still destroying... (2m40s elapsed)
aws_iam_server_certificate.cert (deposed #0): Still destroying... (2m50s elapsed)
Error applying plan:

1 error(s) occurred:

* aws_iam_server_certificate.cert (deposed #0): DeleteConflict: Certificate: ASCAIBUB75J7JG3YJZNBK is currently in use by arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/test-elb. Please remove it first before deleting it from IAM.
	status code: 409, request id: f037d682-ec62-11e6-b129-2be51eb74486

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

At this point the tfstate file contains the correct ID in the output value.

@mitchellh
Contributor

Thanks. This is working as intended currently: because the full CBD (create_before_destroy) cycle fails, the resource itself never "completes", so the output is not updated. Put another way: the output's update depends on the successful completion of all resources it depends on. If the destroy or create fails in CBD, the completion failed, so the output isn't updated.

I think we can do better here by constraining an output update to only the creation side of the CBD, though.

@alext
Author

alext commented Feb 6, 2017

Hi, thanks for the response.

The thing is that this behaviour is currently inconsistent between the first and second update runs. Both attempt to destroy the deposed resource and fail, but on the second run the output is updated, whereas on the first run it isn't. At the moment we're able to work around the issue by running the update twice.
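A rough sketch of that workaround against the example config above (the || true is only there so a wrapping script carries on past the expected destroy failure):

$ terraform apply . || true   # creates the new cert; destroying the deposed cert fails
$ terraform output cert_id    # still reports the deposed certificate's ID
$ terraform apply . || true   # the destroy fails again, but the outputs are refreshed
$ terraform output cert_id    # now reports the newly created certificate's ID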

@apparentlymart
Contributor

I spent a little time thinking through what behavior I'd actually expect here.

The most intuitive behavior would be for outputs to closely follow their dependencies: any time one of the dependency variables changes in the state, update the output immediately and flush both the resource update and the output update to persistent state at the same time.

I think that result could be achieved without extensive surgery on the apply graph architecture: just make updating the outputs a side-effect of the EvalWriteState action, rather than an explicit action on the output nodes. (The output node visit would then just be a no-op, I think.)

A brute-force way to do it would be to just re-evaluate all of the outputs in the current module each time any resource state is updated. For all normal cases where the set of outputs is small I expect this would be sufficient.

There are some degenerate cases: either having thousands of outputs, or having a single output that does an expensive operation such as reading a huge file from disk using file(...). Should we want to efficiently support these we could optimize by finding all the graph edges that refer to outputs and annotating the EvalWriteState instances with a list of outputs to update during the graph construction and eval-chain-building phases. I'd personally lean towards just advising people not to do either of these things.

@apparentlymart
Contributor

Hi all! This one's been quiet for a while.

I ended up bumping back into this old issue by accident while working on #21762; I sneaked up on it from a different angle because this time I visited the non-create_before_destroy variant of this issue, and only made the connection back here after I traced it all out and realized it had the same general problem:

The apply walk often ends up having multiple nodes for the same resource that each make a different side-effect, but we only represent the named values (a collective term for local values, output values, and module input variables) by one node in the graph each, and thus the graph builder ends up trying to use those single nodes at multiple points in the walk. Because that's impossible, the actual result is either that the update happens too late (in the create_before_destroy case, for this bug) or the constructed graph contains a cycle (in the normal destroy case, for #21662 and #21762).

I left some more detailed notes in a comment on #21762 that describe what is essentially an evolution of my idea above: re-evaluating the named values constantly during the walk, each time their dependencies change. It'll take some more experimentation to see if that plan can translate from theory to practice, and it could be an invasive enough change that it'd need to wait for a future major release. We'll see!

@hashibot added the v0.11 and v0.12 labels on Aug 29, 2019
@ghost

ghost commented Jul 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Jul 30, 2020