
Importing certificate FROM an ACM CA via aws_acm_certificate forces re-creation every apply #11201

Open · LukasKnuthImagineOn opened this issue Dec 9, 2019 · 7 comments
Labels
  • service/acm: Issues and PRs that pertain to the acm service.
  • service/elb: Issues and PRs that pertain to the elb service.
  • stale: Old or inactive issues managed by automation; if no further action is taken, these will get closed.
  • waiting-response: Maintainers are waiting on response from community or contributor.

Comments

@LukasKnuthImagineOn

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.12.16

  • provider.aws v2.40.0
  • provider.template v2.1.2

Affected Resource(s)

  • aws_acm_certificate

Terraform Configuration Files

This is not the full configuration, but (as far as I can tell) the relevant parts of it.

resource "aws_acm_certificate" "flake_access" { # todo This triggers a re-create every plan!
  lifecycle {
    create_before_destroy = true
  }

  private_key = file("${path.module}/certs/priv_no_enc.pem")
  certificate_body = file("${path.module}/certs/cert.pem")
  certificate_chain = file("${path.module}/certs/chain.pem")
}

resource "aws_elb" "web" {
  name = "gscb-elb"

  subnets         = [aws_subnet.default.id]
  security_groups = [aws_security_group.elb.id]

  listener {
    lb_port = 9988
    lb_protocol = "ssl"
    instance_port = 9988
    instance_protocol = "tcp"
    ssl_certificate_id = aws_acm_certificate.flake_access.arn
  }
}

Panic Output

No panic

Plan Output

Note: I have shortened the load-balancer output to the relevant listener. I have also redacted the account IDs in both ARNs.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
+/- create replacement and then destroy

Terraform will perform the following actions:

  # aws_acm_certificate.flake_access must be replaced
+/- resource "aws_acm_certificate" "flake_access" {
      ~ arn                       = "arn:aws:acm:eu-central-1:redacted:certificate/2851432e-4b0d-4a95-a7e2-9df30b62c9bd" -> (known after apply)
      - certificate_authority_arn = "arn:aws:acm-pca:eu-central-1:other_account:certificate-authority/4fd3634f-ecf0-4912-b52f-b3901e28cf18" -> null # forces replacement
        certificate_body          = "879c08c6fbee675fa5a5738ee05adf346e48a333"
        certificate_chain         = "a6782ca0bcb4032cc20c3f4e88abfce4c7ad866f"
      ~ domain_name               = "some.domain.name" -> (known after apply)
      ~ domain_validation_options = [] -> (known after apply)
      ~ id                        = "arn:aws:acm:eu-central-1:redacted:certificate/2851432e-4b0d-4a95-a7e2-9df30b62c9bd" -> (known after apply)
        private_key               = (sensitive value)
      ~ subject_alternative_names = [] -> (known after apply)
      - tags                      = {} -> null
      ~ validation_emails         = [] -> (known after apply)
      ~ validation_method         = "NONE" -> (known after apply)

      - options {
          - certificate_transparency_logging_preference = "DISABLED" -> null
        }
    }

  # aws_elb.web will be updated in-place
  ~ resource "aws_elb" "web" {
        arn                         = "arn:aws:elasticloadbalancing:eu-central-1:redacted:loadbalancer/gscb-elb"
        dns_name                    = "some.dns.name"
        id                          = "gscb-elb"

      - listener {
          - instance_port      = 9988 -> null
          - instance_protocol  = "tcp" -> null
          - lb_port            = 9988 -> null
          - lb_protocol        = "ssl" -> null
          - ssl_certificate_id = "arn:aws:acm:eu-central-1:redacted:certificate/2851432e-4b0d-4a95-a7e2-9df30b62c9bd" -> null
        }
      + listener {
          + instance_port      = 9988
          + instance_protocol  = "tcp"
          + lb_port            = 9988
          + lb_protocol        = "ssl"
          + ssl_certificate_id = (known after apply)
        }
    }

Plan: 1 to add, 1 to change, 1 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Expected Behavior

Important: The certificate I'm importing was issued by an ACM CA in another account. I then export the certificate via the console, use OpenSSL to decrypt the private key, and import it via Terraform.

I expect the certificate to be imported successfully (which it is) and to remain unchanged on subsequent Terraform runs.

Actual Behavior

The certificate is scheduled to be re-imported on every run. Any dependent resources (such as the ELB listener) are re-created as well.

The plan output shows that the contents of the certificate haven't changed, but the certificate_authority_arn field forces the replacement. My guess is that because the certificate was exported from another account and the ARN still refers to that account's CA, the provider can never reconcile the attribute and produces a permanent diff.
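If the only attribute forcing the replacement is certificate_authority_arn, a possible workaround (untested in this setup, and one that only hides the symptom rather than fixing the provider behaviour) would be to tell Terraform to ignore changes to that attribute:

resource "aws_acm_certificate" "flake_access" {
  private_key       = file("${path.module}/certs/priv_no_enc.pem")
  certificate_body  = file("${path.module}/certs/cert.pem")
  certificate_chain = file("${path.module}/certs/chain.pem")

  lifecycle {
    create_before_destroy = true
    # Untested workaround: suppress the perpetual diff on the attribute
    # that forces replacement. This masks the symptom, not the cause.
    ignore_changes = [certificate_authority_arn]
  }
}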

Steps to Reproduce

  1. Create an ACM CA in Account A.
  2. Issue a certificate with the CA and export it.
  3. Use OpenSSL to decrypt the private key: openssl rsa -in priv.pem -out priv_no_enc.pem
  4. In Account B, import the certificate via Terraform.
  5. Run Terraform again; the certificate is scheduled to be re-imported.

Important Factoids

To reiterate: the certificate I'm importing was issued by an ACM CA in another account. I then export the certificate via the console, use OpenSSL to decrypt the private key, and import it via Terraform.
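A sketch of an alternative I have not verified: import the certificate once outside of Terraform (console or CLI) and reference it read-only via the aws_acm_certificate data source, so Terraform never tries to reconcile the import. The domain below is a placeholder:

data "aws_acm_certificate" "flake_access" {
  domain      = "some.domain.name" # placeholder; the real domain is redacted above
  types       = ["IMPORTED"]
  most_recent = true
}

# The ELB listener would then use:
#   ssl_certificate_id = data.aws_acm_certificate.flake_access.arn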

References

@ghost ghost added service/acm Issues and PRs that pertain to the acm service. service/elb Issues and PRs that pertain to the elb service. labels Dec 9, 2019
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Dec 9, 2019
@jordanbcooper

I seem to be having this issue. After creating these certs and validations, the next run recreates the certs and validations and attaches the new certs to the CloudFront distributions.

Terraform v1.0.8
on linux_amd64
+ provider registry.terraform.io/cloudflare/cloudflare v3.3.0
+ provider registry.terraform.io/hashicorp/aws v3.63.0

Here are what I think are the relevant snippets from the module I wrote for our CloudFront + S3 + ACM setup.

resource "aws_acm_certificate" "cert" {
  provider                  = aws.us-east-1
  domain_name               = var.domain_name
  subject_alternative_names = ["${var.domain_name}"]
  validation_method         = "DNS"
  lifecycle {
    create_before_destroy = true
  }
}

resource "cloudflare_record" "certvalidation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options: dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }
  allow_overwrite = true
  name            = each.value.name
  value           = each.value.record
  ttl             = 60
  type            = each.value.type
  zone_id         = var.zoneid
  
}

resource "aws_acm_certificate_validation" "certvalidation" {
  provider                = aws.us-east-1
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in cloudflare_record.certvalidation: record.hostname ]
}

@justinretzolk
Member

Hey all 👋 Thank you for taking the time to file this issue, and for the additional information.

@LukasKnuthImagineOn - since you initially filed the report (and because I don't want to only address the newer comment!): there have been a number of AWS provider and Terraform releases since then. Are you able to confirm whether you're still running into the original behavior?

@jordanbcooper - since you reported this more recently: is this occurring after an import as well, as mentioned in the initial report, or are you seeing this looping behavior after only creating the resource(s)?

@justinretzolk justinretzolk added waiting-response Maintainers are waiting on response from community or contributor. and removed needs-triage Waiting for first response or review from a maintainer. labels Nov 9, 2021
@jordanbcooper
Copy link

@justinretzolk These are new resources, never imported.

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label Nov 10, 2021
@justinretzolk
Member

Hey @jordanbcooper 👋 Thanks for confirming that. Since your situation isn't after an import, and it looks like you're not using imported (existing) certificates, the behavior you're experiencing may differ a bit from the original issue reported here. Would you mind opening a new issue with the relevant information in the issue template, so we can keep the two separate and avoid unnecessary confusion?

@justinretzolk justinretzolk added the waiting-response Maintainers are waiting on response from community or contributor. label Nov 10, 2021
@jordanbcooper

Will do, sorry about that @justinretzolk

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label Nov 14, 2021
@jordanbcooper

@justinretzolk #21770

@justinretzolk justinretzolk added the waiting-response Maintainers are waiting on response from community or contributor. label Nov 15, 2021

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

@github-actions github-actions bot added the stale Old or inactive issues managed by automation, if no further action taken these will get closed. label Nov 24, 2024