
No longer able to link to multiple VPCs private zones across multiple AWS accounts #7805

Closed
tomelliff opened this issue Mar 4, 2019 · 3 comments
Labels
service/route53 Issues and PRs that pertain to the route53 service. upstream Addresses functionality related to the cloud provider.

Comments

@tomelliff
Contributor

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.11.10

  • provider.aws v1.57.99

Note that the 1.57.99 version is a custom build of #6314 on top of the 1.57 release.

Affected Resource(s)

  • aws_route53_zone

Terraform Configuration Files

The old style worked fine, as it was not exclusive and allowed other VPCs to be linked to the zone:

resource "aws_route53_zone" "private_zone" {
  name   = "${var.name}"
  vpc_id = "${data.aws_vpc.selected.id}"
}

Adding more VPCs to the private zone out of band or in a different module (for example via the aws_route53_zone_association resource) worked fine.
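For illustration, a minimal sketch of that old-style pattern (the association resource name and peered-VPC variable are hypothetical, not from the reporter's actual module):

```hcl
# Old-style (pre-2.0) private zone: vpc_id was not authoritative,
# so associations added elsewhere were left untouched by plans.
resource "aws_route53_zone" "private_zone" {
  name   = "${var.name}"
  vpc_id = "${data.aws_vpc.selected.id}"
}

# An additional VPC linked out of band, e.g. from a peering module.
resource "aws_route53_zone_association" "peered" {
  zone_id = "${aws_route53_zone.private_zone.zone_id}"
  vpc_id  = "${var.peered_vpc_id}"
}
```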

Using the new vpc configuration block (vpc.vpc_id) instead of the top-level vpc_id parameter:

resource "aws_route53_zone" "private_zone" {
  name = "${var.name}"

  vpc {
    vpc_id = "${aws_vpc.vpc.id}"
  }
}

Now produces a diff such as the one below:

  ~ module.vpc.aws_route53_zone.private_zone
      vpc.#:                 "2" => "1"
      vpc.2048611775.vpc_id: "vpc-0a1a1b50846f90fde" => "vpc-0a1a1b50846f90fde"
      vpc.2371360545.vpc_id: "vpc-78e6b21d" => ""

Where it is trying to remove the vpc-78e6b21d VPC that was added outside of the resource as part of a VPC peering module.

This would maybe be fine if we could actually use aws_route53_zone to do cross-account associations, but that's not possible, so we have a mess of code to handle this working across accounts, and it is now broken by the new format being exclusive. Even then, it would be difficult to work out how to structure our code to handle VPCs and peering differently here.

As noted in https://www.terraform.io/docs/providers/aws/r/route53_zone_association.html, we can ignore changes here, but this seems like a more painful regression than the issues fixed by #6299. That said, that's viewed from my use case, where I don't care about cross-region VPC peering yet.

@bflad Is there a reason that the vpc block needs to be an exclusive set here?

Expected Behavior

The other VPC(s) shouldn't be removed. If the defined VPC has been removed outside of Terraform, or the Terraform code changes, then Terraform should update appropriately, as before.

Actual Behavior

Other VPCs are removed.

Steps to Reproduce

  1. terraform apply

Important Factoids

We're doing cross-account VPC peering and sharing the private zones with each peered VPC. Our cross-account Route53 zone association code is a mess of shelling out to work around the fact that this isn't yet doable in Terraform directly. I can provide this code if people need it, but I don't think it's necessary to reproduce the issue.
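This is not the reporter's actual code, but a hypothetical sketch of the kind of shelling out being described, using the two Route53 API calls involved (the profile names, variables, and resource name are assumptions):

```hcl
# Hypothetical sketch: authorize the association from the account that
# owns the zone, then accept it from the account that owns the VPC.
resource "null_resource" "cross_account_association" {
  provisioner "local-exec" {
    command = <<EOF
aws route53 create-vpc-association-authorization \
  --hosted-zone-id ${aws_route53_zone.private_zone.zone_id} \
  --vpc VPCRegion=${var.peer_region},VPCId=${var.peer_vpc_id} \
  --profile zone-owner
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id ${aws_route53_zone.private_zone.zone_id} \
  --vpc VPCRegion=${var.peer_region},VPCId=${var.peer_vpc_id} \
  --profile vpc-owner
EOF
  }
}
```

Because the zone resource now manages its vpc blocks exclusively, any association made this way would be reverted on the next apply unless vpc is listed in ignore_changes.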

References

@bflad bflad added the service/route53 Issues and PRs that pertain to the route53 service. label Mar 4, 2019
@bflad
Contributor

bflad commented Mar 6, 2019

Hi @tomelliff 👋 Thanks for writing in.

This situation was more complicated than many other Terraform AWS Provider resources because of the perpetual requirement for at least one VPC association for private hosted zones and Route53's lack of support for a private zone with no associations. Previously, there was no way to declare all associations within the same resource, which meant that exclusive management of the associations was not achievable and the import logic would need to be customized for just this resource.

The current iteration for cross-account VPC associations (e.g. #2005) certainly makes this situation undesirable by requiring ignore_changes, but we are unlikely to roll back the vpc configuration block change until we can further assess how to improve the workflow with Terraform resources and Route 53. We will continue to encourage people to reach out through their AWS Support or TAM channels to request that the service team improve their APIs to better support an idempotent configuration where resources in each account can authoritatively determine their association/acceptance status.

As you alluded to above, the likely best handling of your configuration is with ignore_changes as noted in the aws_route53_zone_association resource documentation, which is the basis for how to support cross-account associations given the current situation:

resource "aws_vpc" "primary" {
  cidr_block           = "10.6.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

resource "aws_vpc" "secondary" {
  cidr_block           = "10.7.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

resource "aws_route53_zone" "example" {
  name = "example.com"

  # NOTE: The aws_route53_zone vpc argument accepts multiple configuration
  #       blocks. The below usage of the single vpc configuration, the
  #       lifecycle configuration, and the aws_route53_zone_association
  #       resource is for illustrative purposes (e.g. for a separate
  #       cross-account authorization process, which is not shown here).
  vpc {
    vpc_id = "${aws_vpc.primary.id}"
  }

  lifecycle {
    ignore_changes = ["vpc"]
  }
}

resource "aws_route53_zone_association" "secondary" {
  zone_id = "${aws_route53_zone.example.zone_id}"
  vpc_id  = "${aws_vpc.secondary.id}"
}

@bflad bflad added the upstream Addresses functionality related to the cloud provider. label Mar 6, 2019
@tomelliff
Contributor Author

Yeah I guess it's just my aversion to using ignore_changes as it feels like I'm doing something wrong.

If there's no way to handle the other use cases, but this use case has a workaround, then I think it's fair to call this expected behavior, especially since the aws_route53_zone_association documentation already covers it explicitly.

@ghost

ghost commented Mar 31, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 31, 2020