[stable/external-dns] Tutorial instructions lead to ownership conflict in multi-cluster set up #22196
Labels: lifecycle/stale
Describe the bug
Following the helm install tutorial instructions leads to a DNS record ownership conflict in AWS Route 53 if installing external-dns into multiple clusters that need to create/delete/update records in the same zone.
https://github.com/helm/charts/tree/master/stable/external-dns#tutorials
The tutorial shows using the HOSTED_ZONE_IDENTIFIER as the txtOwnerId, which is fine if you only need one deployment of external-dns to have ownership of records in that DNS zone. If you follow these instructions and deploy external-dns into a second cluster (or more) with the intent of manipulating records in the same DNS zone, and you set the same HOSTED_ZONE_IDENTIFIER, then both external-dns instances will fight for ownership, causing a delete/create loop of ingress records.

Version of Helm and Kubernetes:
Helm v3.1.2 and Kubernetes version 1.15
Which chart:
stable/external-dns https://github.com/helm/charts/tree/master/stable/external-dns#external-dns
What happened:
Following the tutorial and setting the same HOSTED_ZONE_IDENTIFIER in multiple deployments of external-dns causes an ownership conflict. One instance of external-dns will detect that it didn't create a record that another external-dns instance created, causing the first instance to delete the record. The second instance then detects the record is missing and recreates it, and the loop repeats.
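As we understand it, external-dns tracks which instance owns a record by writing a companion TXT record whose value embeds the txtOwnerId; when every cluster uses the same ID, each instance treats the other's records as its own and garbage-collects any it has no matching Ingress for. A rough sketch of such an ownership record (the name is a placeholder and the exact value layout may vary by external-dns version):

```yaml
# Companion TXT record maintained by external-dns next to the managed A/CNAME record
# (illustrative name; actual value layout depends on the external-dns version)
name: myapp.whatever.company.com
type: TXT
value: "heritage=external-dns,external-dns/owner=<txtOwnerId>,external-dns/resource=ingress/default/myapp"
```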
What you expected to happen:
The instructions/tutorial should make clearer, as other places within the helm chart already do, that the txtOwnerId should not match across different deployments of external-dns if those deployments are meant to have ownership of records in the same zone. The wording should be changed to indicate using a value for txtOwnerId that is specific to the cluster the user is deploying external-dns into.

We worked around this by setting txtOwnerId: clusterA for the first cluster, txtOwnerId: clusterB for the second cluster, and so on. Both external-dns deployments have the same domain in the domainFilters parameter; for example, both are set to domainFilters: whatever.company.com.
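For illustration, a minimal sketch of the per-cluster values we used (provider, domainFilters, and txtOwnerId are the chart parameters mentioned above; the cluster names and domain are placeholders):

```yaml
# values-cluster-a.yaml
provider: aws
domainFilters:
  - whatever.company.com   # the same zone is managed from every cluster
txtOwnerId: clusterA       # unique owner ID for this cluster
---
# values-cluster-b.yaml
provider: aws
domainFilters:
  - whatever.company.com
txtOwnerId: clusterB       # different owner ID, so ownership records don't collide
```

With distinct owner IDs, each instance only deletes or updates records whose ownership TXT record carries its own ID, so the two deployments stop competing for the same entries.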
How to reproduce it (as minimally and precisely as possible):
Deploy external-dns to multiple clusters using the same HOSTED_ZONE_IDENTIFIER with the same domainFilters value.
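A minimal sketch of values that reproduce the conflict when applied unchanged in two or more clusters (the domain and zone ID are placeholders; this mirrors the tutorial's use of the hosted zone ID as the owner ID):

```yaml
# values.yaml -- installed as-is in cluster A and cluster B
provider: aws
domainFilters:
  - whatever.company.com      # both clusters manage records in the same zone
txtOwnerId: Z1234567890ABC    # same HOSTED_ZONE_IDENTIFIER in every cluster -> ownership conflict
```

Installing the chart with this file in each cluster (e.g. helm install external-dns stable/external-dns -f values.yaml) and letting each cluster create records for its own ingresses triggers the delete/create loop described above.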