This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/external-dns] Tutorial instructions lead to ownership conflict in multi-cluster set up #22196

Closed
cuzzo333 opened this issue Apr 29, 2020 · 3 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@cuzzo333

Describe the bug
Following the helm install tutorial instructions leads to a DNS record ownership conflict in AWS Route 53 when installing external-dns into multiple clusters that need to create/delete/update records in the same zone.

https://github.com/helm/charts/tree/master/stable/external-dns#tutorials

The tutorial shows using the HOSTED_ZONE_IDENTIFIER as the txtOwnerId, which is fine if you only need one deployment of external-dns to have ownership of records in that DNS zone. If you follow these instructions and deploy external-dns into a second cluster (or more) with the intent of manipulating records in the same DNS zone, and you set the same HOSTED_ZONE_IDENTIFIER, then both external-dns instances will fight for ownership, causing a delete/create loop of ingress records.

$ helm install my-release \
  --set provider=aws \
  --set aws.zoneType=public \
  --set txtOwnerId=HOSTED_ZONE_IDENTIFIER \
  stable/external-dns
Version of Helm and Kubernetes:
Helm v3.1.2 and Kubernetes version 1.15

Which chart:
stable/external-dns https://github.com/helm/charts/tree/master/stable/external-dns#external-dns

What happened:
Following the tutorial and setting the same HOSTED_ZONE_IDENTIFIER in multiple deployments of external-dns causes an ownership conflict. One instance of external-dns detects a record that it did not create (it was created by another external-dns instance), so it deletes the record. The other instance then detects the record is missing and creates it again, and the loop repeats.
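For context on why the instances collide: external-dns tracks ownership with a companion TXT record whose value embeds the txtOwnerId. A rough illustration of what such a record looks like (the hostname, resource, and owner value below are made up for this sketch, not taken from this issue):

# Illustrative only: the ownership TXT record external-dns keeps alongside the DNS record
$ dig +short TXT nginx.whatever.company.com
"heritage=external-dns,external-dns/owner=HOSTED_ZONE_IDENTIFIER,external-dns/resource=ingress/default/nginx"
# When both clusters use the same owner value, each instance treats the record as its
# own to reconcile, which is what produces the delete/create loop described above.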

What you expected to happen:
The instructions/tutorial should state more clearly, as other places within the Helm chart docs already do, that txtOwnerId should not match across different deployments of external-dns if those deployments are meant to have ownership of records in the same zone. The wording should be changed to indicate using a value for txtOwnerId that is specific to the cluster the user is deploying external-dns into.

We worked around this by setting txtOwnerId: clusterA for the first cluster, txtOwnerId: clusterB for the second cluster, and so on. Both external-dns deployments use the same domain in the domainFilters parameter; for example, both are set to domainFilters: whatever.company.com. A sketch of the install commands follows.
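Concretely, the workaround looks roughly like this (release names and owner IDs are illustrative; only txtOwnerId differs between the two installs):

# Cluster A (run against cluster A's kube context; values are illustrative)
$ helm install my-release \
  --set provider=aws \
  --set aws.zoneType=public \
  --set domainFilters[0]=whatever.company.com \
  --set txtOwnerId=clusterA \
  stable/external-dns

# Cluster B: identical, except the owner ID is unique to this cluster
$ helm install my-release \
  --set provider=aws \
  --set aws.zoneType=public \
  --set domainFilters[0]=whatever.company.com \
  --set txtOwnerId=clusterB \
  stable/external-dns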

How to reproduce it (as minimally and precisely as possible):
Deploy external-dns to multiple clusters using the same HOSTED_ZONE_IDENTIFIER as txtOwnerId and the same domainFilters value.
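For example (a sketch with illustrative values; run once per cluster against each cluster's kube context):

$ helm install my-release \
  --set provider=aws \
  --set aws.zoneType=public \
  --set domainFilters[0]=whatever.company.com \
  --set txtOwnerId=HOSTED_ZONE_IDENTIFIER \
  stable/external-dns
# Create an Ingress with a hostname in that zone in one cluster, then watch the two
# external-dns instances alternately delete and recreate its records.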

@carrodher
Collaborator

Hi,

Given the stable deprecation timeline, this Bitnami-maintained Helm chart is now located at bitnami/charts. Please visit the bitnami/charts GitHub repository to create issues or PRs.

In this issue we tried to explain more carefully the reasons and motivations behind this transition; please don't hesitate to add a comment in this issue if you have any questions related to the migration itself.

@stale

stale bot commented May 30, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale label May 30, 2020
@stale

stale bot commented Jun 13, 2020

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed Jun 13, 2020