
Prevent flapping of external DNS configuration #1767

Merged: 2 commits into k8gb-io:master on Nov 6, 2024

Conversation

@abaguas (Collaborator) commented on Oct 30, 2024

A flapping DNSEndpoint affects external DNS configuration, which introduces unwanted behavior during e2e tests.

How the controller uses externalDNS to configure zone delegation

K8GB uses a DNSEndpoint to configure zone delegation on the upstream DNS servers. This DNSEndpoint is picked up by ExternalDNS and looks as follows:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  annotations:
    k8gb.absa.oss/dnstype: extdns
  creationTimestamp: "2024-10-27T11:03:38Z"
  generation: 1
  name: k8gb-ns-extdns
  namespace: k8gb
  resourceVersion: "1608"
  uid: 2ff4476f-0efd-4de7-96e7-605ca2a8fc78
spec:
  endpoints:
  - dnsName: cloud.example.com
    recordTTL: 5
    recordType: NS
    targets:
    - gslb-ns-eu-cloud.example.com
    - gslb-ns-us-cloud.example.com
  - dnsName: gslb-ns-eu-cloud.example.com
    recordTTL: 5
    recordType: A
    targets:
    - 172.19.0.6
    - 172.19.0.7

This resource is independent of the GSLB resources, but it is still updated on every reconciliation loop, since that is the only chance the controller has to update resources. In the end-to-end tests we do not know the IP address on which CoreDNS is exposed, therefore we abuse the GSLB resource to fetch the IP addresses of the nodes in the cluster (see the update in controllers/providers/dns/external.go). However, this has significant negative consequences if the values differ between GSLB resources, which happens quite often.
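
For illustration, a minimal sketch of how such a zone-delegation DNSEndpoint can be rebuilt on each reconciliation, assuming the external-dns endpoint types; the function name and its parameters are hypothetical, not the exact ones from controllers/providers/dns/external.go:

package dns

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	externaldns "sigs.k8s.io/external-dns/endpoint"
)

// buildExtDNSEndpoint is a hypothetical helper: it rebuilds the zone-delegation
// DNSEndpoint from the NS names of all clusters and the IP addresses resolved
// for the local nameserver (in the e2e tests, the node IPs taken from the Gslb).
func buildExtDNSEndpoint(dnsZone, localNSName string, nsNames, localIPs []string, ttl int64) *externaldns.DNSEndpoint {
	return &externaldns.DNSEndpoint{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "k8gb-ns-extdns",
			Namespace:   "k8gb",
			Annotations: map[string]string{"k8gb.absa.oss/dnstype": "extdns"},
		},
		Spec: externaldns.DNSEndpointSpec{
			Endpoints: []*externaldns.Endpoint{
				{
					// NS record delegating the zone to the per-cluster nameservers
					DNSName:    dnsZone, // e.g. cloud.example.com
					RecordType: "NS",
					RecordTTL:  externaldns.TTL(ttl),
					Targets:    nsNames,
				},
				{
					// glue A record: where the local nameserver can be reached
					DNSName:    localNSName, // e.g. gslb-ns-eu-cloud.example.com
					RecordType: "A",
					RecordTTL:  externaldns.TTL(ttl),
					Targets:    localIPs,
				},
			},
		},
	}
}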

Flapping affects e2e tests

While trying out chainsaw I tried to increase the parallelism of the tests, since I would like to have all e2e tests running simultaneously. This would prevent the testing time from growing linearly with the number of strategies or ingress integrations.

Frequent DNSEndpoint updates

Unfortunately, having all GSLB resources trying to modify the DNSEndpoint with different values resulted in flaky tests. This DNSEndpoint is important for the tests since it contains the records necessary for cross-cluster communication; if it is not available, K8GB instances on different clusters cannot discover their peers, which leads to the following error:

2024-10-27T10:27:36Z WRN github.com/k8gb-io/k8gb/controllers/providers/assistant/gslb.go:255 > can't resolve FQDN using nameservers error="exchange error: all dns servers were tried and none of them were able to resolve, err: dial udp: lookup gslb-ns-us-cloud.example.com on 10.43.0.10:53: no such host" fqdn=localtargets-roundrobin-istio.cloud.example.com. nameservers=[{"Host":"gslb-ns-us-cloud.example.com","Port":1053},{"Host":"gslb-ns-us-cloud.example.com","Port":1053},{"Host":"gslb-ns-us-cloud.example.com","Port":53}]

Example

  • A GSLB using a Kubernetes Ingress is created. The IP address has not yet been assigned by the cluster -> a DNSEndpoint is created with empty targets, so discovery of other clusters is not yet possible
  • The same GSLB now has an IP address assigned -> the DNSEndpoint is updated with the target, so discovery is now possible
  • A new GSLB using a Kubernetes Ingress is created. The IP address has not yet been assigned by the cluster -> the DNSEndpoint is updated with empty targets, so discovery of other clusters is no longer possible

If the timing is unfortunate enough, cluster discovery could be unavailable every time a particular GSLB resource is reconciled, resulting in the advertisement of incorrect targets.

Solution

This PR proposes to fix this issue by not updating the DNSEndpoint if the list of targets coming from the GSLB resource is empty. This should only be relevant for testing, since all production use cases should expose CoreDNS via a load-balancer service.
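
A minimal sketch of that guard, under the same assumption about the external-dns endpoint types; the function name and the injected save callback are illustrative, not the actual identifiers in controllers/providers/dns/external.go:

package dns

import (
	externaldns "sigs.k8s.io/external-dns/endpoint"
)

// saveExtDNSEndpointIfResolved skips the update when the Gslb resolved no
// addresses (e.g. an Ingress that has not been assigned an IP yet), so that a
// not-yet-ready Gslb cannot wipe the targets other Gslbs already published.
func saveExtDNSEndpointIfResolved(targets []string, desired *externaldns.DNSEndpoint,
	save func(*externaldns.DNSEndpoint) error) error {
	if len(targets) == 0 {
		// keep the existing DNSEndpoint untouched instead of writing empty targets
		return nil
	}
	return save(desired)
}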

DNSEndpoint deletion

Additionally, the deletion of a GSLB resource also led to the deletion of the ExternalDNS resource, even if there were additional GSLB resources still in use. This disrupted cross-cluster communication until the next GSLB resource was reconciled. This problem is fixed in the finalizer, by deleting the ExternalDNS resource only when the last GSLB resource is deleted.
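
A sketch of the finalizer check under stated assumptions: listing the Gslb resources through a controller-runtime client and the deleteExtDNSEndpoint helper are illustrative stand-ins, not the exact k8gb code:

package dns

import (
	"context"

	k8gbv1beta1 "github.com/k8gb-io/k8gb/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// finalizeExtDNSEndpoint removes the shared zone-delegation DNSEndpoint only
// when the Gslb being finalized is the last one left in the cluster.
func finalizeExtDNSEndpoint(ctx context.Context, c client.Client,
	deleteExtDNSEndpoint func(context.Context) error) error {
	gslbList := &k8gbv1beta1.GslbList{}
	if err := c.List(ctx, gslbList); err != nil {
		return err
	}
	// only remove the DNSEndpoint if there are no more GSLB resources;
	// the Gslb carrying the finalizer is still present in the listing
	if len(gslbList.Items) > 1 {
		return nil
	}
	return deleteExtDNSEndpoint(ctx)
}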

TTL flapping

Lastly, even though this didn't affect the e2e tests, I noticed that the TTL also flaps, since different GSLB resources may have different TTLs. To stabilize it we can add a new configuration option to set the TTL for the NS and glue records.
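
A possible shape for that option, sketched from the chart excerpt quoted in the review thread below; the nsRecordTTL values key and the NS_RECORD_TTL environment variable come from that excerpt, while the default of 30 seconds is only an assumption:

# values.yaml (sketch)
k8gb:
  # TTL in seconds applied to the NS and glue A records of the
  # zone-delegation DNSEndpoint
  nsRecordTTL: 30

# chart deployment template excerpt (simplified)
- name: NS_RECORD_TTL
  value: {{ quote .Values.k8gb.nsRecordTTL }}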


Signed-off-by: Andre Aguas <andre.aguas@protonmail.com>
value: {{ quote .Values.k8gb.reconcileRequeueSeconds}}
value: {{ quote .Values.k8gb.reconcileRequeueSeconds }}
- name: NS_RECORD_TTL
value: {{ quote .Values.k8gb.nsRecordTTL }}
abaguas (Collaborator, Author) replied:

@k0da, yes, the TTL value is an integer.

ytsarev previously approved these changes on Nov 6, 2024
return err
}

// only remove the DNSEndpoint if there are no more GSLB resourced

A reviewer (Member) suggested a change:
// only remove the DNSEndpoint if there are no more GSLB resourced
// only remove the DNSEndpoint if there are no more GSLB resources

ytsarev (Member) commented on Nov 6, 2024:

> This should only be relevant for testing, since all production use cases should expose CoreDNS via a load-balancer service.

Just in case: there can be production use cases in on-prem scenarios where CoreDNS is exposed directly on the host network.

abaguas (Collaborator, Author) commented on Nov 6, 2024:

> This should only be relevant for testing, since all production use cases should expose CoreDNS via a load-balancer service.

> Just in case: there can be production use cases in on-prem scenarios where CoreDNS is exposed directly on the host network.

The current implementation assumes that the GSLB and CoreDNS share the same exposed IPs. Is this true only if CoreDNS and the ingress controller are scheduled on the same node?

ytsarev (Member) commented on Nov 6, 2024:

> Is this true only if CoreDNS and the ingress controller are scheduled on the same node?

I think that's the case; at least, that's how it was originally tested if memory doesn't fail me.

Signed-off-by: Andre Aguas <andre.aguas@protonmail.com>
ytsarev (Member) left a review comment:

👍

@abaguas merged commit 31c0d09 into k8gb-io:master on Nov 6, 2024
14 checks passed