Consul DNS and Kubernetes NodeLocalDNS do not work together #21874

Open
JWebDev opened this issue Oct 25, 2024 · 0 comments
JWebDev commented Oct 25, 2024

Hello,

I have a problem configuring Consul DNS in Kubernetes.
I followed the documentation: https://developer.hashicorp.com/consul/docs/k8s/dns/enable#coredns-configuration
I added the snippet from the docs to the end of my CoreDNS ConfigMap and restarted all pods, including the injected ones. Nothing happens: Consul never shows up in /etc/resolv.conf.

I also run NodeLocalDNS, and I suspect the problem is there, since it handles DNS on the node where the pod runs.
When I add the consul block to the NodeLocalDNS ConfigMap, the pod does not start and I get the error shown in the logs below. What am I doing wrong? Here is what the config files look like.

Consul DNS Service IP: 10.233.53.111
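A direct query against the Consul DNS service confirms whether the service itself answers, independent of the CoreDNS/NodeLocalDNS wiring (the pod name below is just a placeholder):

kubectl exec -it <any-pod-with-dig> -- dig @10.233.53.111 consul.service.consul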

CoreDNS Corefile:

.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf {
      prefer_udp
      max_concurrent 1000
    }
    cache 30

    loop
    reload
    loadbalance
}
consul {
    errors
    cache 30
    forward . 10.233.53.111
}

NodeLocalDNS Corefile:

cluster.local:53 {
    errors
    cache {
        success 9984 30
        denial 9984 5
    }
    reload
    loop
    bind 169.254.25.10
    forward . 10.233.0.3 {
        force_tcp
    }
    prometheus :9253
    health 169.254.25.10:9254
}
consul {
    errors
    cache 30
    forward . 10.233.53.111
}
in-addr.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.25.10
    forward . 10.233.0.3 {
        force_tcp
    }
    prometheus :9253
}
ip6.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.25.10
    forward . 10.233.0.3 {
        force_tcp
    }
    prometheus :9253
}
.:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.25.10
    forward . /etc/resolv.conf
    prometheus :9253
}
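One thing I notice (just a guess): every other server block in the NodeLocalDNS Corefile has bind 169.254.25.10, but the consul block does not, so it presumably tries to listen on 0.0.0.0:53 and collides with the ports NodeLocalDNS already holds. If that is the cause, the block would need an explicit bind, something like:

consul:53 {
    errors
    cache 30
    bind 169.254.25.10
    forward . 10.233.53.111
}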

System info

Kubernetes: 1.29.4
Helm chart: 1.6.0 (appVersion 1.20)
Image: hashicorp/consul-k8s-control-plane:1.6.0

/ $ consul info
agent:
        check_monitors = 0
        check_ttls = 0
        checks = 0
        services = 0
build:
        prerelease =
        revision = cddc6181
        version = 1.20.0
        version_metadata =
consul:
        acl = disabled
        bootstrap = true
        known_datacenters = 1
        leader = true
        leader_addr = MY_IP:8300
        server = true
raft:
        applied_index = 7120
        commit_index = 7120
        fsm_pending = 0
        last_contact = 0
        last_log_index = 7120
        last_log_term = 4
        last_snapshot_index = 0
        last_snapshot_term = 0
        latest_configuration = [{Suffrage:Voter ID:795bbdb4-cd1f-d2f9-c8eb-34f380a9ffde Address:MY_IP:8300}]
        latest_configuration_index = 0
        num_peers = 0
        protocol_version = 3
        protocol_version_max = 3
        protocol_version_min = 0
        snapshot_version_max = 1
        snapshot_version_min = 0
        state = Leader
        term = 4
runtime:
        arch = amd64
        cpu_count = 12
        goroutines = 330
        max_procs = 12
        os = linux
        version = go1.22.7
serf_lan:
        coordinate_resets = 0
        encrypted = true
        event_queue = 1
        event_time = 2
        failed = 0
        health_score = 0
        intent_queue = 1
        left = 0
        member_time = 6
        members = 1
        query_queue = 0
        query_time = 1
serf_wan:
        coordinate_resets = 0
        encrypted = true
        event_queue = 0
        event_time = 1
        failed = 0
        health_score = 0
        intent_queue = 0
        left = 0
        member_time = 3
        members = 1
        query_queue = 0
        query_time = 1

NodeLocalDNS pod logs after restart

2024/10/25 02:56:02 [INFO] Starting node-cache image: 1.22.28
2024/10/25 02:56:02 [INFO] Using Corefile /etc/coredns/Corefile
2024/10/25 02:56:02 [INFO] Using Pidfile
2024/10/25 02:56:02 [ERROR] Failed to read node-cache coreFile /etc/coredns/Corefile.base - open /etc/coredns/Corefile.base: no such file or directory
2024/10/25 02:56:02 [INFO] Skipping kube-dns configmap sync as no directory was specified
Listen: listen tcp :53: bind: address already in use
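(Checking what already holds port 53 on the node, e.g. with ss -ltnp 'sport = :53', should show the listeners NodeLocalDNS is colliding with.)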

Thanks for the help
