
Unable to install pihole because of resolve errors #88

Open
Cryotize opened this issue Nov 17, 2020 · 34 comments

Comments

@Cryotize

Problem

I followed Jeff Geerling's guide to install pihole, but I can't figure out what the problem is. When trying to install the helm chart, one container fails because it can't pull the image.

Events / Logs

Name: pihole-9cf8cd796-6hg94
Namespace: pihole
Priority: 0
Node: slave1/192.168.1.201
Start Time: Tue, 17 Nov 2020 21:14:07 +0000
Labels: app=pihole
pod-template-hash=9cf8cd796
release=pihole
Annotations: checksum.config.adlists: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.blacklist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.dnsmasqConfig: b8db33b1edc0c6d931e44ddb1f551bef2185bdfbad893d40b1c946479abdbfc
checksum.config.regex: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.whitelist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
Status: Pending
IP: 10.42.1.102
IPs:
IP: 10.42.1.102
Controlled By: ReplicaSet/pihole-9cf8cd796
Containers:
pihole:
Container ID:
Image: pihole/pihole:v5.1.2
Image ID:
Ports: 80/TCP, 53/TCP, 53/UDP, 443/TCP, 67/UDP
Host Ports: 0/TCP, 0/TCP, 0/UDP, 0/TCP, 0/UDP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Liveness: http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=10
Readiness: http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
WEB_PORT: 80
VIRTUAL_HOST: pi.hole
WEBPASSWORD: <set to the key 'password' in secret 'pihole-password'> Optional: false
DNS1: 8.8.8.8
DNS2: 8.8.4.4
Mounts:
/etc/addn-hosts from custom-dnsmasq (rw,path="addn-hosts")
/etc/dnsmasq.d/02-custom.conf from custom-dnsmasq (rw,path="02-custom.conf")
/etc/pihole from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mfw4h (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pihole
ReadOnly: false
custom-dnsmasq:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: pihole-custom-dnsmasq
Optional: false
default-token-mfw4h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mfw4h
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s

Events:
Type Reason Age From Message

Normal Scheduled default-scheduler Successfully assigned pihole/pihole-9cf8cd796-6hg94 to slave1
Normal Pulling 54s (x3 over 103s) kubelet, slave1 Pulling image "pihole/pihole:v5.1.2"
Warning Failed 48s (x3 over 98s) kubelet, slave1 Failed to pull image "pihole/pihole:v5.1.2": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/pihole/pihole:v5.1.2": failed to resolve reference "docker.io/pihole/pihole:v5.1.2": failed to do request: Head https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2: dial tcp: lookup registry-1.docker.io: Try again
Warning Failed 48s (x3 over 98s) kubelet, slave1 Error: ErrImagePull
Normal BackOff 9s (x5 over 97s) kubelet, slave1 Back-off pulling image "pihole/pihole:v5.1.2"
Warning Failed 9s (x5 over 97s) kubelet, slave1 Error: ImagePullBackOff

nslookup

nslookup https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2
Server: 1.1.1.1
Address: 1.1.1.1#53

** server can't find https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2: NXDOMAIN

curl

curl -I https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2
HTTP/1.1 401 Unauthorized
Content-Type: application/json
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="repository:pihole/pihole:pull"
Date: Tue, 17 Nov 2020 21:28:59 GMT
Content-Length: 156
Strict-Transport-Security: max-age=31536000

I hope those outputs help. I have no clue where the problem is.
Any help is appreciated 👍

@brnl
Contributor

brnl commented Dec 2, 2020

This should be fixed by commit 34656cb from two days ago (chart version 1.7.21). The configured Pi-hole version (5.1.2) was removed from Docker Hub.

What you could have done yourself is to check hub.docker.com, search for pihole, and then look at the tags to find the latest tag (version).
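The tags can also be listed from a shell; this is a hedged sketch using Docker Hub's public v2 repository API (the endpoint and field names are my assumption, not something from this thread), and it assumes curl and jq are installed:

```shell
# List the ten most recent pihole/pihole tags on Docker Hub
curl -s 'https://hub.docker.com/v2/repositories/pihole/pihole/tags/?page_size=10' \
  | jq -r '.results[].name'
```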

Now you have two methods to tell Helm to use the correct version. Let's take "v5.2", as it is the latest tag. Make sure to use the exact tag, so don't forget the 'v' in 'v5.2'.

Via values.yaml

If you choose to use a values.yaml file, add the following:

image:
  tag: v5.2

Via the command line

You can also overwrite the image tag via the command line. When deploying the chart, add --set image.tag=v5.2 to the parameters. This will overwrite anything that's in the values.yaml file.
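Putting both methods together, a sketch of the full commands (the release name, namespace, and chart name are taken from later comments in this thread; `helm upgrade --install` is used so the same command works for a fresh install or an update):

```shell
# Method 1: pin the tag in a values file
cat > pihole.yaml <<'EOF'
image:
  tag: v5.2
EOF
helm upgrade --install pihole mojo2600/pihole \
  --namespace pihole --values pihole.yaml

# Method 2: command-line override (takes precedence over values.yaml)
helm upgrade --install pihole mojo2600/pihole \
  --namespace pihole --set image.tag=v5.2
```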

Hope this helps!

@MoJo2600
Owner

MoJo2600 commented Dec 3, 2020

Thanks for the help @brnl - @Cryotize let us know if it is working.

@Cryotize
Author

Cryotize commented Dec 5, 2020

Thanks for the help @brnl. Sadly it didn't work out, I am still getting the same error. I have updated the helm chart and tried multiple versions like latest and v5.2. I tried it with the values.yaml file and with the override, too.

See this output:

root@master:~# kubectl describe pod pihole-6887d968f7-k54xb -n pihole
Name: pihole-6887d968f7-k54xb
Namespace: pihole
Priority: 0
Node: master/192.168.1.109
Start Time: Sat, 05 Dec 2020 18:28:15 +0000
Labels: app=pihole
pod-template-hash=6887d968f7
release=pihole
Annotations: checksum.config.adlists: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.blacklist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.dnsmasqConfig: 2a6bc223337761be66c6861fa01e4d3437fdf2b67e5f7e5954a7e660dd5ac84
checksum.config.regex: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.whitelist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
Status: Pending
IP: 10.42.0.33
IPs:
IP: 10.42.0.33
Controlled By: ReplicaSet/pihole-6887d968f7
Containers:
pihole:
Container ID:
Image: pihole/pihole:v5.2
Image ID:
Ports: 80/TCP, 53/TCP, 53/UDP, 443/TCP, 67/UDP
Host Ports: 0/TCP, 0/TCP, 0/UDP, 0/TCP, 0/UDP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Liveness: http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=10
Readiness: http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
WEB_PORT: 80
VIRTUAL_HOST: pi.hole
WEBPASSWORD: <set to the key 'password' in secret 'pihole-password'> Optional: false
DNS1: 8.8.8.8
DNS2: 8.8.4.4
Mounts:
/etc/addn-hosts from custom-dnsmasq (rw,path="addn-hosts")
/etc/dnsmasq.d/02-custom.conf from custom-dnsmasq (rw,path="02-custom.conf")
/etc/pihole from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-t55gl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
custom-dnsmasq:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: pihole-custom-dnsmasq
Optional: false
default-token-t55gl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-t55gl
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled default-scheduler Successfully assigned pihole/pihole-6887d968f7-k54xb to master
Warning Failed 24s (x2 over 53s) kubelet, master Error: ImagePullBackOff
Normal BackOff 24s (x2 over 53s) kubelet, master Back-off pulling image "pihole/pihole:v5.2"
Normal Pulling 10s (x3 over 60s) kubelet, master Pulling image "pihole/pihole:v5.2"
Warning Failed 5s (x3 over 54s) kubelet, master Error: ErrImagePull
Warning Failed 5s (x3 over 54s) kubelet, master Failed to pull image "pihole/pihole:v5.2": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/pihole/pihole:v5.2": failed to resolve reference "docker.io/pihole/pihole:v5.2": failed to do request: Head https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.2: dial tcp: lookup registry-1.docker.io: Try again

@brnl
Contributor

brnl commented Dec 5, 2020

@Cryotize I think your DNS is not working, due to the last line in your log:

Head https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.2: dial tcp: lookup registry-1.docker.io: Try again
You might need to troubleshoot that first.

Can you also share the values you used? See helm get values.

@Cryotize
Author

Cryotize commented Dec 5, 2020

I tried the command, but I have no idea what I'm doing wrong. I tried both release names, helm and pihole, and neither worked. Can you give me the correct syntax? I'm still new to helm and k3s.

See this output:

root@master:~# helm install --version '1.7.21' --namespace pihole --values pihole.yaml pihole mojo2600/pihole
NAME: pihole
LAST DEPLOYED: Sat Dec 5 19:07:56 2020
NAMESPACE: pihole
STATUS: deployed
REVISION: 1
root@master:~# helm -n pihole get all 1.7.21
Error: release: not found
root@master:~# helm -n pihole get all v5.2
Error: release: not found
root@master:~#
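For reference, `helm get` expects the release name (the NAME printed by `helm install`), not a chart or app version; a sketch of invocations that should work here:

```shell
# List releases in the namespace to find the release name
helm list -n pihole

# "pihole" is the release name, as shown under NAME in the install output
helm -n pihole get all pihole
helm -n pihole get values pihole   # only the values you overrode
```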

This is my pihole.yaml:

persistentVolumeClaim:
  enabled: false
ingress:
  enabled: true
serviceTCP:
  loadBalancerIP: '192.168.1.111'
  type: LoadBalancer
serviceUDP:
  loadBalancerIP: '192.168.1.111'
  type: LoadBalancer
image:
  tag: v5.2
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

@brnl
Contributor

brnl commented Dec 6, 2020

@Cryotize Please post your code blocks between ``` and ``` so the indentation is preserved. You can even specify the language of the code block, like ```yaml for YAML highlighting. See the Markdown examples and select 'code'. :-) So your pihole.yaml looks like this:

persistentVolumeClaim:
  enabled: false
ingress:
  enabled: true
serviceTCP:
  loadBalancerIP: '192.168.1.111'
  type: LoadBalancer
serviceUDP:
  loadBalancerIP: '192.168.1.111'
  type: LoadBalancer
image:
  tag: v5.2
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

But... on to your problem: can you ping registry-1.docker.io from your kubernetes nodes? It looks like that's what is failing. SSH into your kubernetes node and run ping registry-1.docker.io. It will not reply (ping is disabled on registry-1.docker.io), but it should resolve to an IP address:

ping registry-1.docker.io
PING registry-1.docker.io (52.54.232.21) 56(84) bytes of data.
^C
--- registry-1.docker.io ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2051ms

@Cryotize
Author

Cryotize commented Dec 9, 2020

Thanks for your help!

Seems like the DNS is not working correctly, see this output:

ubuntu@master:~$ ping registry-1.docker.io
ping: registry-1.docker.io: Temporary failure in name resolution
ubuntu@master:~$ ping google.com
ping: google.com: Temporary failure in name resolution
ubuntu@master:~$ sudo su
root@master:/home/ubuntu# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
slave1   Ready    <none>   12d   v1.17.5+k3s1
slave2   Ready    <none>   12d   v1.17.5+k3s1
slave3   Ready    <none>   12d   v1.17.5+k3s1
master   Ready    master   12d   v1.17.5+k3s1
root@master:/home/ubuntu# kubectl get pods -n pihole
NAME                      READY   STATUS             RESTARTS   AGE
svclb-pihole-tcp-95xbj    0/3     Pending            0          4d2h
svclb-pihole-tcp-bv7jw    0/3     Pending            0          4d2h
svclb-pihole-tcp-fmqbj    0/3     Pending            0          4d2h
svclb-pihole-tcp-shk2n    0/3     Pending            0          4d2h
svclb-pihole-udp-nh8sl    2/2     Running            2          4d2h
svclb-pihole-udp-9phnq    2/2     Running            2          4d2h
svclb-pihole-udp-6pg9r    2/2     Running            2          4d2h
svclb-pihole-udp-2slkj    2/2     Running            2          4d2h
pihole-6887d968f7-gdwxk   0/1     ImagePullBackOff   0          4d2h
root@master:/home/ubuntu# kubectl delete namespace pihole
namespace "pihole" deleted
root@master:/home/ubuntu# ping registry-1.docker.io
PING registry-1.docker.io (52.1.121.53) 56(84) bytes of data.
^C
--- registry-1.docker.io ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9204ms
root@master:/home/ubuntu#

The weird part is that this only happens when Pi-hole is installed; it looks like it breaks itself. Any idea why? 🤔

@MoJo2600
Owner

MoJo2600 commented Dec 10, 2020

Are you using pihole as the DNS server on the kubernetes node you're trying to run pihole on? It looks like it, because you are running all the commands on the same machine. This will not work, because you are creating a circular dependency: the kubernetes host needs a working DNS to retrieve images from Docker, but it can't resolve anything because its DNS is not working. Hence ImagePullBackOff. My kubernetes nodes use the default DNS provided by my ISP, and all the clients on the network use pihole as DNS.

To test if DNS is working you can always use the command dig and specify the DNS server you want to ask for an IP, e.g.:

dig @8.8.8.8 google.com
dig @192.168.178.252 google.com
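To see which resolver the node itself (and therefore the kubelet's image pulls) will use, checking the host's resolver config is a reasonable first step; a sketch, with 8.8.8.8 as a known-good upstream:

```shell
# On the kubernetes node: which nameserver does the host currently use?
cat /etc/resolv.conf

# Does a known-good upstream resolve the registry, bypassing the local resolver?
dig @8.8.8.8 registry-1.docker.io +short
```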

@Cryotize
Author

Again, thanks a lot for the help. I disabled systemd-resolved.service, edited /etc/resolv.conf and added 8.8.8.8 as the primary DNS server. Installing Pi-hole works properly now and pinging google.com is successful.
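For anyone following along, a hedged sketch of that procedure on Ubuntu (paths per stock systemd-resolved; adapt to your distro):

```shell
# Stop systemd-resolved so it no longer manages /etc/resolv.conf
sudo systemctl disable --now systemd-resolved

# /etc/resolv.conf is usually a symlink into /run/systemd/resolve; replace it
sudo rm /etc/resolv.conf
echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf
```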

The bad news is, when I try to access the web panel, I get a 404 page not found error.
I couldn't figure out why this happens. I have already looked it up and tried it with and without /admin at the end of the IP address.

Here's the output of sudo kubectl get pods -n pihole

NAME                      READY   STATUS    RESTARTS   AGE
svclb-pihole-tcp-8w2cc    0/3     Pending   0          8m19s
svclb-pihole-tcp-2zbdl    0/3     Pending   0          8m19s
svclb-pihole-tcp-jjqj4    0/3     Pending   0          8m19s
svclb-pihole-tcp-fbnx6    0/3     Pending   0          8m19s
svclb-pihole-udp-2mgdx    2/2     Running   0          8m19s
svclb-pihole-udp-mlg4c    2/2     Running   0          8m19s
svclb-pihole-udp-9j446    2/2     Running   0          8m19s
svclb-pihole-udp-l65rq    2/2     Running   0          8m19s
pihole-7dd45774df-4nhxb   1/1     Running   0          8m19s

The TCP pods are in the status Pending, but this is normal according to the tutorial.
Any more ideas?

@brnl
Contributor

brnl commented Dec 13, 2020

The bad news is, when I try to access the web panel, I get a 404 page not found error.

Are you sure you are querying the pihole webserver?
Can you show us the output of kubectl get service -n pihole?

@bynicolas

I'm having similar issues. I installed my cluster using k3s, and k3s ships CoreDNS to manage the cluster's name resolution; I'm assuming that when you install Pi-hole you get some kind of circular reference scenario and DNS issues (just guessing here).

Installing Pi-hole and exposing the node's public IP worked for me. But I wasn't satisfied with that setup.

My particular issue, since I managed to make Pi-hole work, was that all requests come from the internal node IP, and I want requests to come from the actual device making the DNS query. So I'm trying to set up load balancing so I can get better request logs using Traefik. That's where all the fun begins!

So far I'm bumping in all kinds of issues and since I'm new to Kubernetes, Traefik and Pi-hole, the learning curve is pretty much vertical!

I thought I'd share this bit, since maybe the OP is unaware of another DNS service running in his cluster causing the issues.

@Cryotize
Author

Cryotize commented Dec 13, 2020

The bad news is, when I try to access the web panel, I get a 404 page not found error.

Are you sure you are querying the pihole webserver?
Can you show us the output of kubectl get service -n pihole?

Sure, here's the output:

ubuntu@master:~$ sudo kubectl get service -n pihole
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                                   AGE
pihole-tcp   LoadBalancer   10.43.4.128   <pending>       80:31752/TCP,443:31520/TCP,53:30940/TCP   2d17h
pihole-udp   LoadBalancer   10.43.94.0    192.168.1.110   53:30959/UDP,67:32041/UDP                 2d17h
ubuntu@master:~$

The Webserver should be available on 192.168.1.110/admin/, but no success :-/

Also, how can I disable CoreDNS with the k3s command? I can't figure it out.

@bynicolas

Well, no. From your output, the HTTP service that hosts pihole's web server is not exposed; the external IP you have is for the DNS service on the UDP protocol.

You would need an Ingress (IngressRoute) config telling Traefik (assuming that is what you have) to route to the pihole-tcp service.

As far as disabling CoreDNS goes, I think you could start each server with the --disable coredns flag; you could also install k3s with the --no-deploy coredns flag. But I wouldn't recommend that, since you need some kind of DNS service for your cluster to work properly. The solution to this issue, IMO, is the right configuration, which I'm still trying to figure out too!
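A minimal sketch of such a route for the web UI, assuming the Traefik CRDs that ship with k3s and the pihole-tcp service name from the output above (the host rule and port are assumptions to adapt):

```shell
# Route HTTP traffic for pi.hole through Traefik's "web" entrypoint
# to the pihole-tcp service in the pihole namespace
cat <<'EOF' | kubectl apply -f -
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: pihole-web
  namespace: pihole
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`pi.hole`)
      kind: Rule
      services:
        - name: pihole-tcp
          port: 80
EOF
```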

@brnl
Contributor

brnl commented Dec 13, 2020

The Webserver should be available on 192.168.1.110/admin/, but no success :-/

No, the DNS server will be available at 192.168.1.110, the webserver will be available at http://<pending>/admin, but that's not gonna work ;-) You can test the DNS server with dig, by the way:
dig github.com @192.168.1.110

@ChristophvdF

Hi all,
I am very new to the whole kubernetes and helm stuff. I am having a similar problem here, but for me the web interface won't start. My cluster consists of 1 Raspberry Pi 4 8GB (master) and 3 Raspberry Pi 4 1GB (workers).

Output

xxx@ubuntuVM:~$ kubectl get service -n pihole
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
pihole-web       LoadBalancer   10.43.22.182    <pending>         80:31541/TCP,443:31537/TCP   13m
pihole-dns-tcp   LoadBalancer   10.43.254.154   192.168.100.242   53:30453/TCP                 13m
pihole-dns-udp   LoadBalancer   10.43.23.35     192.168.100.243   53:31611/UDP,67:32383/UDP    13m

My yaml file

---
persistentVolumeClaim:
  enabled: true
ingress:
  enabled: true
serviceWeb:
  loadBalancerIP: 192.168.100.240  # <-- this is my master node
  type: LoadBalancer
serviceDns:
  loadBalancerIP: 192.168.100.240
  type: LoadBalancer

Install commands
helm install --values pihole.yaml --namespace pihole pihole mojo2600/pihole

I hope you guys can help me with this.
Thanks!

@MoJo2600
Owner

MoJo2600 commented Dec 18, 2020

Can you do a kubectl describe svc pihole-web, please?

I think there is an issue with your setup. The EXTERNAL-IP should be the same for the services, otherwise it will not work. For DNS to work, at least ports 53/TCP, 53/UDP and 67/UDP have to share the same IP address; the web port is only there for pihole management.

@ChristophvdF

ChristophvdF commented Dec 18, 2020

Hello, thanks for your response. I am more than sure that I messed up at some point.

Pihole-Web description

Name:                     pihole-web
Namespace:                pihole
Labels:                   app=pihole
                          app.kubernetes.io/managed-by=Helm
                          chart=pihole-1.8.23
                          heritage=Helm
                          release=pihole
Annotations:              meta.helm.sh/release-name: pihole
                          meta.helm.sh/release-namespace: pihole
Selector:                 app=pihole,release=pihole
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.43.105.246
IPs:                      <none>
IP:                       192.168.100.240
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31593/TCP
Endpoints:                10.42.0.60:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30710/TCP
Endpoints:                10.42.0.60:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31768
Events:                   <none>

Get service output

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
pihole-web       LoadBalancer   10.43.105.246   <pending>         80:31593/TCP,443:30710/TCP   95s
pihole-dns-tcp   LoadBalancer   10.43.146.43    192.168.100.240   53:30039/TCP                 96s
pihole-dns-udp   LoadBalancer   10.43.124.14    192.168.100.243   53:31758/UDP,67:31693/UDP    95s

What I forgot to mention: I am also running a Prometheus installation on the cluster. I followed the tutorial of Jeff Geerling.
As I wanted to install Pi-hole I read the description and saw that I need to update my cluster from 1.17 to 1.19, which I did. At least I think I did...

Version output

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5+k3s1", GitCommit:"b11612e2744f39f01bfd208ff97315930c483667", GitTreeState:"clean", BuildDate:"2020-12-11T17:29:41Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/arm"}

Thank you for your support!

@AndyG-0
Contributor

AndyG-0 commented Dec 18, 2020

It looks like you are running k3s. k3s uses Traefik by default, which takes up ports 80 and 443 on your nodes unless you explicitly install k3s without it. If you didn't exclude Traefik, I would use a load balancer for DNS and an ingress for web. This is how I have mine set up.

@ChristophvdF

Yes, you are right, I am using k3s. I already tried setting up MetalLB as the load balancer, sadly without any success. Again, I think I made some configuration errors there... Do you happen to have a tutorial lying around on how to configure Traefik and MetalLB side by side that you could forward me?

Thanks in advance!

@AndyG-0
Contributor

AndyG-0 commented Dec 18, 2020

I don't have a tutorial, but you should be able to just set the ingress to true in the values.yaml and set up a host. I'm using nip.io for wildcard DNS on my local network: https://nip.io/ Then the DNS can be set to LoadBalancer and you can set which host IP you'd like:

serviceDns:
  type: LoadBalancer
  externalTrafficPolicy: Local
  loadBalancerIP: "192.168.1.100"
    # a fixed LoadBalancer IP
  annotations: {}
    # metallb.universe.tf/address-pool: network-services
    # metallb.universe.tf/allow-shared-ip: pihole-svc

Ingress section:

ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    # virtualHost (default value is pi.hole) will be appended to the hosts
    - pihole-192.168.1.20.nip.io
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #     #- virtualHost (default value is pi.hole) will be appended to the hosts
  #      - chart-example.local

Then point your clients or router at the IP address of the load balancer for DNS. The web UI should come up on the ingress host.

@ChristophvdF

ChristophvdF commented Dec 19, 2020

Hello @AndyG-0, I tried your suggestions today, but sadly with the same result. Again, I am more than sure that I messed up the config (still learning).

I installed MetalLB (using this tutorial) with this config:
metallb

---
configInline:
  address-pools:
    - name: metallb-cluster-pool
      protocol: layer2
      addresses:
        - 192.168.1.0/25

On my router I created a new VLAN for the IP range 192.168.1.0/24 so that MetalLB can use the rest of the space.

Then I installed pihole using the following settings:
pihole

---
persistentVolumeClaim:
  enabled: true
ingress:
  enabled: true
  annotations:  
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"  
  path: /
  hosts:
    - pihole-192.168.1.2.nip.io
  tls: 
    - secretName: chart-example.tls
      hosts:
        - chart-example.local

serviceWeb:
  loadBalancerIP: 192.168.100.240
  type: LoadBalancer
serviceDns:
  loadBalancerIP: 192.168.1.0
  type: LoadBalancer
  externalTrafficPolicy: Local
  annotations:
    metallb.universe.tf/address-pool: network-services
    metallb.universe.tf/allow-shared-ip: pihole-svc

DNS1: "1.1.1.1"
DNS2: "8.8.8.8"

Thank you for your patience!

By the way, after watching kubectl get service -n pihole for a bit, I noticed that the external IPs of the TCP and UDP services are switching between my nodes. I don't think they should change, since clients would not be able to send requests correctly.

@MoJo2600
Owner

If you're using k3s I'd recommend getting familiar with Traefik and using it as the ingress. MetalLB is working for me (not using k3s), but it seems like Traefik is better integrated into k3s.

But what I see in your config:

  address-pools:
    - name: metallb-cluster-pool

and

  annotations:
    metallb.universe.tf/address-pool: network-services

have to match.

Also, loadBalancerIP: 192.168.1.0 will not work; this is not a valid host IP address (it is the network address of 192.168.1.0/24).

My metallb config:

apiVersion: v1
data:
  config: |
    address-pools:
    - addresses:
      - 192.168.178.60-192.168.178.80
      name: default
      protocol: layer2
    - addresses:
      - 192.168.178.245-192.168.178.254
      name: network-services
      protocol: layer2
kind: ConfigMap

My values.yml (v 0.17.x)

serviceTCP:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer

@tomdoherty

#101 fixes it for me. When doing an upgrade we need at least one pod running or DNS breaks.

@noenthu

noenthu commented Mar 4, 2021

Encountered the same issue with k3s and this helm package.
If resolv.conf isn't changed to point to an upstream dns, the pihole pod keeps returning ErrImagePull.
Tried this by disabling traefik.

@MovieMaker93

Same problem here, I'm not able to pull the image. Any news?

@brnl
Contributor

brnl commented Mar 29, 2021

@MovieMaker93

Same problem here, I'm not able to pull the image. Any news?

Check the latest version on the docker hub page and adjust the image tag in your values.yml accordingly.

image:
  tag: v5.7

Otherwise, specify your exact problem, because a lot was discussed in this issue. 🤔

@MovieMaker93

@MovieMaker93

Same problem here, I'm not able to pull the image. Any news?

Check the latest version on the docker hub page and adjust the image tag in your values.yml accordingly.

image:
  tag: v5.7

Otherwise, specify your exact problem, because a lot was discussed in this issue. 🤔

I'm using k3s and here is my pihole.yaml file:

persistentVolumeClaim:
  enabled: true
ingress:
  enabled: true
serviceTCP:
  loadBalancerIP: '192.168.178.37'
  type: LoadBalancer
serviceUDP:
  loadBalancerIP: '192.168.178.37'
  type: LoadBalancer
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
adminPassword: admin

and I'm running the helm chart with this command:

helm install --set image.tag=v5.7 --version '1.7.6' --namespace pihole --values pihole.yaml pihole mojo2600/pihole

but the pods keep crashing.
How do I solve this?
Thanks

@MoJo2600
Owner

@MovieMaker93 sorry for the late reply, is it working for you with the latest version?

@appleimperio

appleimperio commented Nov 2, 2021

I am having exactly the same issue as the first post. I already added the image tag; the DNS is working, but I can't access the admin page.

NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                   AGE
pihole-tcp   LoadBalancer   10.43.111.152   <pending>     80:31280/TCP,443:31100/TCP,53:30922/TCP   8m49s
pihole-udp   LoadBalancer   10.43.123.103   10.1.10.5     53:32536/UDP,67:32383/UDP                 8m49s

@MoJo2600
Owner

MoJo2600 commented Nov 2, 2021

@appleimperio Could you do a kubectl describe svc pihole-tcp? It seems like it is not getting an external IP. If you do a port-forward to the container, is the web frontend available then? Maybe you could also add the log output from the container?
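A sketch of those two checks (the deployment name is assumed from earlier output in this thread):

```shell
# Bypass the LoadBalancer: forward a local port straight to the pod's web port
kubectl -n pihole port-forward deploy/pihole 8080:80
# then, in another shell:
curl -I http://localhost:8080/admin/

# Container logs from the same deployment
kubectl logs -n pihole deploy/pihole
```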

@appleimperio

Thanks, but I got lost in all the commands I tried to fix the problem, so I decided to delete the whole cluster and try again.

@luciano-coder

luciano-coder commented Mar 18, 2023

I hit this issue using the latest helm chart and running on a k3s RPi cluster (with Traefik, CoreDNS, ServiceLB). I found this issue occurs when the serviceDns type is set to LoadBalancer as in Jeff Geerling's guides:

serviceDns:
  loadBalancerIP: 192.168.178.252
  type: LoadBalancer

Commenting this out allows the images to be pulled and pi-hole to be deployed. My solution for exposing pi-hole DNS on port 53 was via Traefik, as follows:

Pi-hole config...

serviceDns:
  type: ClusterIP

Add Traefik entrypoints via the k3s configuration options (i.e. create /var/lib/rancher/k3s/server/manifests/traefik-config.yaml on the master k8s node with contents):

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      udp-dns:
        port: 5053
        expose: true
        exposedPort: 53
        protocol: UDP
      tcp-dns:
        port: 5054
        expose: true
        exposedPort: 53
        protocol: TCP

Create IngressRoutes for TCP and UDP:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: pihole-dns-tcp
  namespace: pihole
spec:
  entryPoints:
  - tcp-dns
  routes:
  - match: HostSNI(`*`)
    services:
    - name: pihole-dns-tcp
      namespace: pihole
      port: dns
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: pihole-dns-udp
  namespace: pihole
spec:
  entryPoints:
  - udp-dns
  routes:
  - services:
    - name: pihole-dns-udp
      namespace: pihole
      port: dns-udp

One downside to this approach is that pi-hole only shows the Traefik pod's IP in the clients list in the pi-hole UI. I tried to enable proxyProtocol in the IngressRouteTCP service section, but it looks like pi-hole doesn't support it. See https://discourse.pi-hole.net/t/add-proxy-protocol-support-quick-win-doh-dot-dnscrypt-loadbalancing-dns-rulesets-with-dnsdist/28166/32

Hope this helps others.

@i5Js

i5Js commented Apr 12, 2023

Same here. I have tried, but it's impossible to see the original IPs/hostnames with Traefik. I will go back to nginx as a reverse proxy. It's a pity.

@diogosilva30

diogosilva30 commented May 20, 2023

Any update on this? I'm new to kubernetes and pihole, and having trouble configuring it.

I'm able to configure pihole with the LoadBalancer type and use the node's IP in my router's DNS configuration, and I can see that Pihole is working. However, when doing this I'm unable to pull any image within the cluster, always getting the ImagePullBackOff error. If I switch to any other type like NodePort, I'm able to pull images again, but the DNS stops working.
What am I missing here?

I'm using kubernetes with k3s (Traefik + CoreDNS + ServiceLB).
My current values.yaml file:

# -- Configuration for the DNS service on port 53
serviceDns:
  # Set type as "LoadBalancer" so the k3s service lb exposes the service
  # externally
  type: LoadBalancer
  # -- The port of the DNS service
  port: 53
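One hedged way to break this circular dependency on k3s is to give the host (and thus the kubelet) an upstream resolver that is not the local stub or the pihole service itself; the `--resolv-conf` flag is from the k3s server options, and the file path here is my own choice:

```shell
# Write a resolv.conf that names a real upstream resolver
echo 'nameserver 1.1.1.1' | sudo tee /etc/k3s-resolv.conf

# Point k3s (and the kubelet it embeds) at it instead of 127.0.0.53
k3s server --resolv-conf /etc/k3s-resolv.conf
```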

diogosilva30 added a commit to diogosilva30/k3s.dsilva.dev that referenced this issue May 20, 2023
- When deploying pihole on port 53 of the kubernetes cluster, the cluster would fail on any DNS lookup. It turns out the VM's configured DNS was "127.0.0.53" (a local target) instead of an upstream DNS server like Cloudflare or Google.

Refs: k3s-io/k3s#4486 MoJo2600/pihole-kubernetes#88