prometheus operator pod fails to start #74

Closed

stonecharioteer opened this issue Jul 3, 2020 · 4 comments
stonecharioteer commented Jul 3, 2020

Hi,

I have been trying to run these services on K3s on a 4-node Raspberry Pi 4 cluster running Raspberry Pi OS (64-bit).

The prometheus-operator pod fails to start and ends up in a CrashLoopBackOff.

Here are the logs:

I0703 13:53:53.654293       1 main.go:186] Valid token audiences: 
I0703 13:53:53.654469       1 main.go:232] Generating self signed cert as no cert is provided
I0703 13:53:55.656589       1 main.go:281] Starting TCP socket on :8443
I0703 13:53:55.657421       1 main.go:288] Listening securely on :8443
ts=2020-07-03T13:55:09.510176891Z caller=main.go:217 msg="Starting Prometheus Operator version '0.40.0'."
ts=2020-07-03T13:55:09.527627448Z caller=main.go:104 msg="Starting insecure server on [::]:8080"
ts=2020-07-03T13:55:39.528537746Z caller=main.go:385 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get \"https://10.43.0.1:443/version?timeout=32s\": dial tcp 10.43.0.1:443: i/o timeout"
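
For reference, a sketch of how the logs above can be pulled; the monitoring namespace and the container names are assumptions based on the usual kube-prometheus layout:

# Adjust the namespace/container names to match your deployment.
kubectl -n monitoring get pods
kubectl -n monitoring logs deploy/prometheus-operator -c prometheus-operator
kubectl -n monitoring logs deploy/prometheus-operator -c kube-rbac-proxy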

Could you help me figure out what I am doing wrong?

This also causes the other two pods to fail:

NAME                                   READY   STATUS             RESTARTS   AGE
arm-exporter-dmqk2                     2/2     Running            0          10m
arm-exporter-cf88w                     2/2     Running            0          10m
arm-exporter-rmhfm                     2/2     Running            0          10m
arm-exporter-2s9jx                     2/2     Running            0          10m
node-exporter-xl25c                    2/2     Running            0          10m
node-exporter-gh852                    2/2     Running            0          10m
node-exporter-rflms                    2/2     Running            0          10m
node-exporter-d5w67                    2/2     Running            0          10m
grafana-7bcf47fbcb-8t9sq               1/1     Running            0          10m
prometheus-operator-6b8868d698-7jhwx   1/2     CrashLoopBackOff   6          11m
prometheus-adapter-f78c4f4ff-8rchv     0/1     CrashLoopBackOff   6          10m
kube-state-metrics-96bf99844-894lx     2/3     CrashLoopBackOff   6          10m
carlosedp (Owner) commented

Do you have any ports conflicting with it?
That appears to be an error trying to talk to the K3s api-server.
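
One way to confirm that is to make the same call the operator makes from a throwaway pod; a rough sketch (the service IP is taken from the log above, and curlimages/curl is just one convenient image):

# Throwaway pod making the same API-server call that times out in the operator.
kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 10 https://10.43.0.1:443/version

If this also times out, the pod network on that node cannot reach the API server at all, which points at the networking layer rather than at the operator.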

stonecharioteer (Author) commented Jul 4, 2020 via email

stonecharioteer (Author) commented

Here's a dump of sudo netstat -tupln

sudo netstat -tupln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10010         0.0.0.0:*               LISTEN      1161/containerd     
tcp        0      0 0.0.0.0:32190           0.0.0.0:*               LISTEN      1117/k3s            
tcp        0      0 0.0.0.0:31744           0.0.0.0:*               LISTEN      1117/k3s            
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1117/k3s            
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      1117/k3s            
tcp        0      0 192.168.68.120:9100     0.0.0.0:*               LISTEN      3015/./kube-rbac-pr 
tcp        0      0 127.0.0.1:9100          0.0.0.0:*               LISTEN      2771/node_exporter  
tcp        0      0 127.0.0.1:6444          0.0.0.0:*               LISTEN      1117/k3s            
tcp        0      0 0.0.0.0:31568           0.0.0.0:*               LISTEN      1117/k3s            
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      1117/k3s            
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      540/sshd            
tcp6       0      0 :::5001                 :::*                    LISTEN      12307/docker-proxy  
tcp6       0      0 :::10250                :::*                    LISTEN      1117/k3s            
tcp6       0      0 :::10251                :::*                    LISTEN      1117/k3s            
tcp6       0      0 :::6443                 :::*                    LISTEN      1117/k3s            
tcp6       0      0 :::10252                :::*                    LISTEN      1117/k3s            
tcp6       0      0 :::8080                 :::*                    LISTEN      12499/docker-proxy  
tcp6       0      0 :::22                   :::*                    LISTEN      540/sshd            
udp        0      0 0.0.0.0:68              0.0.0.0:*                           455/dhcpcd          
udp        0      0 0.0.0.0:8472            0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           384/avahi-daemon: r 
udp        0      0 0.0.0.0:42218           0.0.0.0:*                           384/avahi-daemon: r 
udp6       0      0 :::36409                :::*                                384/avahi-daemon: r 
udp6       0      0 :::546                  :::*                                455/dhcpcd          
udp6       0      0 :::5353                 :::*                                384/avahi-daemon: r

stonecharioteer (Author) commented

Ah, this looks like a K3s issue. It got fixed the moment I uninstalled docker-ce on my Raspberry Pi 4s.

It has been raised as an issue with K3s.
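
For anyone hitting the same thing, the fix on my side was roughly the following (package and service names assume Raspberry Pi OS / Debian and a standard K3s install, so adjust as needed):

# Remove docker-ce so it no longer interferes with K3s networking,
# then restart K3s on every node.
sudo apt-get remove --purge docker-ce docker-ce-cli containerd.io
sudo systemctl restart k3s          # on the server node
sudo systemctl restart k3s-agent    # on the agent nodes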

Thank you for this repo, it is truly amazing.
