
Error: Get https://10.152.183.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.152.183.1:443: connect: no route to host #854

Closed
shreeram-silwal opened this issue Dec 10, 2019 · 21 comments
Labels
kind/support Question with a workaround

Comments

@shreeram-silwal

Please run microk8s.inspect and attach the generated tarball to this issue.

Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-flanneld is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster

WARNING: Docker is installed.
Add the following lines to /etc/docker/daemon.json:
{
"insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
Report tarball is at /var/snap/microk8s/1079/inspection-report-20191210_050225.tar.gz

After initializing Helm by creating the tiller service account, the tiller pod deploys successfully, but helm is not able to communicate with tiller.

helm ls
Error: Get https://10.152.183.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.152.183.1:443: connect: no route to host

Aliases are added for helm and kubectl:
alias helm='microk8s.helm'
alias kubectl='microk8s.kubectl'
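
(To keep these aliases across shells they can go in ~/.bash_aliases, which Ubuntu's default ~/.bashrc sources; a minimal sketch:)

echo "alias helm='microk8s.helm'" >> ~/.bash_aliases
echo "alias kubectl='microk8s.kubectl'" >> ~/.bash_aliases
source ~/.bash_aliases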

kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-9b8997588-6lmzt          0/1     Running   0          18h
tiller-deploy-68cff9d9cb-hgl2f   1/1     Running   0          22h

The tiller pod is running without any errors.

Also, when enabling DNS with microk8s.enable dns, the coredns pod never becomes ready even though its status shows Running.

Logs of coredns:

2019-12-10T04:25:56.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-12-10T04:26:06.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E1210 04:26:14.260047       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
E1210 04:26:16.280384       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
2019-12-10T04:26:16.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E1210 04:26:18.296309       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
E1210 04:26:20.304944       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Service: Get https://10.152.183.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
E1210 04:26:20.312509       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
2019-12-10T04:26:26.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E1210 04:26:27.313029       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Namespace: Get https://10.152.183.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
E1210 04:26:29.336441       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Namespace: Get https://10.152.183.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
2019-12-10T04:26:36.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-12-10T04:26:46.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
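
(For reference, one way to see why the coredns pod stays 0/1 is to describe it; this sketch assumes the usual k8s-app=kube-dns label on the coredns deployment:)

kubectl describe pod -n kube-system -l k8s-app=kube-dns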
@ktsakalozos (Member)

Looks like the pod cannot access the Kubernetes API. This is usually a network configuration issue solved with:

sudo iptables -P FORWARD ACCEPT

Please go through the common issues section at https://microk8s.io/docs/troubleshooting#common-issues
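
For completeness, a quick way to verify the change and keep it across reboots (a sketch assuming Ubuntu and the iptables-persistent package):

sudo iptables -P FORWARD ACCEPT
sudo iptables -L FORWARD | head -1          # chain policy should now read ACCEPT
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save              # writes the current rules to /etc/iptables/rules.v4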

@shreeram-silwal (Author)

I tried adding that rule but it didn't work.

@shreeram-silwal (Author)

@ktsakalozos can you provide any other information to fix the issue?

@ktsakalozos (Member)

From inside a busybox pod (https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/admin/dns/busybox.yaml) you should be able to reach 10.152.183.1. If not, there could be a firewall involved. What does microk8s.inspect tell you?
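
For example, once that busybox pod is up, something like this should test reachability (a rough sketch; exact flags depend on the busybox build):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/admin/dns/busybox.yaml
# plain TCP connect to the API service IP; "no route to host" here points to the firewall
kubectl exec busybox -- nc -w 5 10.152.183.1 443 < /dev/null && echo reachable || echo unreachable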

@shreeram-silwal (Author)

The microk8s.inspect logs are in the top section of this issue; I included them when creating it.

@shreeram-silwal (Author)

ufw is inactive and I have already added the iptables rule, but the issue is not resolved. From the busybox container the IP is not reachable.

@shreeram-silwal (Author)

Once again, microk8s.inspect reports the following:

microk8s.inspect

Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-flanneld is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster

Building the report tarball
Report tarball is at /var/snap/microk8s/1079/inspection-report-20191210_094530.tar.gz

@ktsakalozos (Member)

Can you please share the produced tarball?

@shreeram-silwal (Author)

inspection-report-20191210_094530.tar.gz
tarball file

@ktsakalozos (Member)

Can you share your /etc/hosts and route -n output? I wonder if IPv6 is getting in the way.

@shreeram-silwal (Author)

cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.1.1 identv identv

route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens3
0.0.0.0         10.0.0.1        0.0.0.0         UG    100    0        0 ens3
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens3
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 cbr0
10.1.75.0       0.0.0.0         255.255.255.0   U     0      0        0 cni0
169.254.0.0     0.0.0.0         255.255.0.0     U     100    0        0 ens3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

@ktsakalozos (Member) commented Dec 10, 2019

@shreeram-silwal I see this machine is a KVM guest. How did you create it? I would like to reproduce your setup to figure out why this happens.

@shreeram-silwal (Author)

It's an Oracle Cloud VM, not a KVM virtual machine.

@shreeram-silwal (Author)

@ktsakalozos as you said, it looks like KVM, but it's an Oracle VM. Did you find anything about the issue by looking into the tarball file?

@shreeram-silwal (Author)

I also tried creating another Ubuntu VM on Oracle, but the same issue occurred.

@ktsakalozos (Member)

I spent some time on Oracle Cloud; here is what is probably biting us.

If you do a sudo iptables -S you will see that the INPUT chain ends with:

-A INPUT -j REJECT --reject-with icmp-host-prohibited

The forward chain starts with:

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

If you remove these two rules you should allow the traffic to flow to the API server.

sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited

I do not know much about firewalls, but it seems to me that these two rules, which are there by default, work against the default policies:

-P INPUT ACCEPT
-P FORWARD ACCEPT

Here is some info in case you want to create your own ingress/egress rules: the pods get IPs in 10.1.0.0/16 and the services get IPs in 10.152.183.0/24.
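
If you would rather keep the REJECT rules and only open cluster traffic, something along these lines should also work (untested sketch based on the CIDRs above):

sudo iptables -I INPUT -s 10.1.0.0/16 -j ACCEPT
sudo iptables -I INPUT -s 10.152.183.0/24 -j ACCEPT
sudo iptables -I FORWARD -s 10.1.0.0/16 -j ACCEPT
sudo iptables -I FORWARD -d 10.1.0.0/16 -j ACCEPT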

@shreeram-silwal (Author)

Thanks @ktsakalozos, it's working now.

@kunalpuriii commented Jun 12, 2020

@ktsakalozos I am facing the same issue on-prem; any help is appreciated.
I have a freshly installed Kubernetes 1.16.0 cluster. This is a single-master cluster that we use for testing and integration with other network elements. The nginx pod intermittently fails to talk to the API, like this:

[root@001 ~/kubernetes-ingress/deployments] kubectl logs -n nginx-ingress nginx-ingress-57cdc75bdb-9kdrk
I0612 10:08:40.885600 1 main.go:169] Starting NGINX Ingress controller Version=1.6.3 GitCommit=b9378d56
F0612 10:09:10.893413 1 main.go:275] Error trying to get the default server TLS secret nginx-ingress/default-server-secret: could not get nginx-ingress/default-server-secret: Get https://192.168.209.1:443/api/v1/namespaces/nginx-ingress/secrets/default-server-secret: dial tcp 192.168.209.1:443: i/o timeout

@jpancoast

I ran into this issue recently because of firewalld (it adds those two icmp-host-prohibited rules automatically).

Another way to fix it is to turn masquerading on for the default zone:

firewall-cmd --add-masquerade --permanent
systemctl restart firewalld
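
An alternative that avoids masquerading the whole default zone is to put the cluster interfaces into the trusted zone (a sketch; cni0/cbr0 match the route -n output earlier in this thread and may differ with your CNI):

firewall-cmd --permanent --zone=trusted --add-interface=cni0
firewall-cmd --permanent --zone=trusted --add-interface=cbr0
firewall-cmd --reload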

@jrib commented Sep 12, 2023

Also on Oracle Cloud, I ended up adding these to my /etc/iptables/rules.v4 before the existing -A INPUT -j REJECT rule:

-A INPUT -i vxlan.calico -j ACCEPT
-A INPUT -i cali+ -j ACCEPT

These mimic the ufw rules described in https://microk8s.io/docs/troubleshooting#heading--common-issues and are a bit more strict than -A INPUT -j ACCEPT.

I also commented out the FORWARD reject rule:

#-A FORWARD -j REJECT --reject-with icmp-host-prohibited

After making those two modifications to /etc/iptables/rules.v4, I ran:

 sudo iptables-restore < /etc/iptables/rules.v4
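
A quick sanity check afterwards (the two ACCEPT rules should print, and no FORWARD REJECT should remain):

sudo iptables -S INPUT | grep -E 'vxlan.calico|cali\+'
sudo iptables -S FORWARD | grep REJECT || echo "no FORWARD reject rule"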

@avinash-platformatory

In my case, I was using 10.1.0.0/16 as my VPC CIDR block, and it was conflicting with the pod IPs that MicroK8s uses.
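
A quick way to spot that kind of overlap is to compare the VPC subnet against the routes the CNI installs, e.g.:

ip route | grep '10.1.'    # both the VPC and the CNI bridge claiming 10.1.x.x means pod traffic gets misrouted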
