
minikube on podman driver: failed to start node: RoutableHostIPFromInside #7951

Closed · elegos opened this issue Apr 30, 2020 · 5 comments · Fixed by #7962
Labels: co/podman-driver, kind/bug, os/linux, priority/important-soon

elegos (Contributor) commented Apr 30, 2020

OS: Fedora 31 (64 bit)
podman: 1.9.0
minikube: 1.10.0-beta.2

Error description: when I try to start minikube with the podman driver, it fails with the following message:

failed to start node: startup failed: Failed to setup kubeconfig: RoutableHostIPFromInside is currently only implemented for docker https://github.com/containers/libpod/issues/5205

I'm not quite sure whether that libpod issue is actually relevant (it makes no mention of RoutableHostIPFromInside).

Steps to reproduce the issue:

  1. minikube start --driver=podman

Full output of failed command:

I0430 20:47:38.200835   20790 start.go:99] hostinfo: {"hostname":"localhost.localdomain","uptime":3230,"bootTime":1588269228,"procs":461,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"31","kernelVersion":"5.5.17-200.fc31.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"5a3e2727-374c-4665-8b07-e67a1fc66448"}
I0430 20:47:38.201846   20790 start.go:109] virtualization: kvm host
😄  minikube v1.10.0-beta.2 on Fedora 31
I0430 20:47:38.202052   20790 notify.go:125] Checking for updates...
I0430 20:47:38.202868   20790 driver.go:253] Setting default libvirt URI to qemu:///system
I0430 20:47:38.256663   20790 podman.go:97] podman version: 1.9.0
✨  Using the podman (experimental) driver based on existing profile
I0430 20:47:38.256727   20790 start.go:206] selected driver: podman
I0430 20:47:38.256732   20790 start.go:579] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.88.0.5 Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0430 20:47:38.256781   20790 start.go:585] status for podman: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
👍  Starting control plane node minikube in cluster minikube
I0430 20:47:38.256862   20790 cache.go:103] Beginning downloading kic artifacts for podman with docker
I0430 20:47:38.256870   20790 cache.go:115] Driver isn't docker, skipping base-image download
I0430 20:47:38.256878   20790 preload.go:82] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0430 20:47:38.256898   20790 preload.go:97] Found local preload: /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0430 20:47:38.256904   20790 cache.go:47] Caching tarball of preloaded images
I0430 20:47:38.256923   20790 preload.go:123] Found /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0430 20:47:38.256935   20790 cache.go:50] Finished verifying existence of preloaded tar for  v1.18.1 on docker
I0430 20:47:38.257039   20790 profile.go:156] Saving config to /home/elegos/.minikube/profiles/minikube/config.json ...
I0430 20:47:38.257322   20790 cache.go:125] Successfully downloaded all kic artifacts
I0430 20:47:38.257350   20790 start.go:223] acquiring machines lock for minikube: {Name:mk54bbd76b9ba071d84e6139eee3a3cd7ecc36f4 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0430 20:47:38.257561   20790 start.go:227] acquired machines lock for "minikube" in 188.521µs
I0430 20:47:38.257585   20790 start.go:87] Skipping create...Using existing machine configuration
I0430 20:47:38.257595   20790 fix.go:53] fixHost starting: 
I0430 20:47:38.257910   20790 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0430 20:47:38.323748   20790 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0430 20:47:38.323781   20790 fix.go:131] unexpected machine state, will restart: <nil>
🏃  Updating the running podman "minikube" container ...
I0430 20:47:38.324036   20790 machine.go:86] provisioning docker machine ...
I0430 20:47:38.324060   20790 ubuntu.go:166] provisioning hostname "minikube"
I0430 20:47:38.324139   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:38.395907   20790 main.go:110] libmachine: Using SSH client type: native
I0430 20:47:38.396126   20790 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 38305 <nil> <nil>}
I0430 20:47:38.396151   20790 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0430 20:47:38.512325   20790 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0430 20:47:38.512528   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:38.586744   20790 main.go:110] libmachine: Using SSH client type: native
I0430 20:47:38.586942   20790 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 38305 <nil> <nil>}
I0430 20:47:38.586972   20790 main.go:110] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else 
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                        fi
                fi
I0430 20:47:38.693906   20790 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0430 20:47:38.693933   20790 ubuntu.go:172] set auth options {CertDir:/home/elegos/.minikube CaCertPath:/home/elegos/.minikube/certs/ca.pem CaPrivateKeyPath:/home/elegos/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/elegos/.minikube/machines/server.pem ServerKeyPath:/home/elegos/.minikube/machines/server-key.pem ClientKeyPath:/home/elegos/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/elegos/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/elegos/.minikube}
I0430 20:47:38.693960   20790 ubuntu.go:174] setting up certificates
I0430 20:47:38.693983   20790 provision.go:82] configureAuth start
I0430 20:47:38.694048   20790 cli_runner.go:108] Run: sudo podman inspect -f {{.NetworkSettings.IPAddress}} minikube
I0430 20:47:38.767609   20790 provision.go:131] copyHostCerts
I0430 20:47:38.767669   20790 exec_runner.go:91] found /home/elegos/.minikube/ca.pem, removing ...
I0430 20:47:38.767720   20790 exec_runner.go:98] cp: /home/elegos/.minikube/certs/ca.pem --> /home/elegos/.minikube/ca.pem (1038 bytes)
I0430 20:47:38.767847   20790 exec_runner.go:91] found /home/elegos/.minikube/cert.pem, removing ...
I0430 20:47:38.767886   20790 exec_runner.go:98] cp: /home/elegos/.minikube/certs/cert.pem --> /home/elegos/.minikube/cert.pem (1078 bytes)
I0430 20:47:38.767980   20790 exec_runner.go:91] found /home/elegos/.minikube/key.pem, removing ...
I0430 20:47:38.768014   20790 exec_runner.go:98] cp: /home/elegos/.minikube/certs/key.pem --> /home/elegos/.minikube/key.pem (1675 bytes)
I0430 20:47:38.768089   20790 provision.go:105] generating server cert: /home/elegos/.minikube/machines/server.pem ca-key=/home/elegos/.minikube/certs/ca.pem private-key=/home/elegos/.minikube/certs/ca-key.pem org=elegos.minikube san=[10.88.0.5 localhost 127.0.0.1]
I0430 20:47:38.959958   20790 provision.go:159] copyRemoteCerts
I0430 20:47:38.959998   20790 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0430 20:47:38.960060   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:39.027797   20790 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:38305 SSHKeyPath:/home/elegos/.minikube/machines/minikube/id_rsa Username:docker}
I0430 20:47:39.107272   20790 ssh_runner.go:215] scp /home/elegos/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1038 bytes)
I0430 20:47:39.122118   20790 ssh_runner.go:215] scp /home/elegos/.minikube/machines/server.pem --> /etc/docker/server.pem (1119 bytes)
I0430 20:47:39.135770   20790 ssh_runner.go:215] scp /home/elegos/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0430 20:47:39.148849   20790 provision.go:85] duration metric: configureAuth took 454.853746ms
I0430 20:47:39.148869   20790 ubuntu.go:190] setting minikube options for container-runtime
I0430 20:47:39.149076   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:39.219836   20790 main.go:110] libmachine: Using SSH client type: native
I0430 20:47:39.220028   20790 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 38305 <nil> <nil>}
I0430 20:47:39.220046   20790 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0430 20:47:39.330455   20790 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0430 20:47:39.330488   20790 ubuntu.go:71] root file system type: overlay
I0430 20:47:39.330715   20790 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0430 20:47:39.330819   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:39.403191   20790 main.go:110] libmachine: Using SSH client type: native
I0430 20:47:39.403393   20790 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 38305 <nil> <nil>}
I0430 20:47:39.403530   20790 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0430 20:47:39.517336   20790 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0430 20:47:39.517542   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:39.589754   20790 main.go:110] libmachine: Using SSH client type: native
I0430 20:47:39.589951   20790 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 38305 <nil> <nil>}
I0430 20:47:39.589984   20790 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0430 20:47:39.702894   20790 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0430 20:47:39.702919   20790 machine.go:89] provisioned docker machine in 1.378872356s
I0430 20:47:39.702932   20790 start.go:186] post-start starting for "minikube" (driver="podman")
I0430 20:47:39.702948   20790 start.go:196] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0430 20:47:39.703004   20790 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0430 20:47:39.703046   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:39.771776   20790 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:38305 SSHKeyPath:/home/elegos/.minikube/machines/minikube/id_rsa Username:docker}
I0430 20:47:39.853532   20790 ssh_runner.go:148] Run: cat /etc/os-release
I0430 20:47:39.855862   20790 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0430 20:47:39.855888   20790 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0430 20:47:39.855905   20790 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0430 20:47:39.855915   20790 info.go:96] Remote host: Ubuntu 19.10
I0430 20:47:39.855927   20790 filesync.go:118] Scanning /home/elegos/.minikube/addons for local assets ...
I0430 20:47:39.855973   20790 filesync.go:118] Scanning /home/elegos/.minikube/files for local assets ...
I0430 20:47:39.855999   20790 start.go:189] post-start completed in 153.049657ms
I0430 20:47:39.856009   20790 fix.go:55] fixHost completed within 1.598415362s
I0430 20:47:39.856017   20790 start.go:74] releasing machines lock for "minikube", held for 1.598441812s
I0430 20:47:39.856084   20790 cli_runner.go:108] Run: sudo podman inspect -f {{.NetworkSettings.IPAddress}} minikube
I0430 20:47:39.929711   20790 profile.go:156] Saving config to /home/elegos/.minikube/profiles/minikube/config.json ...
I0430 20:47:39.929737   20790 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0430 20:47:39.929833   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:39.930052   20790 ssh_runner.go:148] Run: systemctl --version
I0430 20:47:39.930113   20790 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0430 20:47:39.998212   20790 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:38305 SSHKeyPath:/home/elegos/.minikube/machines/minikube/id_rsa Username:docker}
I0430 20:47:40.052608   20790 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:38305 SSHKeyPath:/home/elegos/.minikube/machines/minikube/id_rsa Username:docker}
I0430 20:47:40.129672   20790 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0430 20:47:40.137532   20790 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0430 20:47:40.137601   20790 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0430 20:47:40.146227   20790 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0430 20:47:40.185787   20790 ssh_runner.go:148] Run: sudo systemctl start docker
I0430 20:47:40.192807   20790 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.18.1 on Docker 19.03.2 ...
E0430 20:47:40.238167   20790 start.go:94] Unable to get host IP: RoutableHostIPFromInside is currently only implemented for docker https://github.com/containers/libpod/issues/5205
I0430 20:47:40.238310   20790 exit.go:58] WithError(failed to start node)=startup failed: Failed to setup kubeconfig: RoutableHostIPFromInside is currently only implemented for docker https://github.com/containers/libpod/issues/5205 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1ad8d68, 0x14, 0x1d989e0, 0xc0007da920)
        /app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2ae2860, 0xc0004f41b0, 0x0, 0x1)
        /app/cmd/minikube/cmd/start.go:195 +0x7c4
github.com/spf13/cobra.(*Command).execute(0x2ae2860, 0xc0004f41a0, 0x1, 0x1, 0x2ae2860, 0xc0004f41a0)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2ae18a0, 0x0, 0x1, 0xc00065e040)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /app/cmd/minikube/cmd/root.go:108 +0x6a4
main.main()
        /app/cmd/minikube/main.go:66 +0xea
W0430 20:47:40.238406   20790 out.go:201] failed to start node: startup failed: Failed to setup kubeconfig: RoutableHostIPFromInside is currently only implemented for docker https://github.com/containers/libpod/issues/5205

💣  failed to start node: startup failed: Failed to setup kubeconfig: RoutableHostIPFromInside is currently only implemented for docker https://github.com/containers/libpod/issues/5205

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Thank you very much.

elegos changed the title from "[1.10.0-beta.2] minikube on podman: failed to start node: RoutableHostIPFromInside" to "minikube on podman driver: failed to start node: RoutableHostIPFromInside" on Apr 30, 2020
medyagh (Member) commented Apr 30, 2020

This is a bug that could be related to @tstromberg's PR.

medyagh added the kind/bug label on Apr 30, 2020
tstromberg added the priority/important-soon label on May 1, 2020
afbjorklund added the co/podman-driver and os/linux labels on May 1, 2020
afbjorklund (Collaborator) commented

Not sure what the fix is; for Docker we use the local bridge:

// dockerGatewayIP gets the default gateway ip for the docker bridge on the user's host machine
// gets the ip from user's host docker
func dockerGatewayIP() (net.IP, error) {
        rr, err := runCmd(exec.Command(Docker, "network", "ls", "--filter", "name=bridge", "--format", "{{.ID}}"))
        if err != nil {
                return nil, errors.Wrapf(err, "get network bridge")
        }

        bridgeID := strings.TrimSpace(rr.Stdout.String())
        rr, err = runCmd(exec.Command(Docker, "inspect",
                "--format", "{{(index .IPAM.Config 0).Gateway}}", bridgeID))
        if err != nil {
                return nil, errors.Wrapf(err, "inspect IP bridge network %q.", bridgeID)
        }

        ip := net.ParseIP(strings.TrimSpace(rr.Stdout.String()))
        glog.Infof("got host ip for mount in container by inspect docker network: %s", ip.String())
        return ip, nil
}

Presumably something similar is needed to get the podman CNI bridge?

$ sudo podman network ls --filter name=bridge
Error: unknown flag: --filter
$ sudo podman network ls
NAME     VERSION   PLUGINS
podman   0.4.0     bridge,portmap,firewall,tuning

Not sure how to inspect it, though. Poke around in /etc/cni?

{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "ranges": [
          [
            {
              "subnet": "10.88.0.0/16",
              "gateway": "10.88.0.1"
            }
          ]
        ]
      }
    }
  ]
}
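
If the /etc/cni route were taken, a rough Go sketch for pulling the gateway out of a config like the one above might look like this. It is only an illustration, not an existing minikube helper: the conflist path, the cniGatewayIP name, and the imports (encoding/json, io/ioutil, net, fmt) are all assumptions.

// cniGatewayIP parses the default podman CNI config and returns the bridge
// gateway (10.88.0.1 in the config above). The path is an assumption: podman
// normally installs its default network as 87-podman-bridge.conflist.
func cniGatewayIP() (net.IP, error) {
        data, err := ioutil.ReadFile("/etc/cni/net.d/87-podman-bridge.conflist")
        if err != nil {
                return nil, err
        }
        var conf struct {
                Plugins []struct {
                        Type string `json:"type"`
                        IPAM struct {
                                Ranges [][]struct {
                                        Gateway string `json:"gateway"`
                                } `json:"ranges"`
                        } `json:"ipam"`
                } `json:"plugins"`
        }
        if err := json.Unmarshal(data, &conf); err != nil {
                return nil, err
        }
        for _, plugin := range conf.Plugins {
                if plugin.Type != "bridge" {
                        continue
                }
                for _, rangeSet := range plugin.IPAM.Ranges {
                        for _, r := range rangeSet {
                                if ip := net.ParseIP(r.Gateway); ip != nil {
                                        return ip, nil
                                }
                        }
                }
        }
        return nil, fmt.Errorf("no bridge gateway found in CNI config")
}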

Or just look at the Gateway of the container, maybe?

            "Gateway": "10.88.0.1",
            "IPAddress": "10.88.0.2",

Could probably do that for docker as well as for podman:

            "Gateway": "172.17.0.1",
            "IPAddress": "172.17.0.2",

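As a rough sketch of that idea, a helper could ask the engine for the container's gateway directly. This reuses runCmd, errors, strings, net and exec from the dockerGatewayIP snippet above; containerGatewayIP is a hypothetical name, and the sketch glosses over the fact that minikube invokes podman through sudo:

// containerGatewayIP returns the gateway of a running container by asking the
// engine (docker or podman) for .NetworkSettings.Gateway, which both expose.
func containerGatewayIP(ociBin string, containerName string) (net.IP, error) {
        rr, err := runCmd(exec.Command(ociBin, "container", "inspect",
                "--format", "{{.NetworkSettings.Gateway}}", containerName))
        if err != nil {
                return nil, errors.Wrapf(err, "inspect gateway of %q", containerName)
        }
        ip := net.ParseIP(strings.TrimSpace(rr.Stdout.String()))
        if ip == nil {
                return nil, errors.Errorf("could not parse gateway %q", rr.Stdout.String())
        }
        return ip, nil
}
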
elegos (Contributor, Author) commented May 1, 2020

We can access the container's gateway via the following command:

podman inspect --format "{{.NetworkSettings.Gateway}}" $CONTAINER_ID_OR_NAME

Thing is: do we have the container's ID or name?

afbjorklund (Collaborator) commented

> Thing is: do we have the container's ID or name?

We do.

// RoutableHostIPFromInside returns the ip/dns of the host that container lives on
func RoutableHostIPFromInside(ociBin string, containerName string) (net.IP, error)
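
For example, one hedged way to wire this up is to keep the existing docker bridge lookup and fall back to the container's own gateway for podman. The containerGatewayIP helper here is the illustrative sketch from earlier in the thread, not the actual change that landed in #7962:

// RoutableHostIPFromInside returns the ip/dns of the host that container lives on
func RoutableHostIPFromInside(ociBin string, containerName string) (net.IP, error) {
        if ociBin == Docker {
                // keep the existing docker bridge-network lookup
                return dockerGatewayIP()
        }
        // for podman, use the gateway of the container itself
        // (10.88.0.1 on the default CNI bridge shown above)
        return containerGatewayIP(ociBin, containerName)
}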

afbjorklund (Collaborator) commented

Left the docker hacks in place. Not sure what to do for podman-remote; look up the host name?
