Can't start minikube after initial install (could not unmarshal the JSON output of 'docker info') #11174
@SteveBisnett, do you mind sharing the output of … ? Alternatively, I am curious whether this flag helps you. Is this running inside a VM, or inside another container? If it is running inside a container, one option would be using the … .
The original error comes from … . Another thing to try would be the containerd runtime; would that help?
So I have attempted to start with --driver=none, since this is a VM, and I get the same results. It is as though Docker is not running, despite being able to get a status and run "Hello World". Here is the output of the --container-runtime=containerd command:

[root@control-plane ~]# minikube start --container-runtime=containerd
X Exiting due to PROVIDER_DOCKER_NOT_RUNNING: expected version string format is "-". but got
Can you post the output of … ?
[root@control-plane ~]# sudo docker version
Server: Docker Engine - Community
Without the sudo. Something like:

$ docker version --format "{{.Server.Os}}-{{.Server.Version}}"
linux-20.10.6
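The PROVIDER_DOCKER_NOT_RUNNING error above complains that the expected version string format is "os-version" but the actual output was empty. A minimal sketch of that check, with a hypothetical helper name (the real logic lives in minikube's docker machine driver):

```shell
# Hypothetical re-implementation of the version-string check behind
# PROVIDER_DOCKER_NOT_RUNNING: the client output must look like
# "<os>-<version>", e.g. "linux-20.10.6". Empty output, as seen in
# this issue, fails the check.
valid_docker_version_string() {
  case "$1" in
    ?*-?*) echo valid ;;     # at least one character on each side of a dash
    *)     echo invalid ;;
  esac
}

# In practice you would feed it the real CLI output:
#   valid_docker_version_string "$(docker version --format '{{.Server.Os}}-{{.Server.Version}}' 2>/dev/null)"
```

This makes the symptom concrete: an empty string (the reporter's case) and a bare "-" both fail, while "linux-20.10.6" passes.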
I can't. Despite following the instructions in "Manage Docker as a non-root user" (https://docs.docker.com/engine/install/linux-postinstall/), it will only respond when I use sudo.
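For context, the post-install steps amount to the commented commands below. A common gotcha is that the group change only applies to new login sessions; the small helper (a hypothetical name, not part of any tool here) checks whether it actually took effect:

```shell
# From the Docker "linux-postinstall" guide (shown as comments because
# they change system state):
#   sudo groupadd docker            # group may already exist
#   sudo usermod -aG docker "$USER"
#   newgrp docker                   # or log out and back in

# Check whether a session's group list includes "docker"; pass it the
# output of `id -Gn`. Membership added with usermod is NOT visible in
# sessions that were already open.
in_docker_group() {
  case " $1 " in
    *" docker "*) echo yes ;;
    *)            echo no ;;
  esac
}

# usage: in_docker_group "$(id -Gn)"
```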
minikube is supposed to be able to detect the docker error, so for some reason we get an "OK" exit code, but no output? Possibly we need to look out for "" results from … .
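The failure mode being discussed can be sketched as follows (hypothetical helper; the real check lives in kubeadm's system verification, which as I understand it formats 'docker info' as JSON): an empty string is not valid JSON, which is exactly the "unexpected end of JSON input" error.

```shell
# Classify a docker info result the way the preflight check effectively
# does: empty output cannot be unmarshalled as JSON.
check_docker_json() {
  if [ -z "$1" ]; then
    echo "empty"   # -> "could not unmarshal the JSON output of 'docker info'"
  else
    echo "ok"
  fi
}

# A real invocation would be something like:
#   check_docker_json "$(docker info --format '{{json .}}' 2>/dev/null)"
```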
Here is the output of 'docker info', of course with sudo:

[root@control-plane ~]# sudo docker info
Server:
WARNING: API is accessible on http://127.0.0.1:2375 without encryption.
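The warning about the API being accessible on http://127.0.0.1:2375 indicates the daemon also listens on a TCP socket. One thing worth checking (my assumption, not something established in the thread) is whether DOCKER_HOST points the client at an endpoint the non-root user cannot reach; a small sketch of classifying the endpoint:

```shell
# Classify a DOCKER_HOST value. An empty value means the client uses the
# default unix:///var/run/docker.sock, whose permissions are what the
# docker-group setup controls; a tcp:// value bypasses those permissions.
docker_endpoint_kind() {
  case "$1" in
    "")        echo "default-unix-socket" ;;
    unix://*)  echo "unix-socket" ;;
    tcp://*)   echo "tcp" ;;
    *)         echo "other" ;;
  esac
}

# usage: docker_endpoint_kind "$DOCKER_HOST"
```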
We don't use sudo for docker, only for running podman... It is kind of arbitrary, and some people prefer using "sudo docker" over adding their user to a root-equivalent group, but it is a common setup: https://docs.docker.com/engine/install/linux-postinstall/ What is the output and exit code of running docker without sudo?
Anyway, can't reproduce this. Here is what I get, after downgrading Docker from 20.10 to 19.03: [admin@localhost ~]$ more /etc/redhat-release
CentOS Linux release 8.3.2011
[admin@localhost ~]$ docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:44 2021
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:15:19 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.9
GitCommit: ea765aba0d05254012b0b9e595e995c09186427f
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683

https://docs.docker.com/engine/install/centos/
Here is the expected output, from a non-admin (unprivileged) user:

[luser@localhost ~]$ docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:44 2021
OS/Arch: linux/amd64
Experimental: false
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
[luser@localhost ~]$ echo $?
1

Running docker requires* the user to have admin/docker/root privileges.

* except for rootless, which isn't yet supported in minikube
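Combining the two signals shown above (exit code and output) separates the normal permission failure from the silent case in this issue. A hedged sketch, with a hypothetical helper name:

```shell
# Map (exit_code, output) from a docker CLI probe to a rough diagnosis.
# Exit 0 with empty output is the odd combination reported in this issue;
# permission denied gives a non-zero exit with a message on stderr.
diagnose_docker_probe() {
  rc="$1"; out="$2"
  if [ "$rc" -ne 0 ]; then
    echo "cli-error"           # e.g. permission denied -> exit 1
  elif [ -z "$out" ]; then
    echo "ok-but-empty"        # the puzzling case in this issue
  else
    echo "ok"
  fi
}

# usage sketch:
#   out="$(docker version --format '{{.Server.Os}}-{{.Server.Version}}' 2>/dev/null)"
#   diagnose_docker_probe "$?" "$out"
```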
So, I already executed that command:

[root@control-plane ~]# sudo usermod -aG docker $USER

but when running 'docker info' without sudo it shows this: …
So you get these issues inside, when you run the commands with … , and not outside on the host, as part of the verification before running the … ? As you are running as root (and not the "docker" $USER) here, it should not be about permissions. Still trying to duplicate. Why is it running as "root", and where did the "control-plane" hostname come from?
I get these when accessing the console directly and logging in as root. Based upon your last posts, I reinstalled Docker and, after rebooting the system, I used sudo -i and attempted to start minikube with the following command: minikube start --driver=none. This time I received a different response, but the cluster still did not start up...

[root@control-plane ~]# minikube start --driver=none

stderr:
X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1

Minikube attempted 3 times to access the kubelet, but was never successful. It errored out with the following: error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
The none driver is very different from the docker driver. For instance, you need to remember to disable SELinux and firewalld: https://minikube.sigs.k8s.io/docs/drivers/none/ It also doesn't see much testing in CI on Fedora or CentOS (#3552).
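The usual preparation for the none driver on an SELinux/firewalld system looks roughly like the commented commands below (shown for context; verify against the linked driver docs). The small helper, a hypothetical name, decides from a `getenforce` result whether SELinux still needs attention:

```shell
# Typical prep for `minikube start --driver=none` on CentOS/Fedora
# (shown as comments because they change system state):
#   sudo setenforce 0                      # permissive until next reboot
#   sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
#   sudo systemctl disable --now firewalld

# Decide whether a `getenforce` result ("Enforcing", "Permissive",
# "Disabled") still needs action before using the none driver:
selinux_needs_action() {
  case "$1" in
    Enforcing) echo yes ;;
    *)         echo no ;;
  esac
}

# usage: selinux_needs_action "$(getenforce)"
```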
FirewallD is offline and disabled. This is running in a VM, and it was recommended to run it using the none driver. Starting with Docker, I am still getting the same errors as before.
Sure, either should work. It just can be a bit hard to follow when mixing drivers... But this part is a bit strange, and makes you wonder what else was modified: if I enable SELinux again, … . This is why it is a suspect. Enabling firewalld did get a proper warning message.
But at least I could reproduce the bug where the none driver sets the hostname...
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Steps to reproduce the issue:
Minikube version: 1.18.1 (need to use this version as AWX has a bug related to 1.19)
Docker version: 19.03.15, build 99e3ed8919
Full output of failed command:
[ansible@control-plane ~]$ minikube start
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.18.0-240.22.1.el8_3.x86_64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING IsDockerSystemdCheck]: detected "" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':
: unexpected end of JSON input
[preflight] If you know what you are doing, you can make a check non-fatal with
--ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
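As kubeadm's hint above suggests, the SystemVerification check can be made non-fatal while debugging. This is a workaround, not a fix for the empty 'docker info' output; the flag forwarding shown below is my understanding of minikube's --extra-config mechanism, so verify it against the minikube docs:

```shell
# Build the minikube argument that forwards kubeadm's
# --ignore-preflight-errors list, e.g. for SystemVerification.
extra_config_arg() {
  echo "--extra-config=kubeadm.ignore-preflight-errors=$1"
}

# Example invocation (workaround only):
#   minikube start --driver=none "$(extra_config_arg SystemVerification)"
```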
I have verified that docker is running:
[ansible@control-plane ~]$ sudo docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:44 2021
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:15:19 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.4
GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc:
Version: 1.0.0-rc93
GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
docker-init:
Version: 0.18.0
GitCommit: fec3683
[ansible@control-plane ~]$ sudo docker info
Client:
Debug Mode: false
Server:
Containers: 8
Running: 0
Paused: 0
Stopped: 8
Images: 8
Server Version: 19.03.15
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.18.0-240.22.1.el8_3.x86_64
Operating System: CentOS Linux 8
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 15.46GiB
Name: control-plane.minikube.internal
ID: EW3X:QRSM:A5XC:2HFJ:CNQP:2H3K:2TE4:7CJL:XUZJ:E37A:3LMN:35TR
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[ansible@control-plane ~]$ minikube version
minikube version: v1.18.1
commit: 09ee84d