*
* ==> Audit <==
* |---------|------|----------|------|---------|---------------------|----------|
| Command | Args | Profile  | User | Version |     Start Time      | End Time |
|---------|------|----------|------|---------|---------------------|----------|
| start   |      | minikube | dean | v1.30.1 | 17 May 23 11:58 MDT |          |
|---------|------|----------|------|---------|---------------------|----------|
*
* ==> Last Start <==
* Log file created at: 2023/05/17 11:58:56
Running on machine: msi-ubuntu22
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0517 11:58:56.238230 9407 out.go:296] Setting OutFile to fd 1 ...
I0517 11:58:56.238342 9407 out.go:348] isatty.IsTerminal(1) = true
I0517 11:58:56.238346 9407 out.go:309] Setting ErrFile to fd 2...
I0517 11:58:56.238353 9407 out.go:348] isatty.IsTerminal(2) = true
I0517 11:58:56.238469 9407 root.go:336] Updating PATH: /home/dean/.minikube/bin
W0517 11:58:56.238568 9407 root.go:312] Error reading config file at /home/dean/.minikube/config/config.json: open /home/dean/.minikube/config/config.json: no such file or directory
I0517 11:58:56.238946 9407 out.go:303] Setting JSON to false
I0517 11:58:56.240138 9407 start.go:125] hostinfo: {"hostname":"msi-ubuntu22","uptime":8706,"bootTime":1684337631,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"5.19.0-41-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"7b6a85ab-2e53-4430-ab03-7cbe76f4c9c3"}
I0517 11:58:56.240179 9407 start.go:135] virtualization: kvm host
I0517 11:58:56.241603 9407 out.go:177] 😄 minikube v1.30.1 on Ubuntu 22.04
W0517 11:58:56.242895 9407 preload.go:295] Failed to list preload files: open /home/dean/.minikube/cache/preloaded-tarball: no such file or directory
I0517 11:58:56.242937 9407 notify.go:220] Checking for updates...
I0517 11:58:56.242946 9407 driver.go:375] Setting default libvirt URI to qemu:///system I0517 11:58:56.242963 9407 global.go:111] Querying for installed drivers using PATH=/home/dean/.minikube/bin:/home/dean/anaconda3/condabin:/usr/local/cuda/bin:/home/dean/bin/go1.20.3.linux-amd64/go/bin:/home/dean/src/golang/go3p/bin:/home/dean/bin/jdk-17.0.7/bin:/home/dean/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin:/home/dean/bin/idea-IU-231.8109.175/bin:/home/dean/bin/apache-maven-3.9.1/bin I0517 11:58:56.262644 9407 docker.go:121] docker version: linux-20.10.21: I0517 11:58:56.262719 9407 cli_runner.go:164] Run: docker system info --format "{{json .}}" I0517 11:58:56.281633 9407 info.go:266] docker info: {ID:RLOT:US6P:CM7X:IONC:TWGK:Y2V5:6JFR:XX6B:TZA2:XIL4:4L7R:TESA Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:33 SystemTime:2023-05-17 11:58:56.275247729 -0600 MDT LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.19.0-41-generic OperatingSystem:Ubuntu 22.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33484697600 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:msi-ubuntu22 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0517 11:58:56.281683 9407 docker.go:294] overlay module found I0517 11:58:56.281688 9407 global.go:122] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0517 11:58:56.281777 9407 global.go:122] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:} I0517 11:58:56.287828 9407 global.go:122] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0517 11:58:56.287901 9407 global.go:122] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false 
Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:} I0517 11:58:56.287957 9407 global.go:122] qemu2 default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-x86_64": executable file not found in $PATH Reason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:} I0517 11:58:56.287964 9407 global.go:122] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0517 11:58:56.311668 9407 virtualbox.go:136] virtual box version: 7.0.8r156879 I0517 11:58:56.311678 9407 global.go:122] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:7.0.8r156879 } I0517 11:58:56.311772 9407 global.go:122] vmware default: false priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:} I0517 11:58:56.311789 9407 driver.go:310] not recommending "none" due to default: false I0517 11:58:56.311792 9407 driver.go:310] not recommending "ssh" due to default: false I0517 11:58:56.311801 9407 driver.go:345] Picked: docker I0517 11:58:56.311806 9407 driver.go:346] Alternatives: [virtualbox none ssh] I0517 11:58:56.311810 9407 driver.go:347] Rejects: [kvm2 podman qemu2 vmware] I0517 11:58:56.313085 9407 out.go:177] โœจ Automatically selected the docker driver. Other choices: virtualbox, none, ssh I0517 11:58:56.313653 9407 start.go:295] selected driver: docker I0517 11:58:56.313656 9407 start.go:870] validating driver "docker" against I0517 11:58:56.313662 9407 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0517 11:58:56.313731 9407 cli_runner.go:164] Run: docker system info --format "{{json .}}" I0517 11:58:56.331579 9407 info.go:266] docker info: {ID:RLOT:US6P:CM7X:IONC:TWGK:Y2V5:6JFR:XX6B:TZA2:XIL4:4L7R:TESA Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:33 SystemTime:2023-05-17 11:58:56.325979806 -0600 MDT LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.19.0-41-generic OperatingSystem:Ubuntu 22.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33484697600 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: 
HTTPSProxy: NoProxy: Name:msi-ubuntu22 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0517 11:58:56.331634 9407 start_flags.go:305] no existing cluster config was found, will generate one from the flags I0517 11:58:56.332250 9407 start_flags.go:386] Using suggested 7900MB memory alloc based on sys=31933MB, container=31933MB I0517 11:58:56.332354 9407 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true] I0517 11:58:56.333020 9407 out.go:177] ๐Ÿ“Œ Using Docker driver with root privileges I0517 11:58:56.333563 9407 cni.go:84] Creating CNI manager for "" I0517 11:58:56.333570 9407 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0517 11:58:56.333574 9407 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni I0517 11:58:56.333583 9407 start_flags.go:319] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/dean:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} I0517 11:58:56.334222 9407 out.go:177] ๐Ÿ‘ Starting control plane node minikube in cluster minikube I0517 11:58:56.335240 9407 cache.go:120] Beginning downloading kic base image for docker with docker I0517 11:58:56.335782 9407 out.go:177] ๐Ÿšœ Pulling base image ... 
I0517 11:58:56.336793 9407 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0517 11:58:56.336865 9407 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 in local docker daemon
I0517 11:58:56.351504 9407 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 to local cache
I0517 11:58:56.351612 9407 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 in local cache directory
I0517 11:58:56.351680 9407 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 to local cache
I0517 11:58:56.391331 9407 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.3/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
I0517 11:58:56.391344 9407 cache.go:57] Caching tarball of preloaded images
I0517 11:58:56.391456 9407 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0517 11:58:56.392302 9407 out.go:177] 💾 Downloading Kubernetes v1.26.3 preload ...
I0517 11:58:56.392979 9407 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 ...
I0517 11:58:56.510066 9407 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.3/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4?checksum=md5:b698631b54adb014b111f0258a79e081 -> /home/dean/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
I0517 11:59:51.281549 9407 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 ...
I0517 11:59:51.281598 9407 preload.go:256] verifying checksum of /home/dean/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 ...
I0517 11:59:52.221506 9407 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker
I0517 11:59:52.221741 9407 profile.go:148] Saving config to /home/dean/.minikube/profiles/minikube/config.json ...
I0517 11:59:52.221758 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/profiles/minikube/config.json: {Name:mkedd29adcaf7b54a5e61825fa20395ea80b0dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 11:59:56.116586 9407 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 as a tarball I0517 11:59:56.116637 9407 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 from local cache I0517 12:00:04.231251 9407 cache.go:163] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 from cached tarball I0517 12:00:04.233577 9407 cache.go:193] Successfully downloaded all kic artifacts I0517 12:00:04.240487 9407 start.go:364] acquiring machines lock for minikube: {Name:mk8dc9ffe70e5a9cde7c5922c7fea404da027df3 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0517 12:00:04.241299 9407 start.go:368] acquired machines lock for "minikube" in 80.271ยตs I0517 12:00:04.241469 9407 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/dean:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0517 12:00:04.241784 9407 start.go:125] createHost starting for "" (driver="docker") I0517 12:00:04.245320 9407 out.go:204] ๐Ÿ”ฅ Creating docker container (CPUs=2, Memory=7900MB) ... 
I0517 12:00:04.249046 9407 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0517 12:00:04.249217 9407 client.go:168] LocalClient.Create starting
I0517 12:00:04.249443 9407 main.go:141] libmachine: Creating CA: /home/dean/.minikube/certs/ca.pem
I0517 12:00:04.337913 9407 main.go:141] libmachine: Creating client certificate: /home/dean/.minikube/certs/cert.pem
I0517 12:00:04.795270 9407 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0517 12:00:04.809847 9407 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0517 12:00:04.809898 9407 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs...
I0517 12:00:04.810041 9407 cli_runner.go:164] Run: docker network inspect minikube
W0517 12:00:04.821636 9407 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0517 12:00:04.821649 9407 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I0517 12:00:04.821655 9407 network_create.go:286] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I0517 12:00:04.821700 9407 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0517 12:00:04.834309 9407 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc01bc3a210}
I0517 12:00:04.834704 9407 network_create.go:123] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0517 12:00:04.834922 9407 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0517 12:00:04.959942 9407 network_create.go:107] docker network minikube 192.168.49.0/24 created
I0517 12:00:04.961093 9407 kic.go:117] calculated static IP "192.168.49.2" for the "minikube" container
I0517 12:00:04.961549 9407 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0517 12:00:04.999235 9407 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0517 12:00:05.014027 9407 oci.go:103] Successfully created a docker volume minikube
I0517 12:00:05.014086 9407 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -d /var/lib
I0517 12:00:05.538082 9407 oci.go:107] Successfully prepared a docker volume minikube
I0517 12:00:05.538100 9407 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0517 12:00:05.538117 9407 kic.go:190] Starting extracting preloaded images to volume ...
I0517 12:00:05.538178 9407 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/dean/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -I lz4 -xf /preloaded.tar -C /extractDir
I0517 12:00:07.932048 9407 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/dean/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -I lz4 -xf /preloaded.tar -C /extractDir: (2.39383983s)
I0517 12:00:07.932061 9407 kic.go:199] duration metric: took 2.393942 seconds to extract preloaded images to volume
W0517 12:00:07.932258 9407 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0517 12:00:07.932294 9407 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0517 12:00:07.932333 9407 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'" I0517 12:00:07.950399 9407 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=7900mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 I0517 12:00:08.371375 9407 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}} I0517 12:00:08.387193 9407 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0517 12:00:08.402089 9407 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I0517 12:00:08.464194 9407 oci.go:144] the created container "minikube" has a running status. I0517 12:00:08.464206 9407 kic.go:221] Creating ssh key for kic: /home/dean/.minikube/machines/minikube/id_rsa... I0517 12:00:08.521614 9407 kic_runner.go:191] docker (temp): /home/dean/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0517 12:00:08.560357 9407 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0517 12:00:08.578748 9407 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0517 12:00:08.578759 9407 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0517 12:00:08.614873 9407 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0517 12:00:08.628787 9407 machine.go:88] provisioning docker machine ... I0517 12:00:08.628931 9407 ubuntu.go:169] provisioning hostname "minikube" I0517 12:00:08.628988 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:08.645146 9407 main.go:141] libmachine: Using SSH client type: native I0517 12:00:08.645511 9407 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 49157 } I0517 12:00:08.645519 9407 main.go:141] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0517 12:00:08.800310 9407 main.go:141] libmachine: SSH cmd err, output: : minikube I0517 12:00:08.800607 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:08.819140 9407 main.go:141] libmachine: Using SSH client type: native I0517 12:00:08.819719 9407 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 49157 } I0517 12:00:08.819737 9407 main.go:141] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0517 12:00:08.941052 9407 main.go:141] libmachine: SSH cmd err, output: : I0517 12:00:08.941066 9407 ubuntu.go:175] set auth options {CertDir:/home/dean/.minikube CaCertPath:/home/dean/.minikube/certs/ca.pem CaPrivateKeyPath:/home/dean/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/dean/.minikube/machines/server.pem ServerKeyPath:/home/dean/.minikube/machines/server-key.pem ClientKeyPath:/home/dean/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/dean/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/dean/.minikube} I0517 12:00:08.941082 9407 ubuntu.go:177] setting up certificates I0517 12:00:08.941088 9407 provision.go:83] configureAuth start I0517 12:00:08.941160 9407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0517 12:00:08.958937 9407 provision.go:138] copyHostCerts I0517 12:00:08.958989 9407 exec_runner.go:151] cp: /home/dean/.minikube/certs/ca.pem --> /home/dean/.minikube/ca.pem (1070 bytes) I0517 12:00:08.959103 9407 exec_runner.go:151] cp: /home/dean/.minikube/certs/cert.pem --> /home/dean/.minikube/cert.pem (1115 bytes) I0517 12:00:08.959170 9407 exec_runner.go:151] cp: /home/dean/.minikube/certs/key.pem --> /home/dean/.minikube/key.pem (1675 bytes) I0517 12:00:08.959227 9407 provision.go:112] generating server cert: /home/dean/.minikube/machines/server.pem ca-key=/home/dean/.minikube/certs/ca.pem private-key=/home/dean/.minikube/certs/ca-key.pem org=dean.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0517 12:00:09.118224 9407 provision.go:172] copyRemoteCerts I0517 12:00:09.118684 9407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0517 12:00:09.118718 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:09.133598 9407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/dean/.minikube/machines/minikube/id_rsa Username:docker} I0517 12:00:09.219625 9407 ssh_runner.go:362] scp /home/dean/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes) I0517 12:00:09.232789 9407 ssh_runner.go:362] scp /home/dean/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes) I0517 12:00:09.245227 9407 ssh_runner.go:362] scp /home/dean/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0517 12:00:09.259111 9407 provision.go:86] duration metric: configureAuth took 318.01603ms I0517 12:00:09.259122 9407 ubuntu.go:193] setting minikube options for container-runtime I0517 12:00:09.259356 9407 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3 I0517 12:00:09.259404 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:09.274023 9407 main.go:141] libmachine: Using SSH client type: native I0517 12:00:09.274351 9407 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 49157 } I0517 12:00:09.274356 9407 main.go:141] libmachine: About to run SSH command: df --output=fstype / 
| tail -n 1 I0517 12:00:09.454196 9407 main.go:141] libmachine: SSH cmd err, output: : overlay I0517 12:00:09.454234 9407 ubuntu.go:71] root file system type: overlay I0517 12:00:09.455396 9407 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0517 12:00:09.455700 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:09.505860 9407 main.go:141] libmachine: Using SSH client type: native I0517 12:00:09.506312 9407 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 49157 } I0517 12:00:09.506375 9407 main.go:141] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0517 12:00:09.697031 9407 main.go:141] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0517 12:00:09.697969 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:09.717893 9407 main.go:141] libmachine: Using SSH client type: native I0517 12:00:09.718237 9407 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 49157 } I0517 12:00:09.718284 9407 main.go:141] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0517 12:00:11.376480 9407 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2023-03-27 16:16:18.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2023-05-17 18:00:09.694227009 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com -After=network-online.target docker.socket firewalld.service containerd.service time-set.target -Wants=network-online.target containerd.service +BindsTo=containerd.service +After=network-online.target firewalld.service containerd.service +Wants=network-online.target Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutStartSec=0 -RestartSec=2 -Restart=always +Restart=on-failure -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. 
-StartLimitBurst=3 -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. 
Executing: /lib/systemd/systemd-sysv-install enable docker I0517 12:00:11.376495 9407 machine.go:91] provisioned docker machine in 2.747700934s I0517 12:00:11.376501 9407 client.go:171] LocalClient.Create took 7.127280215s I0517 12:00:11.376511 9407 start.go:167] duration metric: libmachine.API.Create for "minikube" took 7.127466455s I0517 12:00:11.376519 9407 start.go:300] post-start starting for "minikube" (driver="docker") I0517 12:00:11.376523 9407 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0517 12:00:11.376574 9407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0517 12:00:11.376606 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:11.391878 9407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/dean/.minikube/machines/minikube/id_rsa Username:docker} I0517 12:00:11.515326 9407 ssh_runner.go:195] Run: cat /etc/os-release I0517 12:00:11.523606 9407 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0517 12:00:11.523646 9407 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0517 12:00:11.523673 9407 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0517 12:00:11.523688 9407 info.go:137] Remote host: Ubuntu 20.04.5 LTS I0517 12:00:11.523711 9407 filesync.go:126] Scanning /home/dean/.minikube/addons for local assets ... I0517 12:00:11.523888 9407 filesync.go:126] Scanning /home/dean/.minikube/files for local assets ... I0517 12:00:11.523957 9407 start.go:303] post-start completed in 147.427545ms I0517 12:00:11.524856 9407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0517 12:00:11.543481 9407 profile.go:148] Saving config to /home/dean/.minikube/profiles/minikube/config.json ... 
I0517 12:00:11.543706 9407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0517 12:00:11.543740 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:11.559188 9407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/dean/.minikube/machines/minikube/id_rsa Username:docker} I0517 12:00:11.674863 9407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0517 12:00:11.696403 9407 start.go:128] duration metric: createHost completed in 7.454582144s I0517 12:00:11.696631 9407 start.go:83] releasing machines lock for "minikube", held for 7.455306474s I0517 12:00:11.697029 9407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0517 12:00:11.742930 9407 ssh_runner.go:195] Run: cat /version.json I0517 12:00:11.742977 9407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0517 12:00:11.742986 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:11.743371 9407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0517 12:00:11.759720 9407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/dean/.minikube/machines/minikube/id_rsa Username:docker} I0517 12:00:11.759888 9407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/dean/.minikube/machines/minikube/id_rsa Username:docker} I0517 12:00:12.103426 9407 ssh_runner.go:195] Run: systemctl --version I0517 12:00:12.113216 9407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*" I0517 12:00:12.118115 9407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ; I0517 12:00:12.207210 9407 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found I0517 12:00:12.207420 9407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ; I0517 12:00:12.248932 9407 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s) I0517 12:00:12.248965 9407 start.go:481] detecting cgroup driver to use... I0517 12:00:12.249025 9407 detect.go:199] detected "systemd" cgroup driver on host os I0517 12:00:12.249224 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0517 12:00:12.289678 9407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml" I0517 12:00:12.313945 9407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml" I0517 12:00:12.338576 9407 containerd.go:145] configuring containerd to use "systemd" as cgroup driver... 
I0517 12:00:12.338734 9407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0517 12:00:12.363285 9407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0517 12:00:12.387508 9407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0517 12:00:12.411865 9407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0517 12:00:12.436544 9407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0517 12:00:12.459281 9407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0517 12:00:12.483671 9407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0517 12:00:12.504666 9407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0517 12:00:12.525814 9407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0517 12:00:12.673918 9407 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0517 12:00:12.730901 9407 start.go:481] detecting cgroup driver to use...
I0517 12:00:12.730925 9407 detect.go:199] detected "systemd" cgroup driver on host os
I0517 12:00:12.730978 9407 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0517 12:00:12.737817 9407 cruntime.go:276] skipping containerd shutdown because we are bound to it
I0517 12:00:12.737860 9407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0517 12:00:12.744600 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml"
I0517 12:00:12.753273 9407 ssh_runner.go:195] Run: which cri-dockerd
I0517 12:00:12.755081 9407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0517 12:00:12.759875 9407 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0517 12:00:12.768297 9407 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0517 12:00:12.803557 9407 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0517 12:00:12.839497 9407 docker.go:538] configuring docker to use "systemd" as cgroup driver...
I0517 12:00:12.839508 9407 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
I0517 12:00:12.848047 9407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0517 12:00:12.885580 9407 ssh_runner.go:195] Run: sudo systemctl restart docker
I0517 12:00:15.154105 9407 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.268506312s)
I0517 12:00:15.154163 9407 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0517 12:00:15.265767 9407 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0517 12:00:15.399030 9407 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0517 12:00:15.438272 9407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0517 12:00:15.474049 9407 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0517 12:00:15.481743 9407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0517 12:00:15.522402 9407 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0517 12:00:15.561423 9407 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0517 12:00:15.561499 9407 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0517 12:00:15.563648 9407 start.go:549] Will wait 60s for crictl version
I0517 12:00:15.563691 9407 ssh_runner.go:195] Run: which crictl
I0517 12:00:15.566143 9407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0517 12:00:15.586075 9407 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 23.0.2
RuntimeApiVersion: v1alpha2
I0517 12:00:15.586122 9407 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0517 12:00:15.600566 9407 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0517 12:00:15.616559 9407 out.go:204] 🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
I0517 12:00:15.616630 9407 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0517 12:00:15.630042 9407 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0517 12:00:15.632189 9407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0517 12:00:15.638581 9407 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker I0517 12:00:15.638636 9407 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0517 12:00:15.650292 9407 docker.go:639] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.26.3 registry.k8s.io/kube-controller-manager:v1.26.3 registry.k8s.io/kube-scheduler:v1.26.3 registry.k8s.io/kube-proxy:v1.26.3 registry.k8s.io/etcd:3.5.6-0 registry.k8s.io/pause:3.9 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0517 12:00:15.650301 9407 docker.go:569] Images already preloaded, skipping extraction I0517 12:00:15.650346 9407 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0517 12:00:15.662055 9407 docker.go:639] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.26.3 registry.k8s.io/kube-scheduler:v1.26.3 registry.k8s.io/kube-controller-manager:v1.26.3 registry.k8s.io/kube-proxy:v1.26.3 registry.k8s.io/etcd:3.5.6-0 registry.k8s.io/pause:3.9 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0517 12:00:15.662063 9407 cache_images.go:84] Images are preloaded, skipping loading I0517 12:00:15.662104 9407 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0517 12:00:15.676781 9407 cni.go:84] Creating CNI manager for "" I0517 12:00:15.676813 9407 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0517 12:00:15.677051 9407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0517 12:00:15.677205 9407 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false 
KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]} I0517 12:00:15.677468 9407 kubeadm.go:177] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/cri-dockerd.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.26.3 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd hairpinMode: hairpin-veth runtimeRequestTimeout: 15m clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0517 12:00:15.677774 9407 kubeadm.go:968] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0517 12:00:15.677823 9407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3 I0517 12:00:15.683307 9407 binaries.go:44] Found k8s binaries, skipping transfer I0517 12:00:15.683344 9407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0517 12:00:15.687706 9407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes) I0517 12:00:15.696440 9407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 
bytes) I0517 12:00:15.704853 9407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes) I0517 12:00:15.713218 9407 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0517 12:00:15.714969 9407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0517 12:00:15.720680 9407 certs.go:56] Setting up /home/dean/.minikube/profiles/minikube for IP: 192.168.49.2 I0517 12:00:15.720696 9407 certs.go:186] acquiring lock for shared ca certs: {Name:mk87c3567c44449908ed5e84f83353ad4393691e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:15.720794 9407 certs.go:200] generating minikubeCA CA: /home/dean/.minikube/ca.key I0517 12:00:15.884946 9407 crypto.go:156] Writing cert to /home/dean/.minikube/ca.crt ... I0517 12:00:15.884981 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/ca.crt: {Name:mke1bd894ed7e05bcc6a1e6ce9b744f73dda348c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:15.885381 9407 crypto.go:164] Writing key to /home/dean/.minikube/ca.key ... I0517 12:00:15.885401 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/ca.key: {Name:mk4ea2fa2539c931074409ce1a5a72257628ebce Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:15.885669 9407 certs.go:200] generating proxyClientCA CA: /home/dean/.minikube/proxy-client-ca.key I0517 12:00:16.112292 9407 crypto.go:156] Writing cert to /home/dean/.minikube/proxy-client-ca.crt ... I0517 12:00:16.112299 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/proxy-client-ca.crt: {Name:mkb5c669cfd07504c4136db9e0f720d509854f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.112408 9407 crypto.go:164] Writing key to /home/dean/.minikube/proxy-client-ca.key ... I0517 12:00:16.112412 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/proxy-client-ca.key: {Name:mkf1bb247fbe993c310256433e06eac1f67a2e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.112487 9407 certs.go:315] generating minikube-user signed cert: /home/dean/.minikube/profiles/minikube/client.key I0517 12:00:16.112493 9407 crypto.go:68] Generating cert /home/dean/.minikube/profiles/minikube/client.crt with IP's: [] I0517 12:00:16.171021 9407 crypto.go:156] Writing cert to /home/dean/.minikube/profiles/minikube/client.crt ... I0517 12:00:16.171028 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/profiles/minikube/client.crt: {Name:mkc7bdbea72299e106439d828e474f37849db545 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.171130 9407 crypto.go:164] Writing key to /home/dean/.minikube/profiles/minikube/client.key ... I0517 12:00:16.171135 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/profiles/minikube/client.key: {Name:mk9efe6894651e1d2674817ce97ff14b7a07e8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.171184 9407 certs.go:315] generating minikube signed cert: /home/dean/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0517 12:00:16.171195 9407 crypto.go:68] Generating cert /home/dean/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0517 12:00:16.353726 9407 crypto.go:156] Writing cert to /home/dean/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... 
I0517 12:00:16.353734 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk7e3bf9a4dfb80f9f9d2325011db12715749a13 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.353832 9407 crypto.go:164] Writing key to /home/dean/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... I0517 12:00:16.353836 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk87dc9dd81a966df8842945cb13224056113757 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.353906 9407 certs.go:333] copying /home/dean/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/dean/.minikube/profiles/minikube/apiserver.crt I0517 12:00:16.353972 9407 certs.go:337] copying /home/dean/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/dean/.minikube/profiles/minikube/apiserver.key I0517 12:00:16.354002 9407 certs.go:315] generating aggregator signed cert: /home/dean/.minikube/profiles/minikube/proxy-client.key I0517 12:00:16.354012 9407 crypto.go:68] Generating cert /home/dean/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0517 12:00:16.528030 9407 crypto.go:156] Writing cert to /home/dean/.minikube/profiles/minikube/proxy-client.crt ... I0517 12:00:16.528037 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/profiles/minikube/proxy-client.crt: {Name:mk425158210b0ec02227b1c60c6ce3637e107536 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.528120 9407 crypto.go:164] Writing key to /home/dean/.minikube/profiles/minikube/proxy-client.key ... I0517 12:00:16.528124 9407 lock.go:35] WriteFile acquiring /home/dean/.minikube/profiles/minikube/proxy-client.key: {Name:mkbc0fa2209a5aad8372c433b652d3cf9bb7ac71 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0517 12:00:16.528226 9407 certs.go:401] found cert: /home/dean/.minikube/certs/home/dean/.minikube/certs/ca-key.pem (1675 bytes) I0517 12:00:16.528244 9407 certs.go:401] found cert: /home/dean/.minikube/certs/home/dean/.minikube/certs/ca.pem (1070 bytes) I0517 12:00:16.528265 9407 certs.go:401] found cert: /home/dean/.minikube/certs/home/dean/.minikube/certs/cert.pem (1115 bytes) I0517 12:00:16.528281 9407 certs.go:401] found cert: /home/dean/.minikube/certs/home/dean/.minikube/certs/key.pem (1675 bytes) I0517 12:00:16.544509 9407 ssh_runner.go:362] scp /home/dean/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0517 12:00:16.570940 9407 ssh_runner.go:362] scp /home/dean/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0517 12:00:16.582901 9407 ssh_runner.go:362] scp /home/dean/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0517 12:00:16.598044 9407 ssh_runner.go:362] scp /home/dean/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0517 12:00:16.618087 9407 ssh_runner.go:362] scp /home/dean/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0517 12:00:16.644918 9407 ssh_runner.go:362] scp /home/dean/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0517 12:00:16.673466 9407 ssh_runner.go:362] scp /home/dean/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0517 12:00:16.701589 9407 ssh_runner.go:362] scp /home/dean/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0517 12:00:16.736428 9407 ssh_runner.go:362] scp 
/home/dean/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0517 12:00:16.772756 9407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0517 12:00:16.798793 9407 ssh_runner.go:195] Run: openssl version I0517 12:00:16.837358 9407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0517 12:00:16.866542 9407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0517 12:00:16.871298 9407 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 May 17 18:00 /usr/share/ca-certificates/minikubeCA.pem I0517 12:00:16.871368 9407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0517 12:00:16.878899 9407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0517 12:00:16.884827 9407 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/dean:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} I0517 12:00:16.884900 9407 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0517 12:00:16.895704 9407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0517 12:00:16.900251 9407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0517 12:00:16.904369 9407 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver I0517 12:00:16.904404 9407 ssh_runner.go:195] 
Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0517 12:00:16.908587 9407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0517 12:00:16.908604 9407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0517 12:00:16.976672 9407 kubeadm.go:322] W0517 18:00:16.976226 1311 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0517 12:00:17.004175 9407 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.19.0-41-generic\n", err: exit status 1
I0517 12:00:17.088006 9407 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0517 12:02:14.269256 9407 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0517 12:02:14.269838 9407 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0517 12:02:14.279896 9407 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
I0517 12:02:14.280039 9407 kubeadm.go:322] [preflight] Running pre-flight checks
I0517 12:02:14.280377 9407 kubeadm.go:322] [preflight] The system verification failed.
Printing the output from the verification: I0517 12:02:14.280572 9407 kubeadm.go:322] KERNEL_VERSION: 5.19.0-41-generic I0517 12:02:14.280685 9407 kubeadm.go:322] OS: Linux I0517 12:02:14.280893 9407 kubeadm.go:322] CGROUPS_CPU: enabled I0517 12:02:14.281069 9407 kubeadm.go:322] CGROUPS_CPUSET: enabled I0517 12:02:14.281225 9407 kubeadm.go:322] CGROUPS_DEVICES: enabled I0517 12:02:14.281432 9407 kubeadm.go:322] CGROUPS_FREEZER: enabled I0517 12:02:14.281609 9407 kubeadm.go:322] CGROUPS_MEMORY: enabled I0517 12:02:14.281708 9407 kubeadm.go:322] CGROUPS_PIDS: enabled I0517 12:02:14.281822 9407 kubeadm.go:322] CGROUPS_HUGETLB: enabled I0517 12:02:14.281932 9407 kubeadm.go:322] CGROUPS_IO: enabled I0517 12:02:14.282119 9407 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster I0517 12:02:14.282387 9407 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection I0517 12:02:14.282644 9407 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0517 12:02:14.282817 9407 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0517 12:02:14.293576 9407 out.go:204] โ–ช Generating certificates and keys ... I0517 12:02:14.293663 9407 kubeadm.go:322] [certs] Using existing ca certificate authority I0517 12:02:14.293717 9407 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk I0517 12:02:14.293783 9407 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key I0517 12:02:14.293834 9407 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key I0517 12:02:14.293881 9407 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key I0517 12:02:14.293920 9407 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key I0517 12:02:14.293971 9407 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key I0517 12:02:14.294062 9407 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] I0517 12:02:14.294104 9407 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key I0517 12:02:14.294194 9407 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] I0517 12:02:14.294246 9407 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key I0517 12:02:14.294307 9407 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key I0517 12:02:14.294345 9407 kubeadm.go:322] [certs] Generating "sa" key and public key I0517 12:02:14.294392 9407 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0517 12:02:14.294432 9407 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file I0517 12:02:14.294483 9407 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file I0517 12:02:14.294537 9407 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0517 12:02:14.294583 9407 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file I0517 12:02:14.294663 9407 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0517 12:02:14.294730 9407 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0517 12:02:14.294762 9407 kubeadm.go:322] [kubelet-start] Starting the kubelet I0517 12:02:14.294814 9407 
kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0517 12:02:14.295480 9407 out.go:204] โ–ช Booting up control plane ... I0517 12:02:14.295552 9407 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver" I0517 12:02:14.295619 9407 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0517 12:02:14.295674 9407 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler" I0517 12:02:14.295738 9407 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0517 12:02:14.295865 9407 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s I0517 12:02:14.295905 9407 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed. I0517 12:02:14.295961 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:02:14.296116 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:02:14.296170 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:02:14.296325 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:02:14.296382 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:02:14.296544 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:02:14.296599 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:02:14.296749 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:02:14.296802 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:02:14.296979 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. 
I0517 12:02:14.296982 9407 kubeadm.go:322] I0517 12:02:14.297014 9407 kubeadm.go:322] Unfortunately, an error has occurred: I0517 12:02:14.297044 9407 kubeadm.go:322] timed out waiting for the condition I0517 12:02:14.297047 9407 kubeadm.go:322] I0517 12:02:14.297074 9407 kubeadm.go:322] This error is likely caused by: I0517 12:02:14.297098 9407 kubeadm.go:322] - The kubelet is not running I0517 12:02:14.297184 9407 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) I0517 12:02:14.297187 9407 kubeadm.go:322] I0517 12:02:14.297270 9407 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: I0517 12:02:14.297298 9407 kubeadm.go:322] - 'systemctl status kubelet' I0517 12:02:14.297327 9407 kubeadm.go:322] - 'journalctl -xeu kubelet' I0517 12:02:14.297329 9407 kubeadm.go:322] I0517 12:02:14.297410 9407 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime. I0517 12:02:14.297475 9407 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI. I0517 12:02:14.297542 9407 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl: I0517 12:02:14.297621 9407 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' I0517 12:02:14.297679 9407 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with: I0517 12:02:14.297766 9407 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' W0517 12:02:14.297926 9407 out.go:239] ๐Ÿ’ข initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.26.3 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.19.0-41-generic OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled CGROUPS_IO: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. 
[kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0517 18:00:16.976226 1311 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.19.0-41-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher I0517 12:02:14.297996 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force" I0517 12:02:16.034932 9407 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.73692427s) I0517 12:02:16.034978 9407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0517 12:02:16.041107 9407 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver I0517 12:02:16.041145 9407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0517 12:02:16.045232 9407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0517 12:02:16.045249 9407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml 
--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0517 12:02:16.071893 9407 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3 I0517 12:02:16.071924 9407 kubeadm.go:322] [preflight] Running pre-flight checks I0517 12:02:16.093546 9407 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification: I0517 12:02:16.093589 9407 kubeadm.go:322] KERNEL_VERSION: 5.19.0-41-generic I0517 12:02:16.093612 9407 kubeadm.go:322] OS: Linux I0517 12:02:16.093643 9407 kubeadm.go:322] CGROUPS_CPU: enabled I0517 12:02:16.093674 9407 kubeadm.go:322] CGROUPS_CPUSET: enabled I0517 12:02:16.093706 9407 kubeadm.go:322] CGROUPS_DEVICES: enabled I0517 12:02:16.093738 9407 kubeadm.go:322] CGROUPS_FREEZER: enabled I0517 12:02:16.093770 9407 kubeadm.go:322] CGROUPS_MEMORY: enabled I0517 12:02:16.093800 9407 kubeadm.go:322] CGROUPS_PIDS: enabled I0517 12:02:16.093832 9407 kubeadm.go:322] CGROUPS_HUGETLB: enabled I0517 12:02:16.093870 9407 kubeadm.go:322] CGROUPS_IO: enabled I0517 12:02:16.133076 9407 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster I0517 12:02:16.133147 9407 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection I0517 12:02:16.133218 9407 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0517 12:02:16.210008 9407 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0517 12:02:16.210854 9407 out.go:204] โ–ช Generating certificates and keys ... 
I0517 12:02:16.210925 9407 kubeadm.go:322] [certs] Using existing ca certificate authority I0517 12:02:16.210974 9407 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk I0517 12:02:16.211024 9407 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk I0517 12:02:16.211066 9407 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority I0517 12:02:16.211113 9407 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk I0517 12:02:16.211158 9407 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority I0517 12:02:16.211205 9407 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk I0517 12:02:16.211249 9407 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk I0517 12:02:16.211306 9407 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk I0517 12:02:16.211375 9407 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk I0517 12:02:16.211403 9407 kubeadm.go:322] [certs] Using the existing "sa" key I0517 12:02:16.211439 9407 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0517 12:02:16.410725 9407 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file I0517 12:02:16.491300 9407 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file I0517 12:02:16.589541 9407 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0517 12:02:16.995991 9407 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file I0517 12:02:17.004403 9407 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0517 12:02:17.004924 9407 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0517 12:02:17.004955 9407 kubeadm.go:322] [kubelet-start] Starting the kubelet I0517 12:02:17.078035 9407 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0517 12:02:17.087807 9407 out.go:204] โ–ช Booting up control plane ... I0517 12:02:17.087918 9407 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver" I0517 12:02:17.087989 9407 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0517 12:02:17.088044 9407 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler" I0517 12:02:17.088107 9407 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0517 12:02:17.088240 9407 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s I0517 12:02:57.085630 9407 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed. I0517 12:02:57.087512 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:02:57.088984 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:03:02.090447 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. 
I0517 12:03:02.091943 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:03:12.092102 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:03:12.093634 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:03:32.092045 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:03:32.092188 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:04:12.092802 9407 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy. I0517 12:04:12.092982 9407 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. I0517 12:04:12.092985 9407 kubeadm.go:322] I0517 12:04:12.093012 9407 kubeadm.go:322] Unfortunately, an error has occurred: I0517 12:04:12.093037 9407 kubeadm.go:322] timed out waiting for the condition I0517 12:04:12.093040 9407 kubeadm.go:322] I0517 12:04:12.093062 9407 kubeadm.go:322] This error is likely caused by: I0517 12:04:12.093094 9407 kubeadm.go:322] - The kubelet is not running I0517 12:04:12.093165 9407 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) I0517 12:04:12.093167 9407 kubeadm.go:322] I0517 12:04:12.093236 9407 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: I0517 12:04:12.093257 9407 kubeadm.go:322] - 'systemctl status kubelet' I0517 12:04:12.093277 9407 kubeadm.go:322] - 'journalctl -xeu kubelet' I0517 12:04:12.093279 9407 kubeadm.go:322] I0517 12:04:12.093349 9407 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime. I0517 12:04:12.093403 9407 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI. I0517 12:04:12.093471 9407 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl: I0517 12:04:12.093548 9407 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' I0517 12:04:12.093603 9407 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with: I0517 12:04:12.093657 9407 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' I0517 12:04:12.095136 9407 kubeadm.go:322] W0517 18:02:16.068352 5170 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! 
I0517 12:04:12.095285 9407 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.19.0-41-generic\n", err: exit status 1 I0517 12:04:12.095356 9407 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0517 12:04:12.095420 9407 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster I0517 12:04:12.095500 9407 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher I0517 12:04:12.105768 9407 kubeadm.go:403] StartCluster complete in 3m55.22094255s I0517 12:04:12.105791 9407 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]} I0517 12:04:12.105836 9407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0517 12:04:12.121596 9407 cri.go:87] found id: "" I0517 12:04:12.121604 9407 logs.go:277] 0 containers: [] W0517 12:04:12.121609 9407 logs.go:279] No container was found matching "kube-apiserver" I0517 12:04:12.121612 9407 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]} I0517 12:04:12.121651 9407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd I0517 12:04:12.136852 9407 cri.go:87] found id: "" I0517 12:04:12.136860 9407 logs.go:277] 0 containers: [] W0517 12:04:12.136865 9407 logs.go:279] No container was found matching "etcd" I0517 12:04:12.136868 9407 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]} I0517 12:04:12.136913 9407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns I0517 12:04:12.151852 9407 cri.go:87] found id: "" I0517 12:04:12.151861 9407 logs.go:277] 0 containers: [] W0517 12:04:12.151865 9407 logs.go:279] No container was found matching "coredns" I0517 12:04:12.151869 9407 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]} I0517 12:04:12.151910 9407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0517 12:04:12.166839 9407 cri.go:87] found id: "" I0517 12:04:12.166847 9407 logs.go:277] 0 containers: [] W0517 12:04:12.166851 9407 logs.go:279] No container was found matching "kube-scheduler" I0517 12:04:12.166855 9407 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]} I0517 12:04:12.166897 9407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy I0517 12:04:12.181774 9407 cri.go:87] found id: "" I0517 12:04:12.181785 9407 logs.go:277] 0 containers: [] W0517 12:04:12.181793 9407 logs.go:279] No container was found matching "kube-proxy" I0517 12:04:12.181797 9407 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]} I0517 12:04:12.181855 9407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0517 12:04:12.196946 9407 cri.go:87] found id: "" I0517 12:04:12.196955 9407 logs.go:277] 0 containers: [] W0517 12:04:12.196961 9407 logs.go:279] No container was found matching "kube-controller-manager" I0517 12:04:12.196973 9407 cri.go:52] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]} I0517 12:04:12.197015 9407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet I0517 12:04:12.211785 9407 cri.go:87] found id: "" I0517 12:04:12.211793 9407 logs.go:277] 0 containers: [] W0517 12:04:12.211797 9407 logs.go:279] No container was found matching 
"kindnet" I0517 12:04:12.211804 9407 logs.go:123] Gathering logs for describe nodes ... I0517 12:04:12.211812 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0517 12:04:12.282037 9407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: E0517 18:04:12.274746 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.275029 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.276299 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.277630 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.278833 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** E0517 18:04:12.274746 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.275029 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.276299 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.277630 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused E0517 18:04:12.278833 8509 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I0517 12:04:12.282045 9407 logs.go:123] Gathering logs for Docker ... I0517 12:04:12.282050 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400" I0517 12:04:12.327270 9407 logs.go:123] Gathering logs for container status ... I0517 12:04:12.327282 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0517 12:04:12.344489 9407 logs.go:123] Gathering logs for kubelet ... I0517 12:04:12.344499 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0517 12:04:12.395549 9407 logs.go:123] Gathering logs for dmesg ... 
I0517 12:04:12.395559 9407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" W0517 12:04:12.405211 9407 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.26.3 [preflight] Running pre-flight checks [preflight] The system verification failed. Printing the output from the verification: KERNEL_VERSION: 5.19.0-41-generic OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled CGROUPS_IO: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0517 18:02:16.068352 5170 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! 
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.19.0-41-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0517 12:04:12.405226 9407 out.go:239] W0517 12:04:12.405413 9407 out.go:239] ๐Ÿ’ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.26.3 [preflight] Running pre-flight checks [preflight] The system verification failed. Printing the output from the verification: KERNEL_VERSION: 5.19.0-41-generic OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled CGROUPS_IO: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as 
static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0517 18:02:16.068352 5170 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! 
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.19.0-41-generic\n", err: exit status 1 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0517 12:04:12.405508 9407 out.go:239] W0517 12:04:12.406682 9407 out.go:239] โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ โ”‚ โ”‚ ๐Ÿ˜ฟ If the above advice does not help, please let us know: โ”‚ โ”‚ ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose โ”‚ โ”‚ โ”‚ โ”‚ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. โ”‚ โ”‚ โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ I0517 12:04:12.408798 9407 out.go:177] W0517 12:04:12.409430 9407 out.go:239] โŒ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.26.3 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification:
KERNEL_VERSION: 5.19.0-41-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0517 18:02:16.068352 5170 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.19.0-41-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0517 12:04:12.409522 9407 out.go:239] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0517 12:04:12.409560 9407 out.go:239] 🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172
I0517 12:04:12.410227 9407 out.go:177]
*
* ==> Docker <==
*
-- Logs begin at Wed 2023-05-17 18:00:08 UTC, end at Wed 2023-05-17 18:09:38 UTC.
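Before the raw Docker and kubelet logs below, here is a minimal sketch of the checks suggested by the kubeadm and minikube output above. Every command is quoted from that output; the only assumptions are that they are run inside the minikube node (for example via `minikube ssh`) and that the cri-dockerd socket path is the one shown in this log:

  $ systemctl status kubelet
  $ journalctl -xeu kubelet
  $ crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
  $ crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID
  # retry with the kubelet cgroup driver pinned to systemd, as the Suggestion line above proposes
  $ minikube start --extra-config=kubelet.cgroup-driver=systemd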
-- May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox format\": invalid key: \"format\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"format\\\"\". Proceed without further sandbox information." May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." 
May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." 
May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." 
May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" May 17 18:02:15 minikube cri-dockerd[1051]: time="2023-05-17T18:02:15Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" May 17 18:02:16 minikube cri-dockerd[1051]: time="2023-05-17T18:02:16Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" May 17 18:02:16 minikube cri-dockerd[1051]: time="2023-05-17T18:02:16Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" May 17 18:02:16 minikube cri-dockerd[1051]: time="2023-05-17T18:02:16Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" May 17 18:02:17 minikube systemd[1]: docker.service: Failed to add control inotify watch descriptor for control group /system.slice/docker.service: No space left on device May 17 18:02:17 minikube systemd[1]: docker.service: Failed to add memory inotify watch descriptor for control group /system.slice/docker.service: No space left on device May 17 18:02:17 minikube systemd[1]: cri-docker.service: Failed to add control inotify watch descriptor for control group /system.slice/cri-docker.service: No space left on device May 17 18:02:17 minikube systemd[1]: cri-docker.service: Failed to add memory inotify watch descriptor for control group /system.slice/cri-docker.service: No space left on device May 17 18:02:17 minikube systemd[1]: cri-docker.service: Failed to add control inotify watch descriptor for control group /system.slice/cri-docker.service: No space left on device May 17 18:02:17 minikube systemd[1]: cri-docker.service: Failed to add memory inotify watch descriptor for control group /system.slice/cri-docker.service: No space left on device May 17 18:02:17 minikube systemd[1]: docker.service: Failed to add control inotify watch 
descriptor for control group /system.slice/docker.service: No space left on device May 17 18:02:17 minikube systemd[1]: docker.service: Failed to add memory inotify watch descriptor for control group /system.slice/docker.service: No space left on device * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * * ==> describe nodes <== * * ==> dmesg <== * [May17 15:33] x86/cpu: SGX disabled by BIOS. [ +0.030046] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. [ +0.000000] #7 #8 #9 #10 #11 [ +0.023088] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ +1.772451] hpet_acpi_add: no address or irqs in _CRS [ +0.013048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. [ +0.000283] platform eisa.0: EISA: Cannot allocate resource for mainboard [ +0.000004] platform eisa.0: Cannot allocate resource for EISA slot 1 [ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 2 [ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 3 [ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 4 [ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 5 [ +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6 [ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 7 [ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 8 [ +0.253282] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:04) [ +0.024660] usb: port power management may be unreliable [ +3.103721] nvidia_fs: loading out-of-tree module taints kernel. [ +0.040814] nvidia-fs:warning: error retrieving numa node for device 0000:02:00.0 [ +0.000006] nvidia-fs:warning: error retrieving numa node for device 0000:6e:00.0 [ +0.147190] soc_button_array ACPI0011:00: Unknown button index 0 upage 01 usage c6, ignoring [ +0.202447] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver [ +0.287467] thermal thermal_zone2: failed to read out thermal zone (-61) [ +0.215359] ucsi_acpi USBC000:00: unknown error 4100 [ +0.000005] ucsi_acpi USBC000:00: UCSI_GET_PDOS failed (-5) [ +0.092895] nvidia: module license 'NVIDIA' taints kernel. [ +0.000002] Disabling lock debugging due to kernel taint [ +0.164903] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 530.30.02 Wed Feb 22 04:11:39 UTC 2023 [ +0.278613] ucsi_acpi USBC000:00: unknown error 4100 [ +0.000019] ucsi_acpi USBC000:00: UCSI_GET_PDOS failed (-5) [ +0.193231] ACPI Warning: \_SB.PCI0.RP05.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20220331/nsarguments-61) [ +0.121092] psmouse serio1: Failed to deactivate mouse on isa0060/serio1: -71 [ +0.098916] psmouse serio1: Failed to enable mouse on isa0060/serio1 [ +0.021350] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint. [ +0.247783] VBoxNetFlt: Successfully started. [ +0.008219] VBoxNetAdp: Successfully started. 
[ +0.910206] psmouse serio1: Failed to enable mouse on isa0060/serio1 [May17 15:34] kauditd_printk_skb: 40 callbacks suppressed [ +18.511603] kauditd_printk_skb: 7 callbacks suppressed [May17 16:29] ACPI Error: No handler for Region [VRTC] (000000009b9eb358) [SystemCMOS] (20220331/evregion-130) [ +0.000046] ACPI Error: Region SystemCMOS (ID=5) has no handler (20220331/exfldio-261) [ +0.000050] No Local Variables are initialized for Method [_Q9A] [ +0.000012] No Arguments are initialized for method [_Q9A] [ +0.000015] ACPI Error: Aborting method \_SB.PCI0.LPCB.EC._Q9A due to previous error (AE_NOT_EXIST) (20220331/psparse-529) [May17 17:29] ACPI Error: No handler for Region [VRTC] (000000009b9eb358) [SystemCMOS] (20220331/evregion-130) [ +0.000048] ACPI Error: Region SystemCMOS (ID=5) has no handler (20220331/exfldio-261) [ +0.000051] No Local Variables are initialized for Method [_Q9A] [ +0.000011] No Arguments are initialized for method [_Q9A] [ +0.000016] ACPI Error: Aborting method \_SB.PCI0.LPCB.EC._Q9A due to previous error (AE_NOT_EXIST) (20220331/psparse-529) * * ==> kernel <== * 18:09:39 up 2:35, 0 users, load average: 2.20, 1.83, 1.35 Linux minikube 5.19.0-41-generic #42~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 18 17:40:00 UTC 2 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.5 LTS" * * ==> kubelet <== * -- Logs begin at Wed 2023-05-17 18:00:08 UTC, end at Wed 2023-05-17 18:09:39 UTC. -- May 17 18:09:13 minikube kubelet[16431]: W0517 18:09:13.957965 16431 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:13 minikube kubelet[16431]: W0517 18:09:13.957974 16431 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.958008 16431 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.958016 16431 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:13 minikube kubelet[16431]: I0517 18:09:13.961087 16431 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="docker" version="23.0.2" apiVersion="v1" May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.961249 16431 probe.go:234] Error recursively adding watch: no space left on device May 17 18:09:13 minikube kubelet[16431]: I0517 18:09:13.961411 16431 server.go:1186] "Started kubelet" May 17 18:09:13 minikube kubelet[16431]: I0517 18:09:13.961481 16431 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.961632 16431 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.176000a16d614292", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.May, 17, 18, 9, 13, 961390738, time.Local), LastTimestamp:time.Date(2023, time.May, 17, 18, 9, 13, 961390738, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping) May 17 18:09:13 minikube kubelet[16431]: I0517 18:09:13.962427 16431 server.go:451] "Adding debug handlers to kubelet server" May 17 18:09:13 minikube kubelet[16431]: I0517 18:09:13.962739 16431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.962828 16431 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"minikube\" not found" May 17 18:09:13 minikube kubelet[16431]: I0517 18:09:13.962834 16431 volume_manager.go:293] "Starting Kubelet Volume Manager" May 17 18:09:13 minikube kubelet[16431]: I0517 18:09:13.962851 16431 desired_state_of_world_populator.go:151] "Desired state populator starts to run" May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.963112 16431 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:13 minikube kubelet[16431]: W0517 18:09:13.963183 16431 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.963418 16431 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:13 minikube kubelet[16431]: E0517 18:09:13.969249 16431 kubelet.go:1449] "Failed to start cAdvisor" err="inotify_add_watch /sys/fs/cgroup: no space left on device" May 17 18:09:13 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 18:09:13 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 18:09:14 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 193. 
May 17 18:09:14 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent. May 17 18:09:14 minikube systemd[1]: kubelet.service: Failed to add control inotify watch descriptor for control group /system.slice/kubelet.service: No space left on device May 17 18:09:14 minikube systemd[1]: kubelet.service: Failed to add memory inotify watch descriptor for control group /system.slice/kubelet.service: No space left on device May 17 18:09:14 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.717617 16485 server.go:412] "Kubelet version" kubeletVersion="v1.26.3" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.717654 16485 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.717838 16485 server.go:836] "Client rotation is on, will bootstrap in background" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.718860 16485 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.719975 16485 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.720238 16485 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error adding watch for file /var/lib/minikube/certs/ca.crt: no space left on device" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.723805 16485 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.723919 16485 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.723949 16485 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.723965 16485 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.723974 16485 container_manager_linux.go:308] "Creating device plugin manager" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.724001 16485 state_mem.go:36] "Initialized new in-memory state store" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.733312 16485 kubelet.go:398] "Attempting to sync node with API server" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.733330 16485 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 18:09:14 
minikube kubelet[16485]: I0517 18:09:14.733346 16485 kubelet.go:297] "Adding apiserver pod source" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.733356 16485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.733439 16485 file_linux.go:61] "Unable to read config path" err="unable to create inotify for path \"/etc/kubernetes/manifests\": no space left on device" path="/etc/kubernetes/manifests" May 17 18:09:14 minikube kubelet[16485]: W0517 18:09:14.733951 16485 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:14 minikube kubelet[16485]: W0517 18:09:14.733972 16485 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.733987 16485 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.734010 16485 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.737158 16485 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="docker" version="23.0.2" apiVersion="v1" May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.737329 16485 probe.go:234] Error recursively adding watch: no space left on device May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.737493 16485 server.go:1186] "Started kubelet" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.737567 16485 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.737771 16485 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.176000a19ba36620", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.May, 17, 18, 9, 14, 737477152, time.Local), LastTimestamp:time.Date(2023, time.May, 17, 18, 9, 14, 737477152, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping) May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.738798 16485 server.go:451] "Adding debug handlers to kubelet server" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.739311 16485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.739461 16485 volume_manager.go:293] "Starting Kubelet Volume Manager" May 17 18:09:14 minikube kubelet[16485]: I0517 18:09:14.739518 16485 desired_state_of_world_populator.go:151] "Desired state populator starts to run" May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.739909 16485 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:14 minikube kubelet[16485]: W0517 18:09:14.740010 16485 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.740084 16485 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused May 17 18:09:14 minikube kubelet[16485]: E0517 18:09:14.745834 16485 kubelet.go:1449] "Failed to start cAdvisor" err="inotify_add_watch /sys/fs/cgroup: no space left on device" May 17 18:09:14 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
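The section ends with the same pattern: cAdvisor fails on the inotify watch, the kubelet exits, and systemd restarts it. After raising the limits sketched above, one quick way to confirm the kubelet is staying healthy is to re-run the probe kubeadm was polling; the healthz URL is the one quoted in the kubelet-check lines earlier, and `minikube ssh` is assumed only as a convenient way to run the commands inside the node:

  $ minikube ssh -- curl -sSL http://localhost:10248/healthz
  $ minikube ssh -- sudo systemctl is-active kubelet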