* 
* ==> Audit <==
* |------------|------|----------|-------|---------|-------------------------------|-------------------------------|
| Command    | Args | Profile  | User  | Version | Start Time                    | End Time                      |
|------------|------|----------|-------|---------|-------------------------------|-------------------------------|
| completion | bash | minikube | zsola | v1.25.1 | Tue, 01 Feb 2022 15:20:05 CET | Tue, 01 Feb 2022 15:20:05 CET |
| start      |      | minikube | zsola | v1.25.1 | Tue, 01 Feb 2022 15:23:41 CET | Tue, 01 Feb 2022 15:24:33 CET |
| ip         |      | minikube | zsola | v1.25.1 | Tue, 01 Feb 2022 15:26:14 CET | Tue, 01 Feb 2022 15:26:14 CET |
| ssh        |      | minikube | zsola | v1.25.1 | Tue, 01 Feb 2022 15:27:48 CET | Tue, 01 Feb 2022 15:28:33 CET |
| start      |      | minikube | zsola | v1.25.1 | Tue, 01 Feb 2022 15:31:34 CET | Tue, 01 Feb 2022 15:31:55 CET |
| start      |      | minikube | zsola | v1.25.1 | Wed, 02 Feb 2022 06:32:34 CET | Wed, 02 Feb 2022 06:32:54 CET |
|------------|------|----------|-------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/02/02 06:32:34
Running on machine: minikubeVM
Binary: Built with gc go1.17.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0202 06:32:34.556607 4450 out.go:297] Setting OutFile to fd 1 ...
I0202 06:32:34.557150 4450 out.go:349] isatty.IsTerminal(1) = true
I0202 06:32:34.557153 4450 out.go:310] Setting ErrFile to fd 2...
I0202 06:32:34.557157 4450 out.go:349] isatty.IsTerminal(2) = true
I0202 06:32:34.557522 4450 root.go:315] Updating PATH: /home/zsola/.minikube/bin
W0202 06:32:34.557895 4450 root.go:293] Error reading config file at /home/zsola/.minikube/config/config.json: open /home/zsola/.minikube/config/config.json: no such file or directory
I0202 06:32:34.558272 4450 out.go:304] Setting JSON to false
I0202 06:32:34.578985 4450 start.go:112] hostinfo: {"hostname":"minikubeVM","uptime":1875,"bootTime":1643778079,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"21.10","kernelVersion":"5.13.0-28-generic","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"11076645-c024-449f-9b72-6d386cffd6e3"}
I0202 06:32:34.579022 4450 start.go:122] virtualization:
I0202 06:32:34.581008 4450 out.go:176] 😄  minikube v1.25.1 on Ubuntu 21.10
I0202 06:32:34.581804 4450 notify.go:174] Checking for updates...
I0202 06:32:34.582509 4450 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.1
I0202 06:32:34.583329 4450 driver.go:344] Setting default libvirt URI to qemu:///system
I0202 06:32:34.716289 4450 docker.go:132] docker version: linux-20.10.7
I0202 06:32:34.716815 4450 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0202 06:32:34.747104 4450 info.go:263] docker info: {ID:4FZK:Y7WN:DD5O:7TLH:4TLR:G3ZT:ES6C:UN5I:GKR5:HVJX:LRTN:3VXT Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:false NGoroutines:33 SystemTime:2022-02-02 06:32:34.737680725 +0100 CET LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.13.0-28-generic OperatingSystem:Ubuntu 21.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:12105596928 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikubeVM Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0202 06:32:34.747159 4450 docker.go:237] overlay module found
I0202 06:32:34.748502 4450 out.go:176] ✨  Using the docker driver based on existing profile
I0202 06:32:34.748545 4450 start.go:280] selected driver: docker
I0202 06:32:34.748548 4450 start.go:795] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/zsola:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0202 06:32:34.748635 4450 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0202 06:32:34.748650 4450 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
I0202 06:32:34.748751 4450 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0202 06:32:34.778036 4450 info.go:263] docker info: {ID:4FZK:Y7WN:DD5O:7TLH:4TLR:G3ZT:ES6C:UN5I:GKR5:HVJX:LRTN:3VXT Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:false NGoroutines:33 SystemTime:2022-02-02 06:32:34.770179022 +0100 CET LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.13.0-28-generic OperatingSystem:Ubuntu 21.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:12105596928 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikubeVM Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0202 06:32:34.802020 4450 cni.go:93] Creating CNI manager for ""
I0202 06:32:34.802027 4450 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0202 06:32:34.802031 4450 start_flags.go:300] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/zsola:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0202 06:32:34.804771 4450 out.go:176] 👍  Starting control plane node minikube in cluster minikube
I0202 06:32:34.805171 4450 cache.go:120] Beginning downloading kic base image for docker with docker
I0202 06:32:34.806156 4450 out.go:176] 🚜  Pulling base image ...
I0202 06:32:34.806173 4450 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime docker
I0202 06:32:34.806201 4450 preload.go:148] Found local preload: /home/zsola/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-amd64.tar.lz4
I0202 06:32:34.806204 4450 cache.go:57] Caching tarball of preloaded images
I0202 06:32:34.806367 4450 preload.go:174] Found /home/zsola/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0202 06:32:34.806378 4450 cache.go:60] Finished verifying existence of preloaded tar for v1.23.1 on docker
I0202 06:32:34.806453 4450 profile.go:147] Saving config to /home/zsola/.minikube/profiles/minikube/config.json ...
I0202 06:32:34.806520 4450 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
I0202 06:32:34.835720 4450 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
I0202 06:32:34.835744 4450 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
I0202 06:32:34.835749 4450 cache.go:208] Successfully downloaded all kic artifacts
I0202 06:32:34.835767 4450 start.go:313] acquiring machines lock for minikube: {Name:mk5cd2b89d84d94ddf1bfb38726d60c9d1f00a6c Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0202 06:32:34.835876 4450 start.go:317] acquired machines lock for "minikube" in 98.088µs
I0202 06:32:34.835884 4450 start.go:93] Skipping create...Using existing machine configuration
I0202 06:32:34.835887 4450 fix.go:55] fixHost starting:
I0202 06:32:34.836017 4450 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0202 06:32:34.876964 4450 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=
W0202 06:32:34.876976 4450 fix.go:134] unexpected machine state, will restart:
I0202 06:32:34.878313 4450 out.go:176] 🔄  Restarting existing docker container for "minikube" ...
I0202 06:32:34.878373 4450 cli_runner.go:133] Run: docker start minikube
I0202 06:32:35.304057 4450 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0202 06:32:35.330875 4450 kic.go:420] container "minikube" state is running.
I0202 06:32:35.331179 4450 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0202 06:32:35.358110 4450 profile.go:147] Saving config to /home/zsola/.minikube/profiles/minikube/config.json ...
I0202 06:32:35.358228 4450 machine.go:88] provisioning docker machine ...
I0202 06:32:35.358241 4450 ubuntu.go:169] provisioning hostname "minikube"
I0202 06:32:35.358265 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:35.387409 4450 main.go:130] libmachine: Using SSH client type: native
I0202 06:32:35.387684 4450 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0a80] 0x7a3b60 [] 0s} 127.0.0.1 49157 }
I0202 06:32:35.387691 4450 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0202 06:32:35.388685 4450 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42592->127.0.0.1:49157: read: connection reset by peer
I0202 06:32:38.558966 4450 main.go:130] libmachine: SSH cmd err, output: : minikube
I0202 06:32:38.558998 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:38.582472 4450 main.go:130] libmachine: Using SSH client type: native
I0202 06:32:38.582555 4450 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0a80] 0x7a3b60 [] 0s} 127.0.0.1 49157 }
I0202 06:32:38.582563 4450 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
	else
		echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
	fi
fi
I0202 06:32:38.713014 4450 main.go:130] libmachine: SSH cmd err, output: :
I0202 06:32:38.713035 4450 ubuntu.go:175] set auth options {CertDir:/home/zsola/.minikube CaCertPath:/home/zsola/.minikube/certs/ca.pem CaPrivateKeyPath:/home/zsola/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/zsola/.minikube/machines/server.pem ServerKeyPath:/home/zsola/.minikube/machines/server-key.pem ClientKeyPath:/home/zsola/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/zsola/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/zsola/.minikube}
I0202 06:32:38.713051 4450 ubuntu.go:177] setting up certificates
I0202 06:32:38.713081 4450 provision.go:83] configureAuth start
I0202 06:32:38.713135 4450 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0202 06:32:38.737724 4450 provision.go:138] copyHostCerts
I0202 06:32:38.738556 4450 exec_runner.go:144] found /home/zsola/.minikube/ca.pem, removing ...
I0202 06:32:38.738560 4450 exec_runner.go:207] rm: /home/zsola/.minikube/ca.pem
I0202 06:32:38.738592 4450 exec_runner.go:151] cp: /home/zsola/.minikube/certs/ca.pem --> /home/zsola/.minikube/ca.pem (1074 bytes)
I0202 06:32:38.738761 4450 exec_runner.go:144] found /home/zsola/.minikube/cert.pem, removing ...
I0202 06:32:38.738763 4450 exec_runner.go:207] rm: /home/zsola/.minikube/cert.pem
I0202 06:32:38.738774 4450 exec_runner.go:151] cp: /home/zsola/.minikube/certs/cert.pem --> /home/zsola/.minikube/cert.pem (1119 bytes)
I0202 06:32:38.738920 4450 exec_runner.go:144] found /home/zsola/.minikube/key.pem, removing ...
I0202 06:32:38.738922 4450 exec_runner.go:207] rm: /home/zsola/.minikube/key.pem
I0202 06:32:38.738932 4450 exec_runner.go:151] cp: /home/zsola/.minikube/certs/key.pem --> /home/zsola/.minikube/key.pem (1679 bytes)
I0202 06:32:38.739096 4450 provision.go:112] generating server cert: /home/zsola/.minikube/machines/server.pem ca-key=/home/zsola/.minikube/certs/ca.pem private-key=/home/zsola/.minikube/certs/ca-key.pem org=zsola.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0202 06:32:38.860187 4450 provision.go:172] copyRemoteCerts
I0202 06:32:38.860222 4450 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0202 06:32:38.860239 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:38.882834 4450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/zsola/.minikube/machines/minikube/id_rsa Username:docker}
I0202 06:32:38.971900 4450 ssh_runner.go:362] scp /home/zsola/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0202 06:32:38.986611 4450 ssh_runner.go:362] scp /home/zsola/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0202 06:32:38.999973 4450 ssh_runner.go:362] scp /home/zsola/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0202 06:32:39.010740 4450 provision.go:86] duration metric: configureAuth took 297.652245ms
I0202 06:32:39.010750 4450 ubuntu.go:193] setting minikube options for container-runtime
I0202 06:32:39.010858 4450 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.1
I0202 06:32:39.010880 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:39.033757 4450 main.go:130] libmachine: Using SSH client type: native
I0202 06:32:39.033841 4450 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0a80] 0x7a3b60 [] 0s} 127.0.0.1 49157 }
I0202 06:32:39.033845 4450 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0202 06:32:39.166127 4450 main.go:130] libmachine: SSH cmd err, output: : overlay
I0202 06:32:39.166188 4450 ubuntu.go:71] root file system type: overlay
I0202 06:32:39.166420 4450 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0202 06:32:39.166468 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:39.191539 4450 main.go:130] libmachine: Using SSH client type: native
I0202 06:32:39.191624 4450 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0a80] 0x7a3b60 [] 0s} 127.0.0.1 49157 }
I0202 06:32:39.191663 4450 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0202 06:32:39.334843 4450 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0202 06:32:39.334878 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:39.357737 4450 main.go:130] libmachine: Using SSH client type: native
I0202 06:32:39.357856 4450 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0a80] 0x7a3b60 [] 0s} 127.0.0.1 49157 }
I0202 06:32:39.357864 4450 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0202 06:32:39.487086 4450 main.go:130] libmachine: SSH cmd err, output: :
I0202 06:32:39.487101 4450 machine.go:91] provisioned docker machine in 4.128866363s
I0202 06:32:39.487107 4450 start.go:267] post-start starting for "minikube" (driver="docker")
I0202 06:32:39.487112 4450 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0202 06:32:39.487163 4450 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0202 06:32:39.487186 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:39.511385 4450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/zsola/.minikube/machines/minikube/id_rsa Username:docker}
I0202 06:32:39.605964 4450 ssh_runner.go:195] Run: cat /etc/os-release
I0202 06:32:39.608994 4450 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0202 06:32:39.609003 4450 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0202 06:32:39.609009 4450 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0202 06:32:39.609012 4450 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0202 06:32:39.609018 4450 filesync.go:126] Scanning /home/zsola/.minikube/addons for local assets ...
I0202 06:32:39.609422 4450 filesync.go:126] Scanning /home/zsola/.minikube/files for local assets ...
I0202 06:32:39.609799 4450 start.go:270] post-start completed in 122.685619ms
I0202 06:32:39.609820 4450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0202 06:32:39.609842 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:39.633011 4450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/zsola/.minikube/machines/minikube/id_rsa Username:docker}
I0202 06:32:39.728346 4450 fix.go:57] fixHost completed within 4.892454874s
I0202 06:32:39.728359 4450 start.go:80] releasing machines lock for "minikube", held for 4.89247751s
I0202 06:32:39.728418 4450 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0202 06:32:39.750964 4450 ssh_runner.go:195] Run: systemctl --version
I0202 06:32:39.751006 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:39.751026 4450 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0202 06:32:39.751052 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0202 06:32:39.777994 4450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/zsola/.minikube/machines/minikube/id_rsa Username:docker}
I0202 06:32:39.783639 4450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/zsola/.minikube/machines/minikube/id_rsa Username:docker}
I0202 06:32:39.873181 4450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0202 06:32:40.194252 4450 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0202 06:32:40.205587 4450 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0202 06:32:40.205642 4450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0202 06:32:40.213615 4450 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0202 06:32:40.221181 4450 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0202 06:32:40.291800 4450 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0202 06:32:40.362711 4450 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0202 06:32:40.369590 4450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0202 06:32:40.460600 4450 ssh_runner.go:195] Run: sudo systemctl start docker
I0202 06:32:40.467043 4450 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0202 06:32:40.591266 4450 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0202 06:32:40.623340 4450 out.go:203] 🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
I0202 06:32:40.623402 4450 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0202 06:32:40.646768 4450 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0202 06:32:40.649828 4450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0202 06:32:40.660419 4450 out.go:176] โ–ช kubelet.housekeeping-interval=5m I0202 06:32:40.660610 4450 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime docker I0202 06:32:40.660652 4450 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0202 06:32:40.686984 4450 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.1 k8s.gcr.io/kube-proxy:v1.23.1 k8s.gcr.io/kube-controller-manager:v1.23.1 k8s.gcr.io/kube-scheduler:v1.23.1 asboth.sytes.net:5000/osysite:latest k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0202 06:32:40.686992 4450 docker.go:537] Images already preloaded, skipping extraction I0202 06:32:40.687016 4450 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0202 06:32:40.713971 4450 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.1 k8s.gcr.io/kube-proxy:v1.23.1 k8s.gcr.io/kube-controller-manager:v1.23.1 k8s.gcr.io/kube-scheduler:v1.23.1 asboth.sytes.net:5000/osysite:latest k8s.gcr.io/etcd:3.5.1-0 
k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout --
I0202 06:32:40.713979 4450 cache_images.go:84] Images are preloaded, skipping loading
I0202 06:32:40.714003 4450 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0202 06:32:40.924968 4450 cni.go:93] Creating CNI manager for ""
I0202 06:32:40.924975 4450 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0202 06:32:40.924996 4450 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0202 06:32:40.925003 4450 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0202 06:32:40.925068 4450 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0202 06:32:40.925108 4450 kubeadm.go:791] kubelet
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml
--container-runtime=docker --hostname-override=minikube --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0202 06:32:40.925132 4450 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1 I0202 06:32:40.930567 4450 binaries.go:44] Found k8s binaries, skipping transfer I0202 06:32:40.930589 4450 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0202 06:32:40.934685 4450 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes) I0202 06:32:40.941887 4450 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0202 06:32:40.949157 4450 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes) I0202 06:32:40.957552 4450 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0202 06:32:40.959518 4450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0202 06:32:40.965155 4450 certs.go:54] Setting up /home/zsola/.minikube/profiles/minikube for IP: 192.168.49.2 I0202 06:32:40.965209 4450 certs.go:182] skipping minikubeCA CA generation: /home/zsola/.minikube/ca.key I0202 06:32:40.965386 4450 certs.go:182] skipping proxyClientCA CA generation: /home/zsola/.minikube/proxy-client-ca.key I0202 06:32:40.965433 
4450 certs.go:298] skipping minikube-user signed cert generation: /home/zsola/.minikube/profiles/minikube/client.key I0202 06:32:40.965742 4450 certs.go:298] skipping minikube signed cert generation: /home/zsola/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0202 06:32:40.966003 4450 certs.go:298] skipping aggregator signed cert generation: /home/zsola/.minikube/profiles/minikube/proxy-client.key I0202 06:32:40.966073 4450 certs.go:388] found cert: /home/zsola/.minikube/certs/home/zsola/.minikube/certs/ca-key.pem (1679 bytes) I0202 06:32:40.966087 4450 certs.go:388] found cert: /home/zsola/.minikube/certs/home/zsola/.minikube/certs/ca.pem (1074 bytes) I0202 06:32:40.966096 4450 certs.go:388] found cert: /home/zsola/.minikube/certs/home/zsola/.minikube/certs/cert.pem (1119 bytes) I0202 06:32:40.966105 4450 certs.go:388] found cert: /home/zsola/.minikube/certs/home/zsola/.minikube/certs/key.pem (1679 bytes) I0202 06:32:40.966625 4450 ssh_runner.go:362] scp /home/zsola/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0202 06:32:40.978255 4450 ssh_runner.go:362] scp /home/zsola/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0202 06:32:40.988953 4450 ssh_runner.go:362] scp /home/zsola/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0202 06:32:41.000240 4450 ssh_runner.go:362] scp /home/zsola/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0202 06:32:41.011516 4450 ssh_runner.go:362] scp /home/zsola/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0202 06:32:41.023382 4450 ssh_runner.go:362] scp /home/zsola/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0202 06:32:41.035251 4450 ssh_runner.go:362] scp /home/zsola/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0202 
06:32:41.046705 4450 ssh_runner.go:362] scp /home/zsola/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0202 06:32:41.058246 4450 ssh_runner.go:362] scp /home/zsola/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0202 06:32:41.069716 4450 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0202 06:32:41.078537 4450 ssh_runner.go:195] Run: openssl version I0202 06:32:41.084632 4450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0202 06:32:41.089802 4450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0202 06:32:41.091979 4450 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 1 14:24 /usr/share/ca-certificates/minikubeCA.pem I0202 06:32:41.092008 4450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0202 06:32:41.095012 4450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0202 06:32:41.099265 4450 kubeadm.go:388] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root 
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/zsola:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:} I0202 06:32:41.099334 4450 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0202 06:32:41.119871 4450 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0202 06:32:41.124740 4450 ssh_runner.go:195] Run: sudo test -d /data/minikube I0202 06:32:41.129218 
4450 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0202 06:32:41.129685 4450 kubeconfig.go:92] found "minikube" server: "https://192.168.49.2:8443" I0202 06:32:41.135248 4450 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0202 06:32:41.139543 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:41.139559 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:41.148171 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:41.348979 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:41.349057 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:41.365210 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:41.548489 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:41.548552 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:41.561421 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:41.748980 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:41.749042 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:41.764181 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:41.948568 4450 api_server.go:165] Checking apiserver status ... 
I0202 06:32:41.948629 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:41.964461 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:42.148995 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:42.149051 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:42.160707 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:42.349501 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:42.349570 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:42.364823 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:42.548490 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:42.548566 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:42.564759 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:42.749208 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:42.749268 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:42.758578 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:42.949334 4450 api_server.go:165] Checking apiserver status ... 
I0202 06:32:42.949397 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:42.965735 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:43.149448 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:43.149513 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:43.163596 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:43.349191 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:43.349253 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:43.366230 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:43.548568 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:43.548639 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:43.563169 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:43.748613 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:43.748673 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:43.765668 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:43.949262 4450 api_server.go:165] Checking apiserver status ... 
I0202 06:32:43.949326 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:43.961975 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:44.148518 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:44.148579 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:44.167073 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0202 06:32:44.167081 4450 api_server.go:165] Checking apiserver status ... I0202 06:32:44.167108 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0202 06:32:44.175453 4450 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: W0202 06:32:44.175462 4450 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition I0202 06:32:44.175474 4450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force" I0202 06:32:45.333069 4450 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.157586001s) I0202 06:32:45.333099 4450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0202 06:32:45.338972 4450 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0202 06:32:45.343228 4450 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver I0202 06:32:45.343245 4450 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0202 
06:32:45.347178 4450 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0202 06:32:45.347192 4450 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0202 06:32:45.859253 4450 out.go:203] โ–ช Generating certificates and keys ... I0202 06:32:46.645096 4450 out.go:203] โ–ช Booting up control plane ... I0202 06:32:52.671199 4450 out.go:203] โ–ช Configuring RBAC rules ... 
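The long run of `Checking apiserver status ...` / `stopped: unable to get apiserver pid` pairs above is a fixed-interval poll (`sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms) against an overall deadline; when it expires, minikube logs "needs reconfigure: apiserver error: timed out waiting for the condition" and falls back to `kubeadm reset` followed by a fresh `kubeadm init`. A minimal sketch of that wait pattern (function and names are illustrative, not minikube's internals):

```python
import time

def wait_for(check, timeout=3.0, interval=0.2):
    """Poll check() at a fixed interval until it succeeds or the deadline passes.

    Mirrors the retry pattern visible in the log above; returns False on
    timeout, which is where a caller would fall back to reconfiguring.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Stub standing in for the pgrep call: fails twice, then succeeds.
attempts = []
def apiserver_running():
    attempts.append(1)
    return len(attempts) >= 3

print(wait_for(apiserver_running, timeout=2.0, interval=0.01))  # -> True
```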
I0202 06:32:53.091196 4450 cni.go:93] Creating CNI manager for "" I0202 06:32:53.091204 4450 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0202 06:32:53.091217 4450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0202 06:32:53.091279 4450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0202 06:32:53.091308 4450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=3e64b11ed75e56e4898ea85f96b2e4af0301f43d minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_02_02T06_32_53_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0202 06:32:53.376873 4450 ops.go:34] apiserver oom_adj: -16 I0202 06:32:53.376881 4450 kubeadm.go:867] duration metric: took 285.621714ms to wait for elevateKubeSystemPrivileges. I0202 06:32:53.376889 4450 kubeadm.go:390] StartCluster complete in 12.277628505s I0202 06:32:53.376902 4450 settings.go:142] acquiring lock: {Name:mkdc3f40cd5764a639097c89418fdba129dacfc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0202 06:32:53.377004 4450 settings.go:150] Updating kubeconfig: /home/zsola/.kube/config I0202 06:32:53.377586 4450 lock.go:35] WriteFile acquiring /home/zsola/.kube/config: {Name:mk3c96644ed478fca2e0cec11cf8ff681873cf1c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0202 06:32:53.892031 4450 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1 I0202 06:32:53.892054 4450 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true} I0202 06:32:53.893685 4450 out.go:176] ๐Ÿ”Ž Verifying Kubernetes components... 
I0202 06:32:53.893750 4450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0202 06:32:53.892143 4450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0202 06:32:53.892145 4450 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[] I0202 06:32:53.893789 4450 addons.go:65] Setting storage-provisioner=true in profile "minikube" I0202 06:32:53.893795 4450 addons.go:153] Setting addon storage-provisioner=true in "minikube" W0202 06:32:53.893797 4450 addons.go:165] addon storage-provisioner should already be in state true I0202 06:32:53.892264 4450 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.1 I0202 06:32:53.893810 4450 host.go:66] Checking if "minikube" exists ... I0202 06:32:53.893819 4450 addons.go:65] Setting default-storageclass=true in profile "minikube" I0202 06:32:53.893825 4450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0202 06:32:53.893951 4450 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} I0202 06:32:53.894007 4450 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} I0202 06:32:53.901854 4450 api_server.go:51] waiting for apiserver process to appear ... 
I0202 06:32:53.901884 4450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0202 06:32:53.944713 4450 addons.go:153] Setting addon default-storageclass=true in "minikube" W0202 06:32:53.944720 4450 addons.go:165] addon default-storageclass should already be in state true I0202 06:32:53.944734 4450 host.go:66] Checking if "minikube" exists ... I0202 06:32:53.944932 4450 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} I0202 06:32:53.949031 4450 out.go:176] โ–ช Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0202 06:32:53.949143 4450 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0202 06:32:53.949150 4450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0202 06:32:53.949202 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0202 06:32:53.965997 4450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0202 06:32:53.966055 4450 api_server.go:71] duration metric: took 73.99192ms to wait for apiserver process to appear ... I0202 06:32:53.966060 4450 api_server.go:87] waiting for apiserver healthz status ... I0202 06:32:53.966065 4450 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0202 06:32:53.985825 4450 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0202 06:32:53.987317 4450 api_server.go:140] control plane version: v1.23.1 I0202 06:32:53.987326 4450 api_server.go:130] duration metric: took 21.26249ms to wait for apiserver health ... 
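The `sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {...}'` pipeline above rewrites the CoreDNS ConfigMap to add a `hosts` stanza (mapping `host.minikube.internal` to the host gateway IP) just before the `forward . /etc/resolv.conf` plugin. A sketch of the same edit in Python, on a trimmed illustrative Corefile (not the full ConfigMap):

```python
# Trimmed, illustrative Corefile; the real one lives in the coredns ConfigMap.
corefile = """.:53 {
    errors
    health
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
}
"""

def inject_host_record(corefile_text, ip, name="host.minikube.internal"):
    # Insert a hosts stanza immediately before the forward plugin,
    # like the sed `/i` command in the log above.
    out = []
    for line in corefile_text.splitlines():
        if line.lstrip().startswith("forward . /etc/resolv.conf"):
            out += [
                "    hosts {",
                f"       {ip} {name}",
                "       fallthrough",
                "    }",
            ]
        out.append(line)
    return "\n".join(out) + "\n"

patched = inject_host_record(corefile, "192.168.49.1")
print(patched)
```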
I0202 06:32:53.987361 4450 system_pods.go:43] waiting for kube-system pods to appear ... I0202 06:32:54.000583 4450 system_pods.go:59] 4 kube-system pods found I0202 06:32:54.000595 4450 system_pods.go:61] "etcd-minikube" [b32ec5a4-b2c7-4219-ad25-020508fa4e0e] Pending I0202 06:32:54.000599 4450 system_pods.go:61] "kube-apiserver-minikube" [945e3834-06cd-4bc6-adcc-0acd9a98681b] Pending I0202 06:32:54.000602 4450 system_pods.go:61] "kube-controller-manager-minikube" [1789bf03-6c7e-469b-bb68-73bfe6579876] Pending I0202 06:32:54.000605 4450 system_pods.go:61] "kube-scheduler-minikube" [d7901566-ec46-4d32-96cf-3c908bebaa04] Pending I0202 06:32:54.000608 4450 system_pods.go:74] duration metric: took 13.243922ms to wait for pod list to return data ... I0202 06:32:54.000614 4450 kubeadm.go:542] duration metric: took 108.550564ms to wait for : map[apiserver:true system_pods:true] ... I0202 06:32:54.000623 4450 node_conditions.go:102] verifying NodePressure condition ... I0202 06:32:54.002027 4450 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0202 06:32:54.002035 4450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0202 06:32:54.002066 4450 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0202 06:32:54.007061 4450 node_conditions.go:122] node storage ephemeral capacity is 308003680Ki I0202 06:32:54.007085 4450 node_conditions.go:123] node cpu capacity is 4 I0202 06:32:54.007092 4450 node_conditions.go:105] duration metric: took 6.467067ms to run NodePressure ... I0202 06:32:54.007099 4450 start.go:211] waiting for startup goroutines ... 
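The "Last Start" entries above follow klog's text format, `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. When sifting through a log like this, a small regex is enough to split out the fields; a sketch sufficient for these excerpts (field names are my own, not a complete klog grammar):

```python
import re

# [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<file>[\w.]+):(?P<line>\d+)\]\s+"
    r"(?P<msg>.*)"
)

def parse_klog(line):
    """Return the fields of a klog-format line as a dict, or None."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_klog(
    "I0202 06:32:53.987361 4450 system_pods.go:43] "
    "waiting for kube-system pods to appear ..."
)
print(entry["level"], entry["file"], entry["msg"])
```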
I0202 06:32:54.009569 4450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/zsola/.minikube/machines/minikube/id_rsa Username:docker}
I0202 06:32:54.040012 4450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/zsola/.minikube/machines/minikube/id_rsa Username:docker}
I0202 06:32:54.123996 4450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0202 06:32:54.141226 4450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0202 06:32:54.488997 4450 start.go:773] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0202 06:32:54.522211 4450 out.go:176] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0202 06:32:54.522267 4450 addons.go:417] enableAddons completed in 630.129072ms
I0202 06:32:54.639419 4450 start.go:493] kubectl: 1.23.3, cluster: 1.23.1 (minor skew: 0)
I0202 06:32:54.641577 4450 out.go:176] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Wed 2022-02-02 05:32:35 UTC, end at Wed 2022-02-02 05:33:31 UTC. --
Feb 02 05:32:35 minikube systemd[1]: Starting Docker Application Container Engine...
Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.834839238Z" level=info msg="Starting up" Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.840615974Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.840644830Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.840659779Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.840665494Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.843958868Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.843987625Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.843998971Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.844006651Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.853774568Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 02 05:32:35 minikube dockerd[132]: time="2022-02-02T05:32:35.891071962Z" level=info msg="Loading containers: start." 
Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.095236820Z" level=info msg="Removing stale sandbox 0f107f74288a77713a7a98eecf6c05822c237c818ae57f3f8fc7e822b02f482d (beeaf0a8d5bfd7ab4b05038b6cedca74141700ca5469a531fbe7ebe870d27632)" Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.097974720Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b891889528a0bca942aeb0a2f7c154cee6776b940a766da8bf0f890d37fa12f2 4bed8b7e3983bacaf593ba74f9129d15e62752be9ea10814e5141320a826dcca], retrying...." Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.157773062Z" level=info msg="Removing stale sandbox 5631f2458952c172cdd993f57c53725c178cf06fbb1245c2faa845f19101a233 (bad41a6a3a1ce08dd8f538236d47e9f385d11dfddd38e9e7d29d1adba8ea0073)" Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.159188017Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b891889528a0bca942aeb0a2f7c154cee6776b940a766da8bf0f890d37fa12f2 8e3277966c84fbfdcdf2c5618389bbe33c2d744de3c0a93b04ae8ab256c3b49f], retrying...." Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.221740547Z" level=info msg="Removing stale sandbox 7e74da258aa0582bef7e7930e468bbc3e966134df0b9aa8ab83fc6b12267292b (a646600307d7ced11ffefd0b43d06990be4578150f164383f9d983bdf5e098d0)" Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.223550208Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b891889528a0bca942aeb0a2f7c154cee6776b940a766da8bf0f890d37fa12f2 82c7db4c4dec589bb8359ed316056fa0c27b795532787a2ebc9b3ea3f8f6474b], retrying...." 
Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.286514294Z" level=info msg="Removing stale sandbox ae0c0446117b64728fd7377f2917962ca10ceb1cfa6bfd0178bf1eb0917a8f1f (4078a0885f03fad5a824eb25450f35bb6660dbb40d83b987f0022cfc9790ffa6)" Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.288000448Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b891889528a0bca942aeb0a2f7c154cee6776b940a766da8bf0f890d37fa12f2 5c4120de19921e87c8a832edef64a6555e5c5c1fe905d777b5761eae3600d7d4], retrying...." Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.310114252Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.360034762Z" level=info msg="Loading containers: done." Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.391763302Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.392311688Z" level=info msg="Daemon has completed initialization" Feb 02 05:32:36 minikube systemd[1]: Started Docker Application Container Engine. 
Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.407755311Z" level=info msg="API listen on [::]:2376"
Feb 02 05:32:36 minikube dockerd[132]: time="2022-02-02T05:32:36.410976802Z" level=info msg="API listen on /var/run/docker.sock"
* 
* ==> container status <==
* 
CONTAINER       IMAGE           CREATED          STATE    NAME                      ATTEMPT  POD ID
2ffd7bf1a973c   6e38f40d628db   24 seconds ago   Running  storage-provisioner       0        ab4c1e647a627
90fbfb345a216   b46c42588d511   25 seconds ago   Running  kube-proxy                0        86627f696f7ed
1ea55ff146a18   a4ca41631cc7a   25 seconds ago   Running  coredns                   0        79854bb399f80
8aa532da72952   b6d7abedde399   44 seconds ago   Running  kube-apiserver            2        15e350b7f8426
aa63184eb9c33   f51846a4fd288   44 seconds ago   Running  kube-controller-manager   2        808c706d43d03
ff433b986aaed   25f8c7f3da61c   44 seconds ago   Running  etcd                      2        b5b114fd84758
a42448673b540   71d575efe6283   44 seconds ago   Running  kube-scheduler            2        c0239ccbcf8df
* 
* ==> coredns [1ea55ff146a1] <==
* 
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
* 
* ==> describe nodes <==
* 
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=3e64b11ed75e56e4898ea85f96b2e4af0301f43d
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2022_02_02T06_32_53_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 02 Feb 2022 05:32:50 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Wed, 02 Feb 2022 05:33:23 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 02 Feb 2022 05:32:53 +0000   Wed, 02 Feb 2022 05:32:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 02 Feb 2022 05:32:53 +0000   Wed, 02 Feb 2022 05:32:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 02 Feb 2022 05:32:53 +0000   Wed, 02 Feb 2022 05:32:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 02 Feb 2022 05:32:53 +0000   Wed, 02 Feb 2022 05:32:53 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  308003680Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             14768944Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  308003680Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             14768944Ki
  pods:               110
System Info:
  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
  System UUID:                e7cdfd95-181f-4984-9d73-57a447ce0d0d
  Boot ID:                    e1f27cc0-d92b-4d81-8569-dd0d85566940
  Kernel Version:             5.13.0-28-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.1
  Kube-Proxy Version:         v1.23.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace     Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                               ------------  ----------  ---------------  -------------  ---
  kube-system   coredns-64897985d-wdrpr            100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     25s
  kube-system   etcd-minikube                      100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         38s
  kube-system   kube-apiserver-minikube            250m (6%)     0 (0%)      0 (0%)           0 (0%)         38s
  kube-system   kube-controller-manager-minikube   200m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
  kube-system   kube-proxy-vlnzp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
  kube-system   kube-scheduler-minikube            100m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
  kube-system   storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (18%)  0 (0%)
  memory             170Mi (1%)  170Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From         Message
  ----    ------                   ----  ----         -------
  Normal  Starting                 24s   kube-proxy   
  Normal  NodeHasSufficientMemory  38s   kubelet      Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    38s   kubelet      Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     38s   kubelet      Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  38s   kubelet      Updated Node Allocatable limit across pods
  Normal  NodeReady                38s   kubelet      Node minikube status is now: NodeReady
  Normal  Starting                 38s   kubelet      Starting kubelet.
* 
* ==> dmesg <==
* 
[Feb 2 05:01] PCI: System does not support PCI
[ +4.416371] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
[ +0.657414] kauditd_printk_skb: 33 callbacks suppressed
* 
* ==> etcd [ff433b986aae] <==
* 
{"level":"info","ts":"2022-02-02T05:32:48.041Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2022-02-02T05:32:48.042Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-02-02T05:32:48.042Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-02-02T05:32:48.043Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2022-02-02T05:32:48.043Z","caller":"embed/etcd.go:307","msg":"starting an etcd
server","etcd-version":"3.5.1","git-sha":"e8732fb5f","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2022-02-02T05:32:48.047Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.42547ms"} {"level":"info","ts":"2022-02-02T05:32:48.058Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"} {"level":"info","ts":"2022-02-02T05:32:48.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"} {"level":"info","ts":"2022-02-02T05:32:48.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"} 
{"level":"info","ts":"2022-02-02T05:32:48.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2022-02-02T05:32:48.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"} {"level":"info","ts":"2022-02-02T05:32:48.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"warn","ts":"2022-02-02T05:32:48.060Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2022-02-02T05:32:48.062Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2022-02-02T05:32:48.064Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2022-02-02T05:32:48.065Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"} {"level":"info","ts":"2022-02-02T05:32:48.066Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2022-02-02T05:32:48.067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"info","ts":"2022-02-02T05:32:48.067Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]} 
{"level":"info","ts":"2022-02-02T05:32:48.068Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2022-02-02T05:32:48.068Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2022-02-02T05:32:48.068Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"} {"level":"info","ts":"2022-02-02T05:32:48.068Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2022-02-02T05:32:48.068Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2022-02-02T05:32:48.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"} {"level":"info","ts":"2022-02-02T05:32:48.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"} {"level":"info","ts":"2022-02-02T05:32:48.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"} {"level":"info","ts":"2022-02-02T05:32:48.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp 
from aec36adc501070cc at term 2"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-02T05:32:48.459Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-02T05:32:48.460Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"} {"level":"info","ts":"2022-02-02T05:32:48.460Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-02T05:32:48.460Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-02T05:32:48.464Z","caller":"embed/serve.go:188","msg":"serving client traffic 
securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-02-02T05:32:48.464Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
* 
* ==> kernel <==
* 
05:33:31 up 32 min, 0 users, load average: 0.61, 0.34, 0.27
Linux minikube 5.13.0-28-generic #31-Ubuntu SMP Thu Jan 13 17:41:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
* 
* ==> kube-apiserver [8aa532da7295] <==
* 
W0202 05:32:49.535972 1 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources.
W0202 05:32:49.536010 1 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources.
W0202 05:32:49.537564 1 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0202 05:32:49.540318 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0202 05:32:49.540338 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0202 05:32:49.557129 1 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0202 05:32:50.195403 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0202 05:32:50.195506 1 secure_serving.go:266] Serving securely on [::]:8443 I0202 05:32:50.195534 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key" I0202 05:32:50.199251 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0202 05:32:50.199545 1 apf_controller.go:317] Starting API Priority and Fairness config controller I0202 05:32:50.199573 1 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0202 05:32:50.208826 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0202 05:32:50.209033 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0202 05:32:50.209041 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0202 05:32:50.209078 1 controller.go:83] Starting OpenAPI AggregationController I0202 05:32:50.209117 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0202 05:32:50.233234 1 autoregister_controller.go:141] Starting autoregister controller I0202 05:32:50.233265 1 cache.go:32] Waiting for caches to sync for autoregister controller I0202 05:32:50.234199 1 available_controller.go:491] Starting AvailableConditionController I0202 05:32:50.234216 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0202 05:32:50.236575 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0202 05:32:50.236633 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I0202 05:32:50.240151 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller 
controller I0202 05:32:50.240206 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I0202 05:32:50.242082 1 controller.go:85] Starting OpenAPI controller I0202 05:32:50.242111 1 naming_controller.go:291] Starting NamingConditionController I0202 05:32:50.242127 1 establishing_controller.go:76] Starting EstablishingController I0202 05:32:50.243745 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0202 05:32:50.243796 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0202 05:32:50.243813 1 crd_finalizer.go:266] Starting CRDFinalizer I0202 05:32:50.261419 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0202 05:32:50.263116 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0202 05:32:50.279183 1 controller.go:611] quota admission added evaluator for: namespaces I0202 05:32:50.299591 1 apf_controller.go:322] Running API Priority and Fairness config worker I0202 05:32:50.313222 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0202 05:32:50.320972 1 shared_informer.go:247] Caches are synced for node_authorizer I0202 05:32:50.333795 1 cache.go:39] Caches are synced for autoregister controller I0202 05:32:50.334242 1 cache.go:39] Caches are synced for AvailableConditionController controller I0202 05:32:50.336760 1 shared_informer.go:247] Caches are synced for crd-autoregister I0202 05:32:50.340309 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0202 05:32:51.195767 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0202 05:32:51.199337 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). 
I0202 05:32:51.237669 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0202 05:32:51.239762 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0202 05:32:51.239802 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. I0202 05:32:51.558902 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0202 05:32:51.580914 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0202 05:32:51.689002 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0202 05:32:51.693069 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0202 05:32:51.694121 1 controller.go:611] quota admission added evaluator for: endpoints I0202 05:32:51.702013 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0202 05:32:52.359791 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0202 05:32:52.881872 1 controller.go:611] quota admission added evaluator for: deployments.apps I0202 05:32:52.888987 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0202 05:32:52.897900 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0202 05:32:53.074720 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0202 05:33:05.864190 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0202 05:33:05.965101 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0202 05:33:07.191915 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [aa63184eb9c3] <== * I0202 05:33:04.962589 1 controllermanager.go:605] Started "pvc-protection" I0202 05:33:04.962754 1 
pvc_protection_controller.go:103] "Starting PVC protection controller" I0202 05:33:04.962775 1 shared_informer.go:240] Waiting for caches to sync for PVC protection I0202 05:33:05.114297 1 controllermanager.go:605] Started "root-ca-cert-publisher" I0202 05:33:05.116592 1 publisher.go:107] Starting root CA certificate configmap publisher I0202 05:33:05.116620 1 shared_informer.go:240] Waiting for caches to sync for crt configmap I0202 05:33:05.117968 1 shared_informer.go:240] Waiting for caches to sync for resource quota W0202 05:33:05.132653 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0202 05:33:05.134902 1 shared_informer.go:247] Caches are synced for service account I0202 05:33:05.135953 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0202 05:33:05.146213 1 shared_informer.go:247] Caches are synced for HPA I0202 05:33:05.150475 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0202 05:33:05.158269 1 shared_informer.go:247] Caches are synced for daemon sets I0202 05:33:05.158294 1 shared_informer.go:247] Caches are synced for TTL I0202 05:33:05.158308 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0202 05:33:05.160607 1 shared_informer.go:247] Caches are synced for GC I0202 05:33:05.162859 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0202 05:33:05.162915 1 shared_informer.go:247] Caches are synced for PVC protection I0202 05:33:05.162942 1 shared_informer.go:247] Caches are synced for deployment I0202 05:33:05.162948 1 shared_informer.go:247] Caches are synced for endpoint_slice I0202 05:33:05.176926 1 shared_informer.go:247] Caches are synced for namespace I0202 05:33:05.192713 1 shared_informer.go:247] Caches are synced for ReplicaSet I0202 05:33:05.202928 1 shared_informer.go:247] Caches are synced for PV protection 
I0202 05:33:05.206278 1 shared_informer.go:247] Caches are synced for ephemeral I0202 05:33:05.208622 1 shared_informer.go:247] Caches are synced for stateful set I0202 05:33:05.208766 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0202 05:33:05.210042 1 shared_informer.go:247] Caches are synced for endpoint I0202 05:33:05.211734 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0202 05:33:05.211774 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0202 05:33:05.211797 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0202 05:33:05.211837 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0202 05:33:05.213063 1 shared_informer.go:247] Caches are synced for TTL after finished I0202 05:33:05.213111 1 shared_informer.go:247] Caches are synced for job I0202 05:33:05.214285 1 shared_informer.go:247] Caches are synced for expand I0202 05:33:05.215572 1 shared_informer.go:247] Caches are synced for cronjob I0202 05:33:05.217025 1 shared_informer.go:247] Caches are synced for crt configmap I0202 05:33:05.219441 1 shared_informer.go:247] Caches are synced for node I0202 05:33:05.219481 1 range_allocator.go:173] Starting range CIDR allocator I0202 05:33:05.219485 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0202 05:33:05.219489 1 shared_informer.go:247] Caches are synced for cidrallocator I0202 05:33:05.223275 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24] I0202 05:33:05.227443 1 shared_informer.go:247] Caches are synced for persistent volume I0202 05:33:05.255070 1 shared_informer.go:247] Caches are synced for ReplicationController I0202 05:33:05.309441 1 shared_informer.go:247] Caches are synced for taint I0202 05:33:05.309568 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: W0202 05:33:05.309711 1 
node_lifecycle_controller.go:1012] Missing timestamp for Node minikube. Assuming now as a timestamp. I0202 05:33:05.309748 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0202 05:33:05.309824 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I0202 05:33:05.309914 1 event.go:294] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0202 05:33:05.362117 1 shared_informer.go:247] Caches are synced for disruption I0202 05:33:05.362141 1 disruption.go:371] Sending events to api server. I0202 05:33:05.380953 1 shared_informer.go:247] Caches are synced for resource quota I0202 05:33:05.418371 1 shared_informer.go:247] Caches are synced for resource quota I0202 05:33:05.465684 1 shared_informer.go:247] Caches are synced for attach detach I0202 05:33:05.836861 1 shared_informer.go:247] Caches are synced for garbage collector I0202 05:33:05.870768 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vlnzp" I0202 05:33:05.889583 1 shared_informer.go:247] Caches are synced for garbage collector I0202 05:33:05.889634 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage
I0202 05:33:05.970142 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 1"
I0202 05:33:06.222143 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-wdrpr"
* 
* ==> kube-proxy [90fbfb345a21] <==
* 
I0202 05:33:07.152227 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0202 05:33:07.152291 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0202 05:33:07.152320 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0202 05:33:07.184846 1 server_others.go:206] "Using iptables Proxier"
I0202 05:33:07.184888 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0202 05:33:07.184896 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0202 05:33:07.184907 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0202 05:33:07.186178 1 server.go:656] "Version info" version="v1.23.1"
I0202 05:33:07.187406 1 config.go:317] "Starting service config controller"
I0202 05:33:07.188450 1 shared_informer.go:240] Waiting for caches to sync for service config
I0202 05:33:07.188815 1 config.go:226] "Starting endpoint slice config controller"
I0202 05:33:07.188881 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0202 05:33:07.289025 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0202 05:33:07.289065 1 shared_informer.go:247] Caches are synced for service config
* 
* ==> kube-scheduler [a42448673b54] <==
* 
I0202 05:32:48.481079 1 serving.go:348] Generated self-signed cert in-memory
W0202 05:32:50.263632 1 requestheader_controller.go:193] Unable to
get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0202 05:32:50.263659 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0202 05:32:50.263666 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous. W0202 05:32:50.263671 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0202 05:32:50.277281 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.1" I0202 05:32:50.280193 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259 I0202 05:32:50.280413 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0202 05:32:50.280478 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0202 05:32:50.280641 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0202 05:32:50.282745 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0202 05:32:50.282950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0202 05:32:50.285073 1 
reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0202 05:32:50.285114 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0202 05:32:50.285274 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0202 05:32:50.285307 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0202 05:32:50.285431 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0202 05:32:50.285457 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0202 05:32:50.285517 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0202 05:32:50.285542 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list 
*v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0202 05:32:50.285579 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0202 05:32:50.285642 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0202 05:32:50.285731 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0202 05:32:50.285758 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0202 05:32:50.285798 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0202 05:32:50.285827 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0202 05:32:50.285907 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot 
list resource "replicasets" in API group "apps" at the cluster scope E0202 05:32:50.285931 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0202 05:32:50.286058 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0202 05:32:50.286091 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0202 05:32:50.286187 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0202 05:32:50.286264 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0202 05:32:50.286294 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0202 05:32:50.286323 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User 
"system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0202 05:32:50.286879 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0202 05:32:50.286998 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0202 05:32:50.287223 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0202 05:32:50.287260 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0202 05:32:50.287520 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0202 05:32:50.287560 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0202 05:32:51.131604 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: 
replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0202 05:32:51.131639 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0202 05:32:51.204160 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0202 05:32:51.204188 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0202 05:32:51.220697 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0202 05:32:51.220726 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0202 05:32:51.243536 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0202 05:32:51.243555 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is 
forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0202 05:32:51.387885 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0202 05:32:51.387936 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope I0202 05:32:51.780665 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Wed 2022-02-02 05:32:35 UTC, end at Wed 2022-02-02 05:33:31 UTC. -- Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.148727 1921 memory_manager.go:168] "Starting memorymanager" policy="None" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.148756 1921 state_mem.go:35] "Initializing new in-memory state store" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.148853 1921 state_mem.go:75] "Updated machine memory state" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.174172 1921 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.174354 1921 plugin_manager.go:114] "Starting Kubelet Plugin Manager" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.175048 1921 kubelet_node_status.go:70] "Attempting to register node" node="minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.182019 1921 kubelet_node_status.go:108] "Node was previously registered" node="minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.182099 1921 
kubelet_node_status.go:73] "Successfully registered node" node="minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.218090 1921 topology_manager.go:200] "Topology Admit Handler" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.218198 1921 topology_manager.go:200] "Topology Admit Handler" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.218224 1921 topology_manager.go:200] "Topology Admit Handler" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.218251 1921 topology_manager.go:200] "Topology Admit Handler" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.267795 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8bdc344ff0000e961009344b94de59c-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"b8bdc344ff0000e961009344b94de59c\") " pod="kube-system/kube-scheduler-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.267838 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3db91997554714e5ece3296773cf98a5-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"3db91997554714e5ece3296773cf98a5\") " pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.267887 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3db91997554714e5ece3296773cf98a5-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"3db91997554714e5ece3296773cf98a5\") " pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.267944 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3db91997554714e5ece3296773cf98a5-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"3db91997554714e5ece3296773cf98a5\") " pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268014 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3db91997554714e5ece3296773cf98a5-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"3db91997554714e5ece3296773cf98a5\") " pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268053 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96be69ce9ff7dc0acff6fda2873a009a-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"96be69ce9ff7dc0acff6fda2873a009a\") " pod="kube-system/kube-apiserver-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268073 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96be69ce9ff7dc0acff6fda2873a009a-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"96be69ce9ff7dc0acff6fda2873a009a\") " pod="kube-system/kube-apiserver-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268097 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3db91997554714e5ece3296773cf98a5-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"3db91997554714e5ece3296773cf98a5\") " pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268122 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96be69ce9ff7dc0acff6fda2873a009a-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"96be69ce9ff7dc0acff6fda2873a009a\") " pod="kube-system/kube-apiserver-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268140 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3db91997554714e5ece3296773cf98a5-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"3db91997554714e5ece3296773cf98a5\") " pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268174 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9d3d310935e5fabe942511eec3e2cd0c-etcd-certs\") pod \"etcd-minikube\" (UID: \"9d3d310935e5fabe942511eec3e2cd0c\") " pod="kube-system/etcd-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268188 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9d3d310935e5fabe942511eec3e2cd0c-etcd-data\") pod \"etcd-minikube\" (UID: \"9d3d310935e5fabe942511eec3e2cd0c\") " pod="kube-system/etcd-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268203 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96be69ce9ff7dc0acff6fda2873a009a-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"96be69ce9ff7dc0acff6fda2873a009a\") " pod="kube-system/kube-apiserver-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268219 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/96be69ce9ff7dc0acff6fda2873a009a-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"96be69ce9ff7dc0acff6fda2873a009a\") " pod="kube-system/kube-apiserver-minikube" Feb 02 05:32:53 minikube kubelet[1921]: I0202 05:32:53.268232 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3db91997554714e5ece3296773cf98a5-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"3db91997554714e5ece3296773cf98a5\") " pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:54 minikube kubelet[1921]: I0202 05:32:54.050488 1921 apiserver.go:52] "Watching apiserver" Feb 02 05:32:54 minikube kubelet[1921]: I0202 05:32:54.277290 1921 reconciler.go:157] "Reconciler: start to sync state" Feb 02 05:32:54 minikube kubelet[1921]: E0202 05:32:54.656338 1921 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube" Feb 02 05:32:54 minikube kubelet[1921]: E0202 05:32:54.860244 1921 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube" Feb 02 05:32:55 minikube kubelet[1921]: E0202 05:32:55.059353 1921 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube" Feb 02 05:32:55 minikube kubelet[1921]: I0202 05:32:55.250826 1921 request.go:665] Waited for 1.097440305s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods Feb 02 05:32:55 minikube kubelet[1921]: E0202 05:32:55.259523 1921 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube" Feb 02 05:33:05 minikube kubelet[1921]: I0202 
05:33:05.257651 1921 kuberuntime_manager.go:1098] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.258128 1921 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.258431 1921 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.316209 1921 topology_manager.go:200] "Topology Admit Handler" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.358620 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/41545868-c965-4682-9b0d-941dfbf3c4b2-tmp\") pod \"storage-provisioner\" (UID: \"41545868-c965-4682-9b0d-941dfbf3c4b2\") " pod="kube-system/storage-provisioner" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.358663 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm8d5\" (UniqueName: \"kubernetes.io/projected/41545868-c965-4682-9b0d-941dfbf3c4b2-kube-api-access-nm8d5\") pod \"storage-provisioner\" (UID: \"41545868-c965-4682-9b0d-941dfbf3c4b2\") " pod="kube-system/storage-provisioner" Feb 02 05:33:05 minikube kubelet[1921]: E0202 05:33:05.462933 1921 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 02 05:33:05 minikube kubelet[1921]: E0202 05:33:05.462960 1921 projected.go:199] Error preparing data for projected volume kube-api-access-nm8d5 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found Feb 02 05:33:05 minikube kubelet[1921]: E0202 05:33:05.463027 1921 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/41545868-c965-4682-9b0d-941dfbf3c4b2-kube-api-access-nm8d5 
podName:41545868-c965-4682-9b0d-941dfbf3c4b2 nodeName:}" failed. No retries permitted until 2022-02-02 05:33:05.962997382 +0000 UTC m=+13.096535944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nm8d5" (UniqueName: "kubernetes.io/projected/41545868-c965-4682-9b0d-941dfbf3c4b2-kube-api-access-nm8d5") pod "storage-provisioner" (UID: "41545868-c965-4682-9b0d-941dfbf3c4b2") : configmap "kube-root-ca.crt" not found Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.873753 1921 topology_manager.go:200] "Topology Admit Handler" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.963065 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3085c59e-ce21-432e-a133-6d1281d8af02-xtables-lock\") pod \"kube-proxy-vlnzp\" (UID: \"3085c59e-ce21-432e-a133-6d1281d8af02\") " pod="kube-system/kube-proxy-vlnzp" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.963376 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3085c59e-ce21-432e-a133-6d1281d8af02-kube-proxy\") pod \"kube-proxy-vlnzp\" (UID: \"3085c59e-ce21-432e-a133-6d1281d8af02\") " pod="kube-system/kube-proxy-vlnzp" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.963467 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3085c59e-ce21-432e-a133-6d1281d8af02-lib-modules\") pod \"kube-proxy-vlnzp\" (UID: \"3085c59e-ce21-432e-a133-6d1281d8af02\") " pod="kube-system/kube-proxy-vlnzp" Feb 02 05:33:05 minikube kubelet[1921]: I0202 05:33:05.963545 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pww7\" (UniqueName: \"kubernetes.io/projected/3085c59e-ce21-432e-a133-6d1281d8af02-kube-api-access-7pww7\") pod \"kube-proxy-vlnzp\" 
(UID: \"3085c59e-ce21-432e-a133-6d1281d8af02\") " pod="kube-system/kube-proxy-vlnzp" Feb 02 05:33:05 minikube kubelet[1921]: E0202 05:33:05.963327 1921 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 02 05:33:05 minikube kubelet[1921]: E0202 05:33:05.963681 1921 projected.go:199] Error preparing data for projected volume kube-api-access-nm8d5 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found Feb 02 05:33:05 minikube kubelet[1921]: E0202 05:33:05.963767 1921 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/41545868-c965-4682-9b0d-941dfbf3c4b2-kube-api-access-nm8d5 podName:41545868-c965-4682-9b0d-941dfbf3c4b2 nodeName:}" failed. No retries permitted until 2022-02-02 05:33:06.963751396 +0000 UTC m=+14.097289966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nm8d5" (UniqueName: "kubernetes.io/projected/41545868-c965-4682-9b0d-941dfbf3c4b2-kube-api-access-nm8d5") pod "storage-provisioner" (UID: "41545868-c965-4682-9b0d-941dfbf3c4b2") : configmap "kube-root-ca.crt" not found Feb 02 05:33:06 minikube kubelet[1921]: E0202 05:33:06.070780 1921 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 02 05:33:06 minikube kubelet[1921]: E0202 05:33:06.070807 1921 projected.go:199] Error preparing data for projected volume kube-api-access-7pww7 for pod kube-system/kube-proxy-vlnzp: configmap "kube-root-ca.crt" not found Feb 02 05:33:06 minikube kubelet[1921]: E0202 05:33:06.070866 1921 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/3085c59e-ce21-432e-a133-6d1281d8af02-kube-api-access-7pww7 podName:3085c59e-ce21-432e-a133-6d1281d8af02 nodeName:}" failed. No retries permitted until 2022-02-02 05:33:06.570849914 +0000 UTC m=+13.704388476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7pww7" (UniqueName: "kubernetes.io/projected/3085c59e-ce21-432e-a133-6d1281d8af02-kube-api-access-7pww7") pod "kube-proxy-vlnzp" (UID: "3085c59e-ce21-432e-a133-6d1281d8af02") : configmap "kube-root-ca.crt" not found Feb 02 05:33:06 minikube kubelet[1921]: I0202 05:33:06.246816 1921 topology_manager.go:200] "Topology Admit Handler" Feb 02 05:33:06 minikube kubelet[1921]: I0202 05:33:06.266039 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/687bea75-c1fd-4e51-b8bc-fdb0ad55b239-config-volume\") pod \"coredns-64897985d-wdrpr\" (UID: \"687bea75-c1fd-4e51-b8bc-fdb0ad55b239\") " pod="kube-system/coredns-64897985d-wdrpr" Feb 02 05:33:06 minikube kubelet[1921]: I0202 05:33:06.266098 1921 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln56z\" (UniqueName: \"kubernetes.io/projected/687bea75-c1fd-4e51-b8bc-fdb0ad55b239-kube-api-access-ln56z\") pod \"coredns-64897985d-wdrpr\" (UID: \"687bea75-c1fd-4e51-b8bc-fdb0ad55b239\") " pod="kube-system/coredns-64897985d-wdrpr" Feb 02 05:33:06 minikube kubelet[1921]: I0202 05:33:06.870569 1921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-wdrpr through plugin: invalid network status for" Feb 02 05:33:07 minikube kubelet[1921]: I0202 05:33:07.276343 1921 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ab4c1e647a627c00cc821067943888e8ade49cefaeff56811c3a28f3200d8bb5" Feb 02 05:33:07 minikube kubelet[1921]: I0202 05:33:07.279941 1921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-wdrpr through plugin: invalid network status for" * * ==> storage-provisioner [2ffd7bf1a973] <== * I0202 05:33:07.423070 1 
storage_provisioner.go:116] Initializing the minikube storage provisioner... I0202 05:33:07.435732 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0202 05:33:07.436181 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0202 05:33:07.441465 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0202 05:33:07.441567 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_7b4f64de-b741-4ed9-925a-a827416d34ae! I0202 05:33:07.441629 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b08c951-cae4-4383-b02e-e9a085640fe1", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_7b4f64de-b741-4ed9-925a-a827416d34ae became leader I0202 05:33:07.541956 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_7b4f64de-b741-4ed9-925a-a827416d34ae!