* 
* ==> Audit <==
* |---------|-----------------|----------|------|---------|-------------------------------|-------------------------------|
| Command | Args            | Profile  | User | Version | Start Time                    | End Time                      |
|---------|-----------------|----------|------|---------|-------------------------------|-------------------------------|
| start   |                 | minikube | me   | v1.24.0 | Mon, 27 Dec 2021 14:52:34 CET | Mon, 27 Dec 2021 15:12:15 CET |
| start   |                 | minikube | me   | v1.24.0 | Tue, 28 Dec 2021 10:12:25 CET | Tue, 28 Dec 2021 10:12:33 CET |
| addons  | disable ingress | minikube | me   | v1.24.0 | Tue, 28 Dec 2021 10:18:22 CET | Tue, 28 Dec 2021 10:18:22 CET |
| addons  | disable ingress | minikube | me   | v1.24.0 | Tue, 28 Dec 2021 10:54:45 CET | Tue, 28 Dec 2021 10:54:45 CET |
|---------|-----------------|----------|------|---------|-------------------------------|-------------------------------|
* 
* ==> Last Start <==
* Log file created at: 2021/12/28 10:12:25
Running on machine: asus
Binary: Built with gc go1.17.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1228 10:12:25.152508 78229 out.go:297] Setting OutFile to fd 1 ...
I1228 10:12:25.153048 78229 out.go:349] isatty.IsTerminal(1) = true
I1228 10:12:25.153060 78229 out.go:310] Setting ErrFile to fd 2...
I1228 10:12:25.153075 78229 out.go:349] isatty.IsTerminal(2) = true
I1228 10:12:25.153385 78229 root.go:313] Updating PATH: /home/me/.minikube/bin
W1228 10:12:25.153651 78229 root.go:291] Error reading config file at /home/me/.minikube/config/config.json: open /home/me/.minikube/config/config.json: no such file or directory
I1228 10:12:25.154000 78229 out.go:304] Setting JSON to false
I1228 10:12:25.170812 78229 start.go:112] hostinfo: {"hostname":"asus","uptime":69724,"bootTime":1640613021,"procs":442,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"35","kernelVersion":"5.15.10-200.fc35.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"ce0c75a6-5a31-4eb8-8479-aeea01d575b9"}
I1228 10:12:25.171062 78229 start.go:122] virtualization: kvm host
I1228 10:12:25.179431 78229 out.go:176] 😄  minikube v1.24.0 on Fedora 35
I1228 10:12:25.179888 78229 notify.go:174] Checking for updates...
I1228 10:12:25.181311 78229 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1228 10:12:25.181417 78229 driver.go:343] Setting default libvirt URI to qemu:///system
I1228 10:12:25.690499 78229 docker.go:132] docker version: linux-20.10.12
I1228 10:12:25.690673 78229 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1228 10:12:26.146407 78229 info.go:263] docker info: {ID:I5NU:MO65:VDA6:IQFP:J46S:2MUY:T2LA:NATJ:SB74:JRCL:PCCX:CQMW Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:23 Driver:overlay2 DriverStatus:[[Backing Filesystem btrfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:42 SystemTime:2021-12-28 10:12:25.760508733 +0100 CET LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.15.10-200.fc35.x86_64 OperatingSystem:Fedora Linux 35 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:8201457664 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:asus Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}}
I1228 10:12:26.146651 78229 docker.go:237] overlay module found
I1228 10:12:26.152452 78229 out.go:176] ✨  Using the docker driver based on existing profile
I1228 10:12:26.152538 78229 start.go:280] selected driver: docker
I1228 10:12:26.152552 78229 start.go:762] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[IngressController:ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef KubeWebhookCertgenCreate:k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 KubeWebhookCertgenPatch:k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/me:/minikube-host}
I1228 10:12:26.152771 78229 start.go:773] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I1228 10:12:26.153262 78229 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1228 10:12:26.398771 78229 info.go:263] docker info: {ID:I5NU:MO65:VDA6:IQFP:J46S:2MUY:T2LA:NATJ:SB74:JRCL:PCCX:CQMW Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:23 Driver:overlay2 DriverStatus:[[Backing Filesystem btrfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:42 SystemTime:2021-12-28 10:12:26.233509818 +0100 CET LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.15.10-200.fc35.x86_64 OperatingSystem:Fedora Linux 35 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:8201457664 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:asus Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}}
I1228 10:12:26.412162 78229 cni.go:93] Creating CNI manager for ""
I1228 10:12:26.412178 78229 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1228 10:12:26.412188 78229 start_flags.go:282] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[IngressController:ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef KubeWebhookCertgenCreate:k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 KubeWebhookCertgenPatch:k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/me:/minikube-host}
I1228 10:12:26.420718 78229 out.go:176] 👍  Starting control plane node minikube in cluster minikube
I1228 10:12:26.420808 78229 cache.go:118] Beginning downloading kic base image for docker with docker
I1228 10:12:26.423215 78229 out.go:176] 🚜  Pulling base image ...
I1228 10:12:26.423276 78229 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1228 10:12:26.423334 78229 preload.go:148] Found local preload: /home/me/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4
I1228 10:12:26.423348 78229 cache.go:57] Caching tarball of preloaded images
I1228 10:12:26.423379 78229 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1228 10:12:26.423721 78229 preload.go:174] Found /home/me/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1228 10:12:26.423768 78229 cache.go:60] Finished verifying existence of preloaded tar for v1.22.3 on docker
I1228 10:12:26.423923 78229 profile.go:147] Saving config to /home/me/.minikube/profiles/minikube/config.json ...
I1228 10:12:26.545028 78229 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1228 10:12:26.545066 78229 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1228 10:12:26.545085 78229 cache.go:206] Successfully downloaded all kic artifacts
I1228 10:12:26.545123 78229 start.go:313] acquiring machines lock for minikube: {Name:mkdd194a767853f46d15047b5fcc66a09539b07a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1228 10:12:26.545333 78229 start.go:317] acquired machines lock for "minikube" in 182.024µs
I1228 10:12:26.545370 78229 start.go:93] Skipping create...Using existing machine configuration
I1228 10:12:26.545380 78229 fix.go:55] fixHost starting: 
I1228 10:12:26.546087 78229 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1228 10:12:26.627662 78229 fix.go:108] recreateIfNeeded on minikube: state=Running err=
W1228 10:12:26.627754 78229 fix.go:134] unexpected machine state, will restart: 
I1228 10:12:26.634091 78229 out.go:176] 🏃  Updating the running docker "minikube" container ...
I1228 10:12:26.634141 78229 machine.go:88] provisioning docker machine ...
I1228 10:12:26.634162 78229 ubuntu.go:169] provisioning hostname "minikube"
I1228 10:12:26.634242 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:26.708720 78229 main.go:130] libmachine: Using SSH client type: native
I1228 10:12:26.713977 78229 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0280] 0x7a3360 [] 0s} 127.0.0.1 49157 }
I1228 10:12:26.713993 78229 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1228 10:12:26.979272 78229 main.go:130] libmachine: SSH cmd err, output: : minikube
I1228 10:12:26.979389 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:27.031268 78229 main.go:130] libmachine: Using SSH client type: native
I1228 10:12:27.031481 78229 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0280] 0x7a3360 [] 0s} 127.0.0.1 49157 }
I1228 10:12:27.031498 78229 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I1228 10:12:27.173031 78229 main.go:130] libmachine: SSH cmd err, output: : 
I1228 10:12:27.173067 78229 ubuntu.go:175] set auth options {CertDir:/home/me/.minikube CaCertPath:/home/me/.minikube/certs/ca.pem CaPrivateKeyPath:/home/me/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/me/.minikube/machines/server.pem ServerKeyPath:/home/me/.minikube/machines/server-key.pem ClientKeyPath:/home/me/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/me/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/me/.minikube}
I1228 10:12:27.173117 78229 ubuntu.go:177] setting up certificates
I1228 10:12:27.173133 78229 provision.go:83] configureAuth start
I1228 10:12:27.173271 78229 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1228 10:12:27.229131 78229 provision.go:138] copyHostCerts
I1228 10:12:27.229197 78229 exec_runner.go:144] found /home/me/.minikube/ca.pem, removing ...
I1228 10:12:27.229205 78229 exec_runner.go:207] rm: /home/me/.minikube/ca.pem
I1228 10:12:27.229623 78229 exec_runner.go:151] cp: /home/me/.minikube/certs/ca.pem --> /home/me/.minikube/ca.pem (1066 bytes)
I1228 10:12:27.229836 78229 exec_runner.go:144] found /home/me/.minikube/cert.pem, removing ...
I1228 10:12:27.229840 78229 exec_runner.go:207] rm: /home/me/.minikube/cert.pem
I1228 10:12:27.229873 78229 exec_runner.go:151] cp: /home/me/.minikube/certs/cert.pem --> /home/me/.minikube/cert.pem (1111 bytes)
I1228 10:12:27.230059 78229 exec_runner.go:144] found /home/me/.minikube/key.pem, removing ...
I1228 10:12:27.230063 78229 exec_runner.go:207] rm: /home/me/.minikube/key.pem
I1228 10:12:27.230098 78229 exec_runner.go:151] cp: /home/me/.minikube/certs/key.pem --> /home/me/.minikube/key.pem (1679 bytes)
I1228 10:12:27.230181 78229 provision.go:112] generating server cert: /home/me/.minikube/machines/server.pem ca-key=/home/me/.minikube/certs/ca.pem private-key=/home/me/.minikube/certs/ca-key.pem org=me.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I1228 10:12:27.491130 78229 provision.go:172] copyRemoteCerts
I1228 10:12:27.491171 78229 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1228 10:12:27.491199 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:27.528948 78229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/me/.minikube/machines/minikube/id_rsa Username:docker}
I1228 10:12:27.622641 78229 ssh_runner.go:319] scp /home/me/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1066 bytes)
I1228 10:12:27.659559 78229 ssh_runner.go:319] scp /home/me/.minikube/machines/server.pem --> /etc/docker/server.pem (1188 bytes)
I1228 10:12:27.692335 78229 ssh_runner.go:319] scp /home/me/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1228 10:12:27.726305 78229 provision.go:86] duration metric: configureAuth took 553.148248ms
I1228 10:12:27.726345 78229 ubuntu.go:193] setting minikube options for container-runtime
I1228 10:12:27.726833 78229 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1228 10:12:27.726960 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:27.773361 78229 main.go:130] libmachine: Using SSH client type: native
I1228 10:12:27.773478 78229 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0280] 0x7a3360 [] 0s} 127.0.0.1 49157 }
I1228 10:12:27.773485 78229 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1228 10:12:27.901895 78229 main.go:130] libmachine: SSH cmd err, output: : overlay
I1228 10:12:27.901908 78229 ubuntu.go:71] root file system type: overlay
I1228 10:12:27.902070 78229 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1228 10:12:27.902133 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:27.950922 78229 main.go:130] libmachine: Using SSH client type: native
I1228 10:12:27.951378 78229 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0280] 0x7a3360 [] 0s} 127.0.0.1 49157 }
I1228 10:12:27.951465 78229 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1228 10:12:28.143067 78229 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1228 10:12:28.143158 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:28.194552 78229 main.go:130] libmachine: Using SSH client type: native
I1228 10:12:28.195022 78229 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a0280] 0x7a3360 [] 0s} 127.0.0.1 49157 }
I1228 10:12:28.195061 78229 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1228 10:12:28.336347 78229 main.go:130] libmachine: SSH cmd err, output: : 
I1228 10:12:28.336366 78229 machine.go:91] provisioned docker machine in 1.702217986s
I1228 10:12:28.336377 78229 start.go:267] post-start starting for "minikube" (driver="docker")
I1228 10:12:28.336387 78229 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1228 10:12:28.336524 78229 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1228 10:12:28.336598 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:28.386818 78229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/me/.minikube/machines/minikube/id_rsa Username:docker}
I1228 10:12:28.485370 78229 ssh_runner.go:152] Run: cat /etc/os-release
I1228 10:12:28.495440 78229 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1228 10:12:28.495466 78229 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1228 10:12:28.495479 78229 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1228 10:12:28.495484 78229 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I1228 10:12:28.495492 78229 filesync.go:126] Scanning /home/me/.minikube/addons for local assets ...
I1228 10:12:28.495588 78229 filesync.go:126] Scanning /home/me/.minikube/files for local assets ...
I1228 10:12:28.495610 78229 start.go:270] post-start completed in 159.225291ms
I1228 10:12:28.495655 78229 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1228 10:12:28.495698 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:28.542106 78229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/me/.minikube/machines/minikube/id_rsa Username:docker}
I1228 10:12:28.635863 78229 fix.go:57] fixHost completed within 2.090483453s
I1228 10:12:28.635875 78229 start.go:80] releasing machines lock for "minikube", held for 2.090531584s
I1228 10:12:28.635954 78229 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1228 10:12:28.677249 78229 ssh_runner.go:152] Run: systemctl --version
I1228 10:12:28.677302 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:28.677306 78229 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I1228 10:12:28.677372 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:28.733911 78229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/me/.minikube/machines/minikube/id_rsa Username:docker}
I1228 10:12:28.734166 78229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/me/.minikube/machines/minikube/id_rsa Username:docker}
I1228 10:12:29.337628 78229 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I1228 10:12:29.392327 78229 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I1228 10:12:29.408755 78229 cruntime.go:255] skipping containerd shutdown because we are bound to it
I1228 10:12:29.408822 78229 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I1228 10:12:29.423624 78229 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1228 10:12:29.446541 78229 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I1228 10:12:29.657571 78229 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I1228 10:12:29.831779 78229 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I1228 10:12:29.849177 78229 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I1228 10:12:30.039158 78229 ssh_runner.go:152] Run: sudo systemctl start docker
I1228 10:12:30.054343 78229 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I1228 10:12:30.254729 78229 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I1228 10:12:30.317851 78229 out.go:203] 🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
I1228 10:12:30.317955 78229 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I1228 10:12:30.375347 78229 ssh_runner.go:152] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I1228 10:12:30.380172 78229 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker I1228 10:12:30.380252 78229 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}} I1228 10:12:30.424387 78229 docker.go:558] Got preloaded images: -- stdout -- k8s.gcr.io/ingress-nginx/controller: k8s.gcr.io/kube-apiserver:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3 k8s.gcr.io/ingress-nginx/controller: k8s.gcr.io/ingress-nginx/kube-webhook-certgen: kubernetesui/dashboard:v2.3.1 k8s.gcr.io/etcd:3.5.0-0 kubernetesui/metrics-scraper:v1.0.7 k8s.gcr.io/coredns/coredns:v1.8.4 gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/pause:3.5 -- /stdout -- I1228 10:12:30.424402 78229 docker.go:489] Images already preloaded, skipping extraction I1228 10:12:30.424452 78229 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}} I1228 10:12:30.472546 78229 docker.go:558] Got preloaded images: -- stdout -- k8s.gcr.io/ingress-nginx/controller: k8s.gcr.io/kube-apiserver:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3 k8s.gcr.io/ingress-nginx/controller: k8s.gcr.io/ingress-nginx/kube-webhook-certgen: kubernetesui/dashboard:v2.3.1 k8s.gcr.io/etcd:3.5.0-0 kubernetesui/metrics-scraper:v1.0.7 k8s.gcr.io/coredns/coredns:v1.8.4 
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5

-- /stdout --
I1228 10:12:30.472585 78229 cache_images.go:79] Images are preloaded, skipping loading
I1228 10:12:30.472657 78229 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I1228 10:12:30.927256 78229 cni.go:93] Creating CNI manager for ""
I1228 10:12:30.927267 78229 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1228 10:12:30.927273 78229 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1228 10:12:30.927283 78229 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1228 10:12:30.927412 78229 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I1228 10:12:30.927505 78229 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube
--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I1228 10:12:30.927567 78229 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3 I1228 10:12:30.973685 78229 binaries.go:44] Found k8s binaries, skipping transfer I1228 10:12:30.973820 78229 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I1228 10:12:30.987307 78229 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) I1228 10:12:31.022363 78229 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I1228 10:12:31.089716 78229 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes) I1228 10:12:31.156036 78229 ssh_runner.go:152] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I1228 10:12:31.175010 78229 certs.go:54] Setting up /home/me/.minikube/profiles/minikube for IP: 192.168.49.2 I1228 10:12:31.175230 78229 certs.go:182] skipping minikubeCA CA generation: /home/me/.minikube/ca.key I1228 10:12:31.175833 78229 certs.go:182] skipping proxyClientCA CA generation: /home/me/.minikube/proxy-client-ca.key I1228 10:12:31.176103 78229 certs.go:298] skipping minikube-user signed cert generation: /home/me/.minikube/profiles/minikube/client.key I1228 10:12:31.177109 78229 certs.go:298] skipping minikube signed cert generation: /home/me/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I1228 10:12:31.177358 78229 certs.go:298] skipping aggregator signed cert generation: 
/home/me/.minikube/profiles/minikube/proxy-client.key I1228 10:12:31.177697 78229 certs.go:388] found cert: /home/me/.minikube/certs/home/me/.minikube/certs/ca-key.pem (1679 bytes) I1228 10:12:31.177851 78229 certs.go:388] found cert: /home/me/.minikube/certs/home/me/.minikube/certs/ca.pem (1066 bytes) I1228 10:12:31.177937 78229 certs.go:388] found cert: /home/me/.minikube/certs/home/me/.minikube/certs/cert.pem (1111 bytes) I1228 10:12:31.178006 78229 certs.go:388] found cert: /home/me/.minikube/certs/home/me/.minikube/certs/key.pem (1679 bytes) I1228 10:12:31.181532 78229 ssh_runner.go:319] scp /home/me/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I1228 10:12:31.269305 78229 ssh_runner.go:319] scp /home/me/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I1228 10:12:31.356651 78229 ssh_runner.go:319] scp /home/me/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I1228 10:12:31.447612 78229 ssh_runner.go:319] scp /home/me/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I1228 10:12:31.539606 78229 ssh_runner.go:319] scp /home/me/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I1228 10:12:31.581085 78229 ssh_runner.go:319] scp /home/me/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I1228 10:12:31.613847 78229 ssh_runner.go:319] scp /home/me/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I1228 10:12:31.644440 78229 ssh_runner.go:319] scp /home/me/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I1228 10:12:31.678798 78229 ssh_runner.go:319] scp /home/me/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I1228 10:12:31.708975 78229 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I1228 
10:12:31.731045 78229 ssh_runner.go:152] Run: openssl version I1228 10:12:31.742277 78229 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I1228 10:12:31.756842 78229 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I1228 10:12:31.764045 78229 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 27 14:12 /usr/share/ca-certificates/minikubeCA.pem I1228 10:12:31.764097 78229 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I1228 10:12:31.772251 78229 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I1228 10:12:31.784391 78229 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] 
ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[IngressController:ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef KubeWebhookCertgenCreate:k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 KubeWebhookCertgenPatch:k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/me:/minikube-host} I1228 10:12:31.784583 78229 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I1228 10:12:31.826993 78229 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I1228 10:12:31.837721 78229 kubeadm.go:401] found existing configuration files, will attempt cluster restart I1228 10:12:31.837754 78229 kubeadm.go:600] restartCluster start I1228 10:12:31.837874 78229 ssh_runner.go:152] Run: sudo test -d /data/minikube I1228 10:12:31.847250 78229 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I1228 10:12:31.848126 78229 kubeconfig.go:92] found "minikube" server: "https://192.168.49.2:8443" I1228 10:12:31.850622 78229 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I1228 10:12:31.863399 78229 api_server.go:165] Checking apiserver status ... 
I1228 10:12:31.863498 78229 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I1228 10:12:31.920295 78229 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup W1228 10:12:31.933269 78229 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup: Process exited with status 1 stdout: stderr: I1228 10:12:31.933280 78229 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I1228 10:12:31.943988 78229 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I1228 10:12:31.988397 78229 system_pods.go:86] 7 kube-system pods found I1228 10:12:31.988440 78229 system_pods.go:89] "coredns-78fcd69978-csp7c" [9242d5db-4177-4a77-9aed-1768eb2f0336] Running I1228 10:12:31.988448 78229 system_pods.go:89] "etcd-minikube" [d72b1d4c-bb22-4244-bb0c-9777ff220bcd] Running I1228 10:12:31.988453 78229 system_pods.go:89] "kube-apiserver-minikube" [21d4592f-a853-4474-914a-f35aa9426dca] Running I1228 10:12:31.988458 78229 system_pods.go:89] "kube-controller-manager-minikube" [3a84cfd7-4cb4-4c6f-b89d-1509ae46c49c] Running I1228 10:12:31.988464 78229 system_pods.go:89] "kube-proxy-6nd2w" [11bf1a7c-8dcb-48e9-aab5-e21b0091e6d7] Running I1228 10:12:31.988469 78229 system_pods.go:89] "kube-scheduler-minikube" [930ce4f2-21e6-4be5-89c1-d5dd0c88e550] Running I1228 10:12:31.988480 78229 system_pods.go:89] "storage-provisioner" [2738977c-e5be-4b15-bc67-08420ffc97eb] Running I1228 10:12:32.000336 78229 api_server.go:140] control plane version: v1.22.3 I1228 10:12:32.000352 78229 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2 I1228 10:12:32.000358 78229 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured I1228 10:12:32.000362 78229 kubeadm.go:604] restartCluster took 162.604366ms I1228 10:12:32.000367 78229 kubeadm.go:392] StartCluster complete in 215.990538ms I1228 10:12:32.000379 78229 settings.go:142] acquiring lock: 
{Name:mk14a2280a6b8ec984f7279cd585ccf0881240f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1228 10:12:32.000466 78229 settings.go:150] Updating kubeconfig: /home/me/.kube/config
I1228 10:12:32.002036 78229 lock.go:35] WriteFile acquiring /home/me/.kube/config: {Name:mk40aea569c5cc1240d02dc67cf4e21e39bd0d71 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1228 10:12:32.032371 78229 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I1228 10:12:32.032513 78229 start.go:229] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I1228 10:12:32.038259 78229 out.go:176] 🔎 Verifying Kubernetes components...
I1228 10:12:32.032642 78229 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1228 10:12:32.033003 78229 addons.go:415] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I1228 10:12:32.038534 78229 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I1228 10:12:32.033283 78229 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1228 10:12:32.038580 78229 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I1228 10:12:32.038677 78229 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W1228 10:12:32.038700 78229 addons.go:165] addon storage-provisioner should already be in state true
I1228 10:12:32.038704 78229 addons.go:65] Setting default-storageclass=true in profile "minikube"
I1228 10:12:32.038801 78229 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1228 10:12:32.038828 78229 host.go:66] Checking if "minikube" exists ...
I1228 10:12:32.039776 78229 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1228 10:12:32.040155 78229 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1228 10:12:32.179816 78229 out.go:176]    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1228 10:12:32.180043 78229 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1228 10:12:32.180053 78229 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1228 10:12:32.180121 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:32.199820 78229 addons.go:153] Setting addon default-storageclass=true in "minikube"
W1228 10:12:32.199861 78229 addons.go:165] addon default-storageclass should already be in state true
I1228 10:12:32.199905 78229 host.go:66] Checking if "minikube" exists ...
I1228 10:12:32.200869 78229 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1228 10:12:32.272945 78229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/me/.minikube/machines/minikube/id_rsa Username:docker}
I1228 10:12:32.290520 78229 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I1228 10:12:32.290531 78229 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1228 10:12:32.290586 78229 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1228 10:12:32.349185 78229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/me/.minikube/machines/minikube/id_rsa Username:docker}
I1228 10:12:32.409904 78229 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1228 10:12:32.464109 78229
ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1228 10:12:32.599798 78229 start.go:719] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1228 10:12:32.599826 78229 api_server.go:51] waiting for apiserver process to appear ...
I1228 10:12:32.599877 78229 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1228 10:12:32.883145 78229 api_server.go:71] duration metric: took 850.561213ms to wait for apiserver process to appear ...
I1228 10:12:32.883202 78229 api_server.go:87] waiting for apiserver healthz status ...
I1228 10:12:32.883219 78229 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I1228 10:12:32.887413 78229 out.go:176] 🌟 Enabled addons: storage-provisioner, default-storageclass
I1228 10:12:32.887461 78229 addons.go:417] enableAddons completed in 854.782588ms
I1228 10:12:32.889961 78229 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok
I1228 10:12:32.890688 78229 api_server.go:140] control plane version: v1.22.3
I1228 10:12:32.890696 78229 api_server.go:130] duration metric: took 7.489321ms to wait for apiserver health ...
I1228 10:12:32.890701 78229 system_pods.go:43] waiting for kube-system pods to appear ...
I1228 10:12:32.896445 78229 system_pods.go:59] 7 kube-system pods found
I1228 10:12:32.896484 78229 system_pods.go:61] "coredns-78fcd69978-csp7c" [9242d5db-4177-4a77-9aed-1768eb2f0336] Running
I1228 10:12:32.896489 78229 system_pods.go:61] "etcd-minikube" [d72b1d4c-bb22-4244-bb0c-9777ff220bcd] Running
I1228 10:12:32.896494 78229 system_pods.go:61] "kube-apiserver-minikube" [21d4592f-a853-4474-914a-f35aa9426dca] Running
I1228 10:12:32.896498 78229 system_pods.go:61] "kube-controller-manager-minikube" [3a84cfd7-4cb4-4c6f-b89d-1509ae46c49c] Running
I1228 10:12:32.896502 78229 system_pods.go:61] "kube-proxy-6nd2w" [11bf1a7c-8dcb-48e9-aab5-e21b0091e6d7] Running
I1228 10:12:32.896506 78229 system_pods.go:61] "kube-scheduler-minikube" [930ce4f2-21e6-4be5-89c1-d5dd0c88e550] Running
I1228 10:12:32.896509 78229 system_pods.go:61] "storage-provisioner" [2738977c-e5be-4b15-bc67-08420ffc97eb] Running
I1228 10:12:32.896532 78229 system_pods.go:74] duration metric: took 5.824927ms to wait for pod list to return data ...
I1228 10:12:32.896545 78229 kubeadm.go:547] duration metric: took 863.978951ms to wait for : map[apiserver:true system_pods:true] ...
I1228 10:12:32.896570 78229 node_conditions.go:102] verifying NodePressure condition ...
I1228 10:12:32.904005 78229 node_conditions.go:122] node storage ephemeral capacity is 165676Mi
I1228 10:12:32.904026 78229 node_conditions.go:123] node cpu capacity is 8
I1228 10:12:32.904038 78229 node_conditions.go:105] duration metric: took 7.462845ms to run NodePressure ...
I1228 10:12:32.904051 78229 start.go:234] waiting for startup goroutines ...
I1228 10:12:33.036809 78229 start.go:473] kubectl: 1.23.1, cluster: 1.22.3 (minor skew: 1)
I1228 10:12:33.039291 78229 out.go:176] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Mon 2021-12-27 14:11:51 UTC, end at Tue 2021-12-28 11:04:18 UTC.
-- Dec 27 14:11:51 minikube systemd[1]: Starting Docker Application Container Engine... Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.146641016Z" level=info msg="Starting up" Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.148268486Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.148295172Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.148335297Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.148352689Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.336397603Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.336512063Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.336610744Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.336771610Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.705121330Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.730828706Z" level=info msg="Loading containers: start." Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.835713272Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 27 14:11:52 minikube dockerd[145]: time="2021-12-27T14:11:52.912879263Z" level=info msg="Loading containers: done." Dec 27 14:11:53 minikube dockerd[145]: time="2021-12-27T14:11:53.414459861Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8 Dec 27 14:11:53 minikube dockerd[145]: time="2021-12-27T14:11:53.414547150Z" level=info msg="Daemon has completed initialization" Dec 27 14:11:53 minikube systemd[1]: Started Docker Application Container Engine. Dec 27 14:11:53 minikube dockerd[145]: time="2021-12-27T14:11:53.499610076Z" level=info msg="API listen on /run/docker.sock" Dec 27 14:11:57 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. Dec 27 14:11:57 minikube systemd[1]: Stopping Docker Application Container Engine... Dec 27 14:11:57 minikube dockerd[145]: time="2021-12-27T14:11:57.759805400Z" level=info msg="Processing signal 'terminated'" Dec 27 14:11:57 minikube dockerd[145]: time="2021-12-27T14:11:57.760738659Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Dec 27 14:11:57 minikube dockerd[145]: time="2021-12-27T14:11:57.767128675Z" level=info msg="Daemon shutdown complete" Dec 27 14:11:57 minikube systemd[1]: docker.service: Succeeded. Dec 27 14:11:57 minikube systemd[1]: Stopped Docker Application Container Engine. Dec 27 14:11:57 minikube systemd[1]: Starting Docker Application Container Engine... 
Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.845831479Z" level=info msg="Starting up" Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.847234583Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.847252738Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.847277404Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.847291502Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.848038015Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.848057421Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.848069181Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.848080994Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.963928688Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Dec 27 14:11:57 minikube dockerd[381]: time="2021-12-27T14:11:57.979154504Z" level=info msg="Loading containers: start." Dec 27 14:11:58 minikube dockerd[381]: time="2021-12-27T14:11:58.300551866Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 27 14:11:58 minikube dockerd[381]: time="2021-12-27T14:11:58.397505550Z" level=info msg="Loading containers: done." Dec 27 14:11:58 minikube dockerd[381]: time="2021-12-27T14:11:58.521046847Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8 Dec 27 14:11:58 minikube dockerd[381]: time="2021-12-27T14:11:58.521261403Z" level=info msg="Daemon has completed initialization" Dec 27 14:11:58 minikube systemd[1]: Started Docker Application Container Engine. Dec 27 14:11:58 minikube dockerd[381]: time="2021-12-27T14:11:58.565390071Z" level=info msg="API listen on [::]:2376" Dec 27 14:11:58 minikube dockerd[381]: time="2021-12-27T14:11:58.571121621Z" level=info msg="API listen on /var/run/docker.sock" Dec 27 14:31:24 minikube dockerd[381]: time="2021-12-27T14:31:24.891755580Z" level=warning msg="reference for unknown type: " digest="sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660" remote="k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660" Dec 27 14:31:59 minikube dockerd[381]: time="2021-12-27T14:31:59.570636162Z" level=info msg="ignoring event" container=a837b5da2a9fce0842f9a5f13950d9ce7a5b4f04380cdf37184fedb4f327ac3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 27 14:31:59 minikube dockerd[381]: time="2021-12-27T14:31:59.687480648Z" level=info msg="ignoring event" container=2444ba91c2b4c92afd1abb36b04653d7cd6e6c27cab63be4376327c8410c58cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 27 14:32:00 minikube dockerd[381]: time="2021-12-27T14:32:00.192548412Z" level=info msg="ignoring event" container=017e9e185481be7264966482265d2a12ac95ae4b30bc500fd00e0033a294efa0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 27 14:32:00 minikube dockerd[381]: 
time="2021-12-27T14:32:00.700710195Z" level=info msg="ignoring event" container=7021a68087b177e5a3dcd133770193249e8929c31309553056e05eb1ef8068c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 27 14:32:06 minikube dockerd[381]: time="2021-12-27T14:32:06.555849937Z" level=warning msg="reference for unknown type: " digest="sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef" remote="k8s.gcr.io/ingress-nginx/controller@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef" Dec 27 14:36:55 minikube dockerd[381]: time="2021-12-27T14:36:55.809562576Z" level=warning msg="reference for unknown type: " digest="sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a" remote="k8s.gcr.io/ingress-nginx/controller@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a" Dec 27 14:38:25 minikube dockerd[381]: time="2021-12-27T14:38:25.197710715Z" level=info msg="ignoring event" container=d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 27 14:38:25 minikube dockerd[381]: time="2021-12-27T14:38:25.374803822Z" level=info msg="ignoring event" container=ba357df150a7409196e1a203928036d39d53f0433fc7bb1f3353fdc7e2e3ab89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 488e97f6f69da k8s.gcr.io/ingress-nginx/controller@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef 20 hours ago Running controller 0 0856cd59daf7b 017e9e185481b k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 21 hours ago Exited patch 0 7021a68087b17 a837b5da2a9fc k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 21 hours ago Exited create 0 
2444ba91c2b4c   6e38f40d628db   21 hours ago   Running   storage-provisioner       0   a5f2c053b5345
98cfde5d5b4b2   8d147537fb7d1   21 hours ago   Running   coredns                   0   1b96426117c71
2b02efeb2bc30   6120bd723dced   21 hours ago   Running   kube-proxy                0   a028135da7e0a
1dff91e080d10   0aa9c7e31d307   21 hours ago   Running   kube-scheduler            0   62d5d72a31ce5
aa78a9fc27be2   53224b502ea4d   21 hours ago   Running   kube-apiserver            0   314fff3ba1982
75ffbaad6d654   05c905cef780c   21 hours ago   Running   kube-controller-manager   0   87e9ddd34ac21
3ef0c9c2a1066   0048118155842   21 hours ago   Running   etcd                      0   5d2e6f3b0ee2d

*
* ==> coredns [98cfde5d5b4b] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5

*
* ==> describe nodes <==
* Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_12_27T15_12_13_0700
                    minikube.k8s.io/version=v1.24.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 27 Dec 2021 14:12:10 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Tue, 28 Dec 2021 11:04:09 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 28 Dec 2021 11:02:27 +0000   Mon, 27 Dec 2021 14:12:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 28 Dec 2021 11:02:27 +0000   Mon, 27 Dec 2021 14:12:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 28 Dec 2021 11:02:27 +0000   Mon, 27 Dec 2021 14:12:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 28 Dec 2021 11:02:27 +0000   Mon, 27 Dec 2021 14:12:10 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  165676Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8009236Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  165676Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8009236Ki
  pods:               110
System Info:
  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
  System UUID:                199c1f1f-b05b-41a1-8b9e-524a557e8fdd
  Boot ID:                    c46bfdc5-4915-4751-a0e4-c6f9aa3a5751
  Kernel Version:             5.15.10-200.fc35.x86_64
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.22.3
  Kube-Proxy Version:         v1.22.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace      Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------      ----                                        ------------  ----------  ---------------  -------------  ---
  ingress-nginx  ingress-nginx-controller-5f66978484-4jnjs   100m (1%)     0 (0%)      90Mi (1%)        0 (0%)         20h
  kube-system    coredns-78fcd69978-csp7c                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     20h
  kube-system    etcd-minikube                               100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         20h
  kube-system    kube-apiserver-minikube                     250m (3%)     0 (0%)      0 (0%)           0 (0%)         20h
  kube-system    kube-controller-manager-minikube            200m (2%)     0 (0%)      0 (0%)           0 (0%)         20h
  kube-system    kube-proxy-6nd2w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20h
  kube-system    kube-scheduler-minikube                     100m (1%)     0 (0%)      0 (0%)           0 (0%)         20h
  kube-system    storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  0 (0%)
  memory             260Mi (3%)  170Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:

*
* ==> dmesg <==
* [Dec27 13:50] [Firmware Bug]: TSC ADJUST: CPU0: -16418815 force to 0
[ +0.000000] x86/cpu: SGX disabled by BIOS.
[ +0.191629] [Firmware Bug]: TSC ADJUST differs within socket(s), fixing all errors
[ +0.196050] #2 #3 #4
[ +0.008063] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.000000] #5 #6 #7
[ +0.006825] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ +0.474005] hpet_acpi_add: no address or irqs in _CRS
[ +0.049792] i8042: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
[ +0.002792] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.001376] efifb: Ignoring BGRT: unexpected or invalid BMP data
[ +0.697867] ipmi_si: Unable to find any System Interface(s)
[ +2.346603] sd 3:0:0:0: [sdb] Optimal transfer size 33553920 bytes not a multiple of physical block size (4096 bytes)
[ +0.012058] systemd-sysv-generator[573]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.000054] systemd-sysv-generator[573]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.267602] ipmi_si: Unable to find any System Interface(s)
[ +0.002316] systemd-journald[593]: File /run/log/journal/ce0c75a65a314eb88479aeea01d575b9/system.journal corrupted or uncleanly shut down, renaming and replacing.
[ +0.078805] systemd-journald[593]: File /var/log/journal/ce0c75a65a314eb88479aeea01d575b9/system.journal corrupted or uncleanly shut down, renaming and replacing.
[ +0.901626] thermal thermal_zone6: failed to read out thermal zone (-61)
[ +0.082530] snd_hda_codec_hdmi hdaudioC0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[ +0.294054] kauditd_printk_skb: 99 callbacks suppressed
[ +0.969254] Bluetooth: hci0: Reading supported features failed (-16)
[ +15.605214] systemd-journald[593]: File /var/log/journal/ce0c75a65a314eb88479aeea01d575b9/user-1000.journal corrupted or uncleanly shut down, renaming and replacing.
[ +2.255426] ntfs3: Unknown parameter 'windows_names'
[ +0.000767] ntfs3: Unknown parameter 'windows_names'
[ +0.021105] ntfs3: Unknown parameter 'windows_names'
[Jul18 13:24] [Firmware Bug]: TSC ADJUST differs: CPU0 0 --> -15980837. Restoring
[Dec27 15:19] done.
[ +1.696265] Bluetooth: hci0: Reading supported features failed (-16)
[ +2.418640] [Firmware Bug]: TSC ADJUST differs: CPU0 0 --> -16611012. Restoring
[Dec27 15:53] done.
[ +1.763269] Bluetooth: hci0: Reading supported features failed (-16)
[Dec27 16:26] show_signal_msg: 2 callbacks suppressed
[Dec27 15:53] [Firmware Bug]: TSC ADJUST differs: CPU0 0 --> -15944134. Restoring
[Dec27 17:14] done.
[ +1.574665] Bluetooth: hci0: Reading supported features failed (-16)
[Dec27 17:15] hrtimer: interrupt took 148565 ns

*
* ==> etcd [3ef0c9c2a106] <==
* {"level":"info","ts":"2021-12-27T20:52:49.674Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8410,"took":"1.518779ms"}
{"level":"info","ts":"2021-12-27T20:57:49.691Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":8658}
{"level":"info","ts":"2021-12-27T20:57:49.693Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8658,"took":"1.89101ms"}
{"level":"info","ts":"2021-12-27T21:02:49.708Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":8907}
{"level":"info","ts":"2021-12-27T21:02:49.712Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8907,"took":"2.248992ms"}
{"level":"info","ts":"2021-12-27T21:07:49.728Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":9157}
{"level":"info","ts":"2021-12-27T21:07:49.731Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":9157,"took":"2.075689ms"}
{"level":"info","ts":"2021-12-28T09:08:58.221Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":9406}
{"level":"info","ts":"2021-12-28T09:08:58.245Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":9406,"took":"22.766196ms"}
{"level":"warn","ts":"2021-12-28T09:11:27.053Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.382318ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-12-28T09:11:27.053Z","caller":"traceutil/trace.go:171","msg":"trace[1229689596] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:9778; }","duration":"107.227751ms","start":"2021-12-28T09:11:26.946Z","end":"2021-12-28T09:11:27.053Z","steps":["trace[1229689596] 'range keys from in-memory index tree' (duration: 105.337409ms)"],"step_count":1}
{"level":"info","ts":"2021-12-28T09:13:58.242Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":9654}
{"level":"info","ts":"2021-12-28T09:13:58.248Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":9654,"took":"4.786792ms"}
{"level":"info","ts":"2021-12-28T09:18:58.258Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":9906}
{"level":"info","ts":"2021-12-28T09:18:58.260Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":9906,"took":"1.289677ms"}
{"level":"info","ts":"2021-12-28T09:23:58.270Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":10153}
{"level":"info","ts":"2021-12-28T09:23:58.273Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":10153,"took":"2.701219ms"}
{"level":"info","ts":"2021-12-28T09:26:45.663Z","caller":"traceutil/trace.go:171","msg":"trace[1343689612] linearizableReadLoop","detail":"{readStateIndex:13031; appliedIndex:13031; }","duration":"115.094383ms","start":"2021-12-28T09:26:45.547Z","end":"2021-12-28T09:26:45.662Z","steps":["trace[1343689612] 'read index received' (duration: 115.075427ms)","trace[1343689612] 'applied index is now lower than readState.Index' (duration: 16.454µs)"],"step_count":2}
{"level":"warn","ts":"2021-12-28T09:26:45.694Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"128.673405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-12-28T09:26:45.694Z","caller":"traceutil/trace.go:171","msg":"trace[240860268] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:10542; }","duration":"146.534659ms","start":"2021-12-28T09:26:45.547Z","end":"2021-12-28T09:26:45.694Z","steps":["trace[240860268] 'agreement among raft nodes before linearized reading' (duration: 115.214477ms)"],"step_count":1}
{"level":"info","ts":"2021-12-28T09:28:58.287Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":10402}
{"level":"info","ts":"2021-12-28T09:28:58.289Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":10402,"took":"1.652897ms"}
{"level":"info","ts":"2021-12-28T09:33:58.302Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":10652}
{"level":"info","ts":"2021-12-28T09:33:58.305Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":10652,"took":"1.744863ms"}
{"level":"info","ts":"2021-12-28T09:38:58.336Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":10901}
{"level":"info","ts":"2021-12-28T09:38:58.338Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":10901,"took":"1.37998ms"}
{"level":"info","ts":"2021-12-28T09:43:58.352Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":11150}
{"level":"info","ts":"2021-12-28T09:43:58.354Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":11150,"took":"1.592494ms"}
{"level":"info","ts":"2021-12-28T09:48:58.373Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":11400}
{"level":"info","ts":"2021-12-28T09:48:58.375Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":11400,"took":"1.802826ms"}
{"level":"info","ts":"2021-12-28T09:53:58.392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":11648}
{"level":"info","ts":"2021-12-28T09:53:58.395Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":11648,"took":"2.31152ms"}
{"level":"info","ts":"2021-12-28T09:58:58.407Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":11897}
{"level":"info","ts":"2021-12-28T09:58:58.409Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":11897,"took":"2.176098ms"}
{"level":"info","ts":"2021-12-28T10:03:58.422Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":12147}
{"level":"info","ts":"2021-12-28T10:03:58.432Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":12147,"took":"9.061292ms"}
{"level":"info","ts":"2021-12-28T10:08:58.438Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":12396}
{"level":"info","ts":"2021-12-28T10:08:58.441Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":12396,"took":"1.950945ms"}
{"level":"info","ts":"2021-12-28T10:13:58.455Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":12645}
{"level":"info","ts":"2021-12-28T10:13:58.457Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":12645,"took":"1.504815ms"}
{"level":"info","ts":"2021-12-28T10:18:58.469Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":12895}
{"level":"info","ts":"2021-12-28T10:18:58.485Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":12895,"took":"15.795375ms"}
{"level":"info","ts":"2021-12-28T10:23:58.485Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":13143}
{"level":"info","ts":"2021-12-28T10:23:58.489Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":13143,"took":"2.58051ms"}
{"level":"info","ts":"2021-12-28T10:28:58.503Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":13393}
{"level":"info","ts":"2021-12-28T10:28:58.505Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":13393,"took":"1.399645ms"}
{"level":"info","ts":"2021-12-28T10:33:58.517Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":13643}
{"level":"info","ts":"2021-12-28T10:33:58.518Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":13643,"took":"510.601µs"}
{"level":"info","ts":"2021-12-28T10:38:58.533Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":13892}
{"level":"info","ts":"2021-12-28T10:38:58.536Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":13892,"took":"1.877183ms"}
{"level":"info","ts":"2021-12-28T10:43:58.545Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":14141}
{"level":"info","ts":"2021-12-28T10:43:58.546Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":14141,"took":"381.474µs"}
{"level":"info","ts":"2021-12-28T10:48:58.562Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":14391}
{"level":"info","ts":"2021-12-28T10:48:58.565Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":14391,"took":"1.966637ms"}
{"level":"info","ts":"2021-12-28T10:53:58.578Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":14640}
{"level":"info","ts":"2021-12-28T10:53:58.580Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":14640,"took":"1.350801ms"}
{"level":"info","ts":"2021-12-28T10:58:58.592Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":14889}
{"level":"info","ts":"2021-12-28T10:58:58.595Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":14889,"took":"1.790902ms"}
{"level":"info","ts":"2021-12-28T11:03:58.608Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":15138}
{"level":"info","ts":"2021-12-28T11:03:58.610Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":15138,"took":"1.363515ms"}

*
* ==> kernel <==
* 11:04:19 up 21:13, 0 users, load average: 0.94, 1.52, 1.41
Linux minikube 5.15.10-200.fc35.x86_64 #1 SMP Fri Dec 17 14:46:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"

*
* ==> kube-apiserver [aa78a9fc27be] <==
* I1227 14:12:10.748989 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1227 14:12:10.748999 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1227 14:12:10.749528 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1227 14:12:10.749604 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1227 14:12:10.753806 1 available_controller.go:491] Starting AvailableConditionController
I1227 14:12:10.753826 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E1227 14:12:10.754147 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I1227 14:12:10.798487 1 controller.go:611] quota admission added evaluator for: namespaces
I1227 14:12:10.847217 1 apf_controller.go:317] Running API Priority and Fairness config worker
I1227 14:12:10.848393 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1227 14:12:10.848617 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1227 14:12:10.849600 1 cache.go:39] Caches are synced for autoregister controller
I1227 14:12:10.849601 1 shared_informer.go:247] Caches are synced for crd-autoregister
I1227 14:12:10.854789 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1227 14:12:10.859886 1 shared_informer.go:247] Caches are synced for node_authorizer
I1227 14:12:11.746472 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1227 14:12:11.746990 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1227 14:12:11.767542 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1227 14:12:11.781627 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1227 14:12:11.781682 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1227 14:12:12.216400 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1227 14:12:12.250298 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1227 14:12:12.327894 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1227 14:12:12.329191 1 controller.go:611] quota admission added evaluator for: endpoints
I1227 14:12:12.333203 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1227 14:12:12.816707 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I1227 14:12:13.790949 1 controller.go:611] quota admission added evaluator for: deployments.apps
I1227 14:12:13.821023 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I1227 14:12:14.072617 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I1227 14:12:26.276064 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I1227 14:12:26.437483 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I1227 14:12:28.270561 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
W1227 14:31:17.778386 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
I1227 14:31:23.238454 1 controller.go:611] quota admission added evaluator for: jobs.batch
I1227 14:31:49.346554 1 rest.go:387] Transition to non LoadBalancer type service or LoadBalancer type service with ExternalTrafficPolicy=Global
W1227 14:54:05.431415 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 15:08:30.784005 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
E1227 19:02:39.844401 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, Token has expired.]"
E1227 19:02:39.846162 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, Token has expired.]"
W1227 19:09:42.760892 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 19:33:21.869763 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 19:58:53.050863 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 20:11:18.007005 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 20:24:30.551102 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 20:33:38.545602 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 20:45:45.635212 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 20:53:37.705332 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1227 21:09:31.660413 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
E1228 09:06:57.170774 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, Token has expired.]"
E1228 09:06:57.171185 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, Token has expired.]"
W1228 09:18:54.240183 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 09:26:43.433679 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 09:39:08.850609 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 09:51:27.033471 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 10:00:56.996618 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 10:18:22.696484 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 10:25:22.818954 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 10:39:06.958498 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 10:53:19.552348 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W1228 10:59:49.462161 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted

*
* ==> kube-controller-manager [75ffbaad6d65] <==
* I1227 14:12:25.688611 1 shared_informer.go:247] Caches are synced for taint
I1227 14:12:25.688755 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W1227 14:12:25.688963 1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1227 14:12:25.688981 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I1227 14:12:25.689085 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I1227 14:12:25.688991 1 shared_informer.go:247] Caches are synced for node
I1227 14:12:25.689232 1 range_allocator.go:172] Starting range CIDR allocator
I1227 14:12:25.689253 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1227 14:12:25.689271 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I1227 14:12:25.689309 1 shared_informer.go:247] Caches are synced for cidrallocator
I1227 14:12:25.702868 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I1227 14:12:25.702991 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1227 14:12:25.720961 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1227 14:12:25.733988 1 shared_informer.go:247] Caches are synced for ephemeral
I1227 14:12:25.763902 1 shared_informer.go:247] Caches are synced for persistent volume
I1227 14:12:25.766808 1 shared_informer.go:247] Caches are synced for expand
I1227 14:12:25.782019 1 shared_informer.go:247] Caches are synced for attach detach
I1227 14:12:25.807113 1 shared_informer.go:247] Caches are synced for stateful set
I1227 14:12:25.826336 1 shared_informer.go:247] Caches are synced for PVC protection
I1227 14:12:25.852315 1 shared_informer.go:247] Caches are synced for resource quota
I1227 14:12:25.868233 1 shared_informer.go:247] Caches are synced for job
I1227 14:12:25.875636 1 shared_informer.go:247] Caches are synced for resource quota
I1227 14:12:25.918308 1 shared_informer.go:247] Caches are synced for TTL after finished
I1227 14:12:25.919530 1 shared_informer.go:247] Caches are synced for cronjob
I1227 14:12:26.278079 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 1"
I1227 14:12:26.349747 1 shared_informer.go:247] Caches are synced for garbage collector
I1227 14:12:26.374210 1 shared_informer.go:247] Caches are synced for garbage collector
I1227 14:12:26.374239 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1227 14:12:26.453624 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6nd2w"
I1227 14:12:26.690282 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-csp7c"
I1227 14:31:23.164860 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-54bfb9bb to 1"
I1227 14:31:23.171427 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller-54bfb9bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-54bfb9bb-7d5rq"
I1227 14:31:23.240747 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1227 14:31:23.247503 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1227 14:31:23.250457 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1227 14:31:23.250562 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1227 14:31:23.250785 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create--1-s4vkn"
I1227 14:31:23.250805 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch--1-bs4jv"
I1227 14:31:23.256891 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1227 14:31:23.260876 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1227 14:31:23.261260 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1227 14:31:23.265157 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1227 14:31:23.281782 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1227 14:31:23.288990 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1227 14:31:49.348920 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller" kind="Service" apiVersion="v1" type="Normal" reason="Type" message="LoadBalancer -> NodePort"
I1227 14:31:49.379618 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-5f66978484 to 1"
I1227 14:31:49.385738 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller-5f66978484" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-5f66978484-4jnjs"
I1227 14:31:59.625504 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1227 14:31:59.625775 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I1227 14:31:59.640774 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1227 14:32:00.667694 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1227 14:32:00.668307 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I1227 14:32:00.676223 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1227 14:37:09.454993 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set ingress-nginx-controller-54bfb9bb to 0"
I1227 14:37:09.470751 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller-54bfb9bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: ingress-nginx-controller-54bfb9bb-7d5rq"
W1227 19:02:39.846693 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E1227 19:02:39.946855 1 resource_quota_controller.go:413] failed to discover resources: Unauthorized
I1227 20:08:04.628213 1 cleaner.go:172] Cleaning CSR "csr-vp4kl" as it is more than 1h0m0s old and approved.
E1228 09:06:57.177885 1 resource_quota_controller.go:413] failed to discover resources: Unauthorized
W1228 09:06:57.186128 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized

*
* ==> kube-proxy [2b02efeb2bc3] <==
* I1227 14:12:28.098469 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I1227 14:12:28.101523 1 server_others.go:140] Detected node IP 192.168.49.2
W1227 14:12:28.101772 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I1227 14:12:28.257313 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1227 14:12:28.257410 1 server_others.go:212] Using iptables Proxier.
I1227 14:12:28.257431 1 server_others.go:219] creating dualStackProxier for iptables.
W1227 14:12:28.257460 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1227 14:12:28.262285 1 server.go:649] Version: v1.22.3
I1227 14:12:28.265762 1 config.go:315] Starting service config controller
I1227 14:12:28.265790 1 shared_informer.go:240] Waiting for caches to sync for service config
I1227 14:12:28.265845 1 config.go:224] Starting endpoint slice config controller
I1227 14:12:28.265853 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1227 14:12:28.366101 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1227 14:12:28.366101 1 shared_informer.go:247] Caches are synced for service config

*
* ==> kube-scheduler [1dff91e080d1] <==
* I1227 14:12:08.010657 1 serving.go:347] Generated self-signed cert in-memory
W1227 14:12:10.780082 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1227 14:12:10.780166 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1227 14:12:10.780499 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W1227 14:12:10.780565       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1227 14:12:10.793380       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1227 14:12:10.793419       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1227 14:12:10.793892       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I1227 14:12:10.793924       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E1227 14:12:10.795420       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1227 14:12:10.795426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1227 14:12:10.795781       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1227 14:12:10.796272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1227 14:12:10.796296       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1227 14:12:10.796287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1227 14:12:10.796381       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1227 14:12:10.796409       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1227 14:12:10.796424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1227 14:12:10.796518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1227 14:12:10.796600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1227 14:12:10.796677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1227 14:12:10.796733       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1227 14:12:10.796812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1227 14:12:10.796985       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1227 14:12:11.768249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1227 14:12:11.803691       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1227 14:12:11.847142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1227 14:12:11.967331       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1227 14:12:12.016633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1227 14:12:12.074732       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1227 14:12:12.088734       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
I1227 14:12:13.693859       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* 
* ==> kubelet <==
* 
-- Logs begin at Mon 2021-12-27 14:11:51 UTC, end at Tue 2021-12-28 11:04:19 UTC. --
Dec 27 14:31:24 minikube kubelet[2177]: I1227 14:31:24.185788    2177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7021a68087b177e5a3dcd133770193249e8929c31309553056e05eb1ef8068c5"
Dec 27 14:31:24 minikube kubelet[2177]: I1227 14:31:24.186185    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-bs4jv through plugin: invalid network status for"
Dec 27 14:31:24 minikube kubelet[2177]: I1227 14:31:24.187949    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-s4vkn through plugin: invalid network status for"
Dec 27 14:31:24 minikube kubelet[2177]: I1227 14:31:24.188564    2177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2444ba91c2b4c92afd1abb36b04653d7cd6e6c27cab63be4376327c8410c58cb"
Dec 27 14:31:24 minikube kubelet[2177]: E1227 14:31:24.878950    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:24 minikube kubelet[2177]: E1227 14:31:24.879147    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert podName:2c7ebaf6-6251-47ee-9ffc-22853e27145a nodeName:}" failed. No retries permitted until 2021-12-27 14:31:26.879099615 +0000 UTC m=+1153.183654432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert") pod "ingress-nginx-controller-54bfb9bb-7d5rq" (UID: "2c7ebaf6-6251-47ee-9ffc-22853e27145a") : secret "ingress-nginx-admission" not found
Dec 27 14:31:25 minikube kubelet[2177]: I1227 14:31:25.205476    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-bs4jv through plugin: invalid network status for"
Dec 27 14:31:25 minikube kubelet[2177]: I1227 14:31:25.212160    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-s4vkn through plugin: invalid network status for"
Dec 27 14:31:26 minikube kubelet[2177]: E1227 14:31:26.896398    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:26 minikube kubelet[2177]: E1227 14:31:26.896770    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert podName:2c7ebaf6-6251-47ee-9ffc-22853e27145a nodeName:}" failed. No retries permitted until 2021-12-27 14:31:30.896609569 +0000 UTC m=+1157.201164381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert") pod "ingress-nginx-controller-54bfb9bb-7d5rq" (UID: "2c7ebaf6-6251-47ee-9ffc-22853e27145a") : secret "ingress-nginx-admission" not found
Dec 27 14:31:30 minikube kubelet[2177]: E1227 14:31:30.930562    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:30 minikube kubelet[2177]: E1227 14:31:30.930821    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert podName:2c7ebaf6-6251-47ee-9ffc-22853e27145a nodeName:}" failed. No retries permitted until 2021-12-27 14:31:38.930764982 +0000 UTC m=+1165.235319768 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert") pod "ingress-nginx-controller-54bfb9bb-7d5rq" (UID: "2c7ebaf6-6251-47ee-9ffc-22853e27145a") : secret "ingress-nginx-admission" not found
Dec 27 14:31:38 minikube kubelet[2177]: E1227 14:31:38.999204    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:39 minikube kubelet[2177]: E1227 14:31:38.999464    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert podName:2c7ebaf6-6251-47ee-9ffc-22853e27145a nodeName:}" failed. No retries permitted until 2021-12-27 14:31:54.999406701 +0000 UTC m=+1181.303961482 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert") pod "ingress-nginx-controller-54bfb9bb-7d5rq" (UID: "2c7ebaf6-6251-47ee-9ffc-22853e27145a") : secret "ingress-nginx-admission" not found
Dec 27 14:31:49 minikube kubelet[2177]: I1227 14:31:49.390884    2177 topology_manager.go:200] "Topology Admit Handler"
Dec 27 14:31:49 minikube kubelet[2177]: I1227 14:31:49.492309    2177 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkpc7\" (UniqueName: \"kubernetes.io/projected/70163996-8629-4fa3-b553-2402efeeeafd-kube-api-access-dkpc7\") pod \"ingress-nginx-controller-5f66978484-4jnjs\" (UID: \"70163996-8629-4fa3-b553-2402efeeeafd\") "
Dec 27 14:31:49 minikube kubelet[2177]: I1227 14:31:49.492564    2177 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert\") pod \"ingress-nginx-controller-5f66978484-4jnjs\" (UID: \"70163996-8629-4fa3-b553-2402efeeeafd\") "
Dec 27 14:31:49 minikube kubelet[2177]: E1227 14:31:49.594477    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:49 minikube kubelet[2177]: E1227 14:31:49.594716    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert podName:70163996-8629-4fa3-b553-2402efeeeafd nodeName:}" failed. No retries permitted until 2021-12-27 14:31:50.094627628 +0000 UTC m=+1176.399182464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert") pod "ingress-nginx-controller-5f66978484-4jnjs" (UID: "70163996-8629-4fa3-b553-2402efeeeafd") : secret "ingress-nginx-admission" not found
Dec 27 14:31:50 minikube kubelet[2177]: E1227 14:31:50.098418    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:50 minikube kubelet[2177]: E1227 14:31:50.098606    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert podName:70163996-8629-4fa3-b553-2402efeeeafd nodeName:}" failed. No retries permitted until 2021-12-27 14:31:51.098554172 +0000 UTC m=+1177.403108963 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert") pod "ingress-nginx-controller-5f66978484-4jnjs" (UID: "70163996-8629-4fa3-b553-2402efeeeafd") : secret "ingress-nginx-admission" not found
Dec 27 14:31:51 minikube kubelet[2177]: E1227 14:31:51.107245    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:51 minikube kubelet[2177]: E1227 14:31:51.107534    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert podName:70163996-8629-4fa3-b553-2402efeeeafd nodeName:}" failed. No retries permitted until 2021-12-27 14:31:53.107450719 +0000 UTC m=+1179.412005555 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert") pod "ingress-nginx-controller-5f66978484-4jnjs" (UID: "70163996-8629-4fa3-b553-2402efeeeafd") : secret "ingress-nginx-admission" not found
Dec 27 14:31:53 minikube kubelet[2177]: E1227 14:31:53.124701    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:53 minikube kubelet[2177]: E1227 14:31:53.124968    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert podName:70163996-8629-4fa3-b553-2402efeeeafd nodeName:}" failed. No retries permitted until 2021-12-27 14:31:57.124916769 +0000 UTC m=+1183.429471546 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert") pod "ingress-nginx-controller-5f66978484-4jnjs" (UID: "70163996-8629-4fa3-b553-2402efeeeafd") : secret "ingress-nginx-admission" not found
Dec 27 14:31:55 minikube kubelet[2177]: E1227 14:31:55.042553    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:55 minikube kubelet[2177]: E1227 14:31:55.042833    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert podName:2c7ebaf6-6251-47ee-9ffc-22853e27145a nodeName:}" failed. No retries permitted until 2021-12-27 14:32:27.04275028 +0000 UTC m=+1213.347305084 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert") pod "ingress-nginx-controller-54bfb9bb-7d5rq" (UID: "2c7ebaf6-6251-47ee-9ffc-22853e27145a") : secret "ingress-nginx-admission" not found
Dec 27 14:31:57 minikube kubelet[2177]: E1227 14:31:57.162402    2177 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Dec 27 14:31:57 minikube kubelet[2177]: E1227 14:31:57.162666    2177 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert podName:70163996-8629-4fa3-b553-2402efeeeafd nodeName:}" failed. No retries permitted until 2021-12-27 14:32:05.162573874 +0000 UTC m=+1191.467128679 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/70163996-8629-4fa3-b553-2402efeeeafd-webhook-cert") pod "ingress-nginx-controller-5f66978484-4jnjs" (UID: "70163996-8629-4fa3-b553-2402efeeeafd") : secret "ingress-nginx-admission" not found
Dec 27 14:31:59 minikube kubelet[2177]: I1227 14:31:59.595289    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-s4vkn through plugin: invalid network status for"
Dec 27 14:31:59 minikube kubelet[2177]: I1227 14:31:59.614323    2177 scope.go:110] "RemoveContainer" containerID="a837b5da2a9fce0842f9a5f13950d9ce7a5b4f04380cdf37184fedb4f327ac3f"
Dec 27 14:32:00 minikube kubelet[2177]: I1227 14:32:00.637643    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-bs4jv through plugin: invalid network status for"
Dec 27 14:32:00 minikube kubelet[2177]: I1227 14:32:00.647173    2177 scope.go:110] "RemoveContainer" containerID="017e9e185481be7264966482265d2a12ac95ae4b30bc500fd00e0033a294efa0"
Dec 27 14:32:00 minikube kubelet[2177]: I1227 14:32:00.660102    2177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2444ba91c2b4c92afd1abb36b04653d7cd6e6c27cab63be4376327c8410c58cb"
Dec 27 14:32:01 minikube kubelet[2177]: I1227 14:32:01.671224    2177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7021a68087b177e5a3dcd133770193249e8929c31309553056e05eb1ef8068c5"
Dec 27 14:32:01 minikube kubelet[2177]: I1227 14:32:01.801769    2177 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlscn\" (UniqueName: \"kubernetes.io/projected/0671cf29-876d-413e-84f3-cc44dca3e975-kube-api-access-nlscn\") pod \"0671cf29-876d-413e-84f3-cc44dca3e975\" (UID: \"0671cf29-876d-413e-84f3-cc44dca3e975\") "
Dec 27 14:32:01 minikube kubelet[2177]: I1227 14:32:01.808450    2177 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0671cf29-876d-413e-84f3-cc44dca3e975-kube-api-access-nlscn" (OuterVolumeSpecName: "kube-api-access-nlscn") pod "0671cf29-876d-413e-84f3-cc44dca3e975" (UID: "0671cf29-876d-413e-84f3-cc44dca3e975"). InnerVolumeSpecName "kube-api-access-nlscn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 14:32:01 minikube kubelet[2177]: I1227 14:32:01.902488    2177 reconciler.go:319] "Volume detached for volume \"kube-api-access-nlscn\" (UniqueName: \"kubernetes.io/projected/0671cf29-876d-413e-84f3-cc44dca3e975-kube-api-access-nlscn\") on node \"minikube\" DevicePath \"\""
Dec 27 14:32:02 minikube kubelet[2177]: I1227 14:32:02.811391    2177 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clndw\" (UniqueName: \"kubernetes.io/projected/c9161ba8-ae31-4eb7-86f9-25cad85ae10a-kube-api-access-clndw\") pod \"c9161ba8-ae31-4eb7-86f9-25cad85ae10a\" (UID: \"c9161ba8-ae31-4eb7-86f9-25cad85ae10a\") "
Dec 27 14:32:02 minikube kubelet[2177]: I1227 14:32:02.817980    2177 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9161ba8-ae31-4eb7-86f9-25cad85ae10a-kube-api-access-clndw" (OuterVolumeSpecName: "kube-api-access-clndw") pod "c9161ba8-ae31-4eb7-86f9-25cad85ae10a" (UID: "c9161ba8-ae31-4eb7-86f9-25cad85ae10a"). InnerVolumeSpecName "kube-api-access-clndw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 14:32:02 minikube kubelet[2177]: I1227 14:32:02.912817    2177 reconciler.go:319] "Volume detached for volume \"kube-api-access-clndw\" (UniqueName: \"kubernetes.io/projected/c9161ba8-ae31-4eb7-86f9-25cad85ae10a-kube-api-access-clndw\") on node \"minikube\" DevicePath \"\""
Dec 27 14:32:05 minikube kubelet[2177]: I1227 14:32:05.862844    2177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0856cd59daf7b19f1a23f9bea63386764182d581879e4d9afa3a7d6806253a05"
Dec 27 14:32:05 minikube kubelet[2177]: I1227 14:32:05.863008    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-5f66978484-4jnjs through plugin: invalid network status for"
Dec 27 14:32:06 minikube kubelet[2177]: I1227 14:32:06.880393    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-5f66978484-4jnjs through plugin: invalid network status for"
Dec 27 14:32:27 minikube kubelet[2177]: I1227 14:32:27.949726    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-54bfb9bb-7d5rq through plugin: invalid network status for"
Dec 27 14:32:28 minikube kubelet[2177]: I1227 14:32:28.163709    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-54bfb9bb-7d5rq through plugin: invalid network status for"
Dec 27 14:36:55 minikube kubelet[2177]: I1227 14:36:55.660400    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-5f66978484-4jnjs through plugin: invalid network status for"
Dec 27 14:38:25 minikube kubelet[2177]: I1227 14:38:25.110222    2177 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-54bfb9bb-7d5rq through plugin: invalid network status for"
Dec 27 14:38:25 minikube kubelet[2177]: E1227 14:38:25.171214    2177 kuberuntime_container.go:589] "PreStop hook failed" err="command '/wait-shutdown' exited with 137: " pod="ingress-nginx/ingress-nginx-controller-54bfb9bb-7d5rq" podUID=2c7ebaf6-6251-47ee-9ffc-22853e27145a containerName="controller" containerID="docker://d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee"
Dec 27 14:38:25 minikube kubelet[2177]: I1227 14:38:25.669616    2177 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8jzs\" (UniqueName: \"kubernetes.io/projected/2c7ebaf6-6251-47ee-9ffc-22853e27145a-kube-api-access-q8jzs\") pod \"2c7ebaf6-6251-47ee-9ffc-22853e27145a\" (UID: \"2c7ebaf6-6251-47ee-9ffc-22853e27145a\") "
Dec 27 14:38:25 minikube kubelet[2177]: I1227 14:38:25.669714    2177 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert\") pod \"2c7ebaf6-6251-47ee-9ffc-22853e27145a\" (UID: \"2c7ebaf6-6251-47ee-9ffc-22853e27145a\") "
Dec 27 14:38:25 minikube kubelet[2177]: I1227 14:38:25.672715    2177 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2c7ebaf6-6251-47ee-9ffc-22853e27145a" (UID: "2c7ebaf6-6251-47ee-9ffc-22853e27145a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 14:38:25 minikube kubelet[2177]: I1227 14:38:25.672723    2177 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7ebaf6-6251-47ee-9ffc-22853e27145a-kube-api-access-q8jzs" (OuterVolumeSpecName: "kube-api-access-q8jzs") pod "2c7ebaf6-6251-47ee-9ffc-22853e27145a" (UID: "2c7ebaf6-6251-47ee-9ffc-22853e27145a"). InnerVolumeSpecName "kube-api-access-q8jzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 14:38:25 minikube kubelet[2177]: I1227 14:38:25.770084    2177 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c7ebaf6-6251-47ee-9ffc-22853e27145a-webhook-cert\") on node \"minikube\" DevicePath \"\""
Dec 27 14:38:25 minikube kubelet[2177]: I1227 14:38:25.770204    2177 reconciler.go:319] "Volume detached for volume \"kube-api-access-q8jzs\" (UniqueName: \"kubernetes.io/projected/2c7ebaf6-6251-47ee-9ffc-22853e27145a-kube-api-access-q8jzs\") on node \"minikube\" DevicePath \"\""
Dec 27 14:38:26 minikube kubelet[2177]: I1227 14:38:26.146778    2177 scope.go:110] "RemoveContainer" containerID="d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee"
Dec 27 14:38:26 minikube kubelet[2177]: I1227 14:38:26.162271    2177 scope.go:110] "RemoveContainer" containerID="d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee"
Dec 27 14:38:26 minikube kubelet[2177]: E1227 14:38:26.162991    2177 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee" containerID="d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee"
Dec 27 14:38:26 minikube kubelet[2177]: I1227 14:38:26.163049    2177 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee} err="failed to get container status \"d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee\": rpc error: code = Unknown desc = Error: No such container: d729941638cefd626d6f3d02d9dc72b28f486ce0784b5d95d6e096597aa976ee"
Dec 27 14:38:28 minikube kubelet[2177]: I1227 14:38:28.165459    2177 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2c7ebaf6-6251-47ee-9ffc-22853e27145a path="/var/lib/kubelet/pods/2c7ebaf6-6251-47ee-9ffc-22853e27145a/volumes"
* 
* ==> storage-provisioner [7c7b7ab07d0e] <==
* 
I1227 14:12:28.713979       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1227 14:12:28.726350       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1227 14:12:28.726405       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1227 14:12:28.740951       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1227 14:12:28.741063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1fc245cb-477f-439f-8f37-6d857dba1c83", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_843a51ed-f855-40a1-83e3-02437cf417ce became leader
I1227 14:12:28.741105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_843a51ed-f855-40a1-83e3-02437cf417ce!
I1227 14:12:28.841247       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_843a51ed-f855-40a1-83e3-02437cf417ce!