*
* ==> Audit <==
* |---------|-----------------------------------------------|----------|-------|---------|---------------------|---------------------|
| Command |                     Args                      | Profile  | User  | Version |     Start Time      |      End Time       |
|---------|-----------------------------------------------|----------|-------|---------|---------------------|---------------------|
|  start  |                                               | minikube | steve | v1.29.0 | 10 Mar 23 09:55 GMT | 10 Mar 23 09:56 GMT |
|  mount  | /home/steve/dev/k/localstack/ready.d:/ready.d | minikube | steve | v1.29.0 | 10 Mar 23 09:57 GMT |                     |
|  mount  | /home/steve/dev/k/localstack/ready.d:/data    | minikube | steve | v1.29.0 | 10 Mar 23 09:57 GMT |                     |
|---------|-----------------------------------------------|----------|-------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/10 09:55:25
Running on machine: fedora
Binary: Built with gc go1.19.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0310 09:55:25.034051 3269 out.go:296] Setting OutFile to fd 1 ...
I0310 09:55:25.034210 3269 out.go:348] isatty.IsTerminal(1) = true
I0310 09:55:25.034213 3269 out.go:309] Setting ErrFile to fd 2...
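The log header above documents the glog/klog entry format used for the rest of this dump: `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. As an illustration (not part of the log, and not minikube's own code), the fields of each entry can be pulled apart with a short Python regex; the sample line is taken verbatim from the log:

```python
import re

# Sketch of a parser for the format documented in the log header:
#   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"(?P<level>[IWEF])"                    # severity: Info/Warning/Error/Fatal
    r"(?P<month>\d{2})(?P<day>\d{2})\s+"    # mmdd
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"  # hh:mm:ss.uuuuuu
    r"(?P<threadid>\d+)\s+"                 # thread id
    r"(?P<file>\S+):(?P<line>\d+)\]\s"      # source file:line]
    r"(?P<msg>.*)"                          # message text
)

def parse_klog(line: str):
    """Return an entry's fields as a dict, or None for non-entry lines."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_klog("I0310 09:55:25.034051 3269 out.go:296] Setting OutFile to fd 1 ...")
```

Header lines such as `Running on machine: fedora` simply return `None`, so the same loop can skim a whole dump.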
I0310 09:55:25.034216 3269 out.go:348] isatty.IsTerminal(2) = true
I0310 09:55:25.034294 3269 root.go:334] Updating PATH: /home/steve/.minikube/bin
I0310 09:55:25.035225 3269 out.go:303] Setting JSON to false
I0310 09:55:25.043254 3269 start.go:125] hostinfo: {"hostname":"fedora","uptime":143,"bootTime":1678441983,"procs":315,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"37","kernelVersion":"6.1.14-200.fc37.x86_64","kernelArch":"x86_64","virtualizationSystem":"hyperv","virtualizationRole":"guest","hostId":"bd3fb16d-4d86-48a9-9345-991f214067bd"}
I0310 09:55:25.043354 3269 start.go:135] virtualization: hyperv guest
I0310 09:55:25.050296 3269 out.go:177] 😄 minikube v1.29.0 on Fedora 37 (hyperv/amd64)
W0310 09:55:25.055439 3269 preload.go:295] Failed to list preload files: open /home/steve/.minikube/cache/preloaded-tarball: no such file or directory
I0310 09:55:25.055484 3269 driver.go:365] Setting default libvirt URI to qemu:///system
I0310 09:55:25.055499 3269 global.go:111] Querying for installed drivers using PATH=/home/steve/.minikube/bin:/opt/pact/bin:/opt/clearswift/platform/dev/bin/:/usr/local/go/bin:/home/steve/go/bin:/home/steve/bin:/home/steve/.local/bin:/usr/local/lib/nodejs/node-v18.6.0-linux-x64/bin:/home/steve/.cargo/bin:/sbin:/bin:/usr/bin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/home/steve/.local/share/JetBrains/Toolbox/scripts:/home/steve/.local/share/JetBrains/Toolbox/scripts:/home/steve/.local/share/JetBrains/Toolbox/scripts:/home/steve/.local/share/JetBrains/Toolbox/scripts
I0310 09:55:25.055535 3269 notify.go:220] Checking for updates...
I0310 09:55:25.055559 3269 global.go:122] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I0310 09:55:25.055634 3269 global.go:122] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I0310 09:55:25.292068 3269 docker.go:141] docker version: linux-23.0.1:Docker Engine - Community
I0310 09:55:25.292158 3269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0310 09:55:25.398063 3269 info.go:266] docker info: {ID:CCCC:NQKR:YDEH:R2KG:W6NP:D4PZ:RZX4:MWVR:PFMI:RZZX:MO5M:SZAN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:801 Driver:btrfs DriverStatus:[[Btrfs ]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:false NGoroutines:34 SystemTime:2023-03-10 09:55:25.38763822 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.1.14-200.fc37.x86_64 OperatingSystem:Fedora Linux 37 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://devnexus.engineering.clearswift.org:8091/] Secure:true Official:true}} Mirrors:[https://devnexus.engineering.clearswift.org:8091/]} NCPU:4 MemTotal:33658281984 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:fedora Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:}}
I0310 09:55:25.398166 3269 docker.go:282] overlay module found
I0310 09:55:25.398175 3269 global.go:122] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0310 09:55:31.401714 3269 global.go:122] kvm2 default: true priority: 8, state: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:libvirt group membership check failed: user is not a member of the appropriate libvirt group Reason:PR_KVM_USER_PERMISSION Fix:Check that libvirtd is properly installed and that you are a member of the appropriate libvirt group (remember to relogin for group changes to take effect!) Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
I0310 09:55:31.434259 3269 global.go:122] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0310 09:55:31.698145 3269 podman.go:123] podman version: 4.4.2
I0310 09:55:31.698163 3269 global.go:122] podman default: true priority: 7, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0310 09:55:31.700014 3269 global.go:122] qemu2 default: true priority: 7, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0310 09:55:31.700025 3269 global.go:122] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0310 09:55:31.700042 3269 driver.go:300] not recommending "none" due to default: false
I0310 09:55:31.700046 3269 driver.go:300] not recommending "ssh" due to default: false
I0310 09:55:31.700048 3269 driver.go:295] not recommending "kvm2" due to health: libvirt group membership check failed: user is not a member of the appropriate libvirt group
I0310 09:55:31.700061 3269 driver.go:335] Picked: docker
I0310 09:55:31.700074 3269 driver.go:336] Alternatives: [podman qemu2 none ssh]
I0310 09:55:31.700076 3269 driver.go:337] Rejects: [virtualbox vmware kvm2]
I0310 09:55:31.714618 3269 out.go:177] ✨ Automatically selected the docker driver. Other choices: podman, qemu2, none, ssh
I0310 09:55:31.719741 3269 start.go:296] selected driver: docker
I0310 09:55:31.719754 3269 start.go:857] validating driver "docker" against <nil>
I0310 09:55:31.719762 3269 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0310 09:55:31.719906 3269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0310 09:55:31.826780 3269 info.go:266] docker info: {ID:CCCC:NQKR:YDEH:R2KG:W6NP:D4PZ:RZX4:MWVR:PFMI:RZZX:MO5M:SZAN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:801 Driver:btrfs DriverStatus:[[Btrfs ]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:false NGoroutines:34 SystemTime:2023-03-10 09:55:31.817644471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.1.14-200.fc37.x86_64 OperatingSystem:Fedora Linux 37 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://devnexus.engineering.clearswift.org:8091/] Secure:true Official:true}} Mirrors:[https://devnexus.engineering.clearswift.org:8091/]} NCPU:4 MemTotal:33658281984 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:fedora Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:}}
W0310 09:55:31.826910 3269 out.go:239] ❗ docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
I0310 09:55:31.826963 3269 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0310 09:55:31.827261 3269 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32099MB, container=32099MB
I0310 09:55:31.827380 3269 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
I0310 09:55:31.837662 3269 out.go:177] 📌 Using Docker driver with root privileges
I0310 09:55:31.842281 3269 cni.go:84] Creating CNI manager for ""
I0310 09:55:31.842290 3269 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0310 09:55:31.842300 3269 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0310 09:55:31.842311 3269 start_flags.go:319] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:localStorageCapacityIsolation Value:false}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/steve:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0310 09:55:31.851188 3269 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0310 09:55:31.855852 3269 cache.go:120] Beginning downloading kic base image for docker with docker
I0310 09:55:31.870515 3269 out.go:177] 🚜 Pulling base image ...
I0310 09:55:31.876562 3269 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0310 09:55:31.876591 3269 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0310 09:55:31.876902 3269 profile.go:148] Saving config to /home/steve/.minikube/profiles/minikube/config.json ...
I0310 09:55:31.876921 3269 lock.go:35] WriteFile acquiring /home/steve/.minikube/profiles/minikube/config.json: {Name:mkd278d610601856883c65aa1c5c41ca8467e3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:31.876901 3269 cache.go:107] acquiring lock: {Name:mkf0716cef818ec27961473c6985abc0a9455f46 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.876916 3269 cache.go:107] acquiring lock: {Name:mkd3c14cf7ecf5c906b3e2c44568a3472093fdb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.876951 3269 cache.go:107] acquiring lock: {Name:mk9fee08c9f85b38ca263be3e763a77871aed1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.876954 3269 cache.go:107] acquiring lock: {Name:mk0def1ab29871b5fd29e7bf631f38b0c45db652 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.877219 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 exists
I0310 09:55:31.877222 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 exists
I0310 09:55:31.877222 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 exists
I0310 09:55:31.877225 3269 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.26.1" -> "/home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1" took 274.897µs
I0310 09:55:31.877231 3269 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.26.1 -> /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 succeeded
I0310 09:55:31.877229 3269 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.26.1" -> "/home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1" took 275.697µs
I0310 09:55:31.877237 3269 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.26.1 -> /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 succeeded
I0310 09:55:31.877229 3269 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.26.1" -> "/home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1" took 337.196µs
I0310 09:55:31.877243 3269 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.26.1 -> /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 succeeded
I0310 09:55:31.877244 3269 cache.go:107] acquiring lock: {Name:mkb5bdade6c5e69a7d25e363a5bd9a1b56fbe430 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.877251 3269 cache.go:107] acquiring lock: {Name:mk4bc0ca43faa12fe23d676afac47ead7ce7ee42 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.877282 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
I0310 09:55:31.877285 3269 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/steve/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 43.4µs
I0310 09:55:31.877287 3269 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/steve/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
I0310 09:55:31.877291 3269 cache.go:107] acquiring lock: {Name:mk3c99aeb0f739fc873df43f41507d16fdfe6241 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.877296 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 exists
I0310 09:55:31.877300 3269 cache.go:96] cache image "registry.k8s.io/etcd:3.5.6-0" -> "/home/steve/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0" took 52.1µs
I0310 09:55:31.877304 3269 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.6-0 -> /home/steve/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 succeeded
I0310 09:55:31.877315 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 exists
I0310 09:55:31.877310 3269 cache.go:107] acquiring lock: {Name:mkfc4e4f2fe979279e50f3494171a9408354aed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:31.877319 3269 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.26.1" -> "/home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1" took 27.999µs
I0310 09:55:31.877323 3269 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.26.1 -> /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 succeeded
I0310 09:55:31.877354 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
I0310 09:55:31.877355 3269 cache.go:115] /home/steve/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0310 09:55:31.877358 3269 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/steve/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 49.1µs
I0310 09:55:31.877359 3269 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/steve/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 448.895µs
I0310 09:55:31.877362 3269 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/steve/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
I0310 09:55:31.877364 3269 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/steve/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0310 09:55:31.877368 3269 cache.go:87] Successfully saved all images to host disk.
I0310 09:55:32.063209 3269 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0310 09:55:32.063219 3269 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
I0310 09:55:32.063227 3269 cache.go:193] Successfully downloaded all kic artifacts
I0310 09:55:32.063247 3269 start.go:364] acquiring machines lock for minikube: {Name:mkaff75e6217a6a54208bb461087101c7db2cd27 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0310 09:55:32.063356 3269 start.go:368] acquired machines lock for "minikube" in 98.599µs
I0310 09:55:32.063382 3269 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:localStorageCapacityIsolation Value:false}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/steve:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0310 09:55:32.063434 3269 start.go:125] createHost starting for "" (driver="docker")
I0310 09:55:32.080631 3269 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=8000MB) ...
I0310 09:55:32.080867 3269 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0310 09:55:32.080882 3269 client.go:168] LocalClient.Create starting
I0310 09:55:32.081741 3269 main.go:141] libmachine: Reading certificate data from /home/steve/.minikube/certs/ca.pem
I0310 09:55:32.081798 3269 main.go:141] libmachine: Decoding PEM data...
I0310 09:55:32.081808 3269 main.go:141] libmachine: Parsing certificate...
I0310 09:55:32.081865 3269 main.go:141] libmachine: Reading certificate data from /home/steve/.minikube/certs/cert.pem
I0310 09:55:32.081881 3269 main.go:141] libmachine: Decoding PEM data...
I0310 09:55:32.081889 3269 main.go:141] libmachine: Parsing certificate...
I0310 09:55:32.082211 3269 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0310 09:55:32.149344 3269 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0310 09:55:32.149459 3269 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs...
I0310 09:55:32.149470 3269 cli_runner.go:164] Run: docker network inspect minikube
W0310 09:55:32.197855 3269 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0310 09:55:32.197886 3269 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error response from daemon: network minikube not found
I0310 09:55:32.197892 3269 network_create.go:286] output of [docker network inspect minikube]:
-- stdout --
[]

-- /stdout --
** stderr **
Error response from daemon: network minikube not found

** /stderr **
I0310 09:55:32.197953 3269 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0310 09:55:32.259620 3269 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001031bd0}
I0310 09:55:32.259658 3269 network_create.go:123] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
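The subnet bookkeeping in the entry above (gateway 192.168.49.1, client range 192.168.49.2–192.168.49.254, broadcast 192.168.49.255 for the free subnet 192.168.49.0/24) follows directly from the CIDR. A minimal sketch with Python's standard ipaddress module, purely as an illustration of the arithmetic rather than minikube's own code:

```python
import ipaddress

# Derive the values the log reports for the 192.168.49.0/24 network.
net = ipaddress.ip_network("192.168.49.0/24")

gateway    = net.network_address + 1    # first usable host, used as the gateway
client_min = net.network_address + 2    # first client address (the node's static IP)
client_max = net.broadcast_address - 1  # last usable host
broadcast  = net.broadcast_address
```

The same arithmetic explains the `calculated static IP "192.168.49.2"` line that follows: it is simply the first client address after the gateway.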
I0310 09:55:32.259729 3269 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0310 09:55:32.546802 3269 network_create.go:107] docker network minikube 192.168.49.0/24 created
I0310 09:55:32.546821 3269 kic.go:117] calculated static IP "192.168.49.2" for the "minikube" container
I0310 09:55:32.547207 3269 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0310 09:55:32.604334 3269 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0310 09:55:32.674934 3269 oci.go:103] Successfully created a docker volume minikube
I0310 09:55:32.675060 3269 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
I0310 09:55:33.857539 3269 cli_runner.go:217] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib: (1.1824367s)
I0310 09:55:33.857568 3269 oci.go:107] Successfully prepared a docker volume minikube
I0310 09:55:33.857605 3269 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
W0310 09:55:33.857966 3269 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0310 09:55:33.858008 3269 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0310 09:55:33.858490 3269 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0310 09:55:33.969452 3269 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
I0310 09:55:34.979634 3269 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15: (1.010135536s)
I0310 09:55:34.979819 3269 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0310 09:55:35.059537 3269 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0310 09:55:35.115910 3269 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0310 09:55:35.241753 3269 oci.go:144] the created container "minikube" has a running status.
I0310 09:55:35.241767 3269 kic.go:221] Creating ssh key for kic: /home/steve/.minikube/machines/minikube/id_rsa...
I0310 09:55:35.481690 3269 kic_runner.go:191] docker (temp): /home/steve/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0310 09:55:35.660161 3269 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0310 09:55:35.728093 3269 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0310 09:55:35.728113 3269 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0310 09:55:35.823309 3269 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0310 09:55:35.898399 3269 machine.go:88] provisioning docker machine ...
I0310 09:55:35.898443 3269 ubuntu.go:169] provisioning hostname "minikube"
I0310 09:55:35.898521 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:35.955597 3269 main.go:141] libmachine: Using SSH client type: native
I0310 09:55:35.955733 3269 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x7f1980] 0x7f4b00 [] 0s} 127.0.0.1 32772 }
I0310 09:55:35.955739 3269 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0310 09:55:36.104405 3269 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0310 09:55:36.104473 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:36.158140 3269 main.go:141] libmachine: Using SSH client type: native
I0310 09:55:36.158380 3269 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x7f1980] 0x7f4b00 [] 0s} 127.0.0.1 32772 }
I0310 09:55:36.158441 3269 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0310 09:55:36.283860 3269 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0310 09:55:36.283878 3269 ubuntu.go:175] set auth options {CertDir:/home/steve/.minikube CaCertPath:/home/steve/.minikube/certs/ca.pem CaPrivateKeyPath:/home/steve/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/steve/.minikube/machines/server.pem ServerKeyPath:/home/steve/.minikube/machines/server-key.pem ClientKeyPath:/home/steve/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/steve/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/steve/.minikube}
I0310 09:55:36.283900 3269 ubuntu.go:177] setting up certificates
I0310 09:55:36.283907 3269 provision.go:83] configureAuth start
I0310 09:55:36.283981 3269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0310 09:55:36.349642 3269 provision.go:138] copyHostCerts
I0310 09:55:36.349710 3269 exec_runner.go:144] found /home/steve/.minikube/ca.pem, removing ...
I0310 09:55:36.349714 3269 exec_runner.go:207] rm: /home/steve/.minikube/ca.pem
I0310 09:55:36.349811 3269 exec_runner.go:151] cp: /home/steve/.minikube/certs/ca.pem --> /home/steve/.minikube/ca.pem (1074 bytes)
I0310 09:55:36.349885 3269 exec_runner.go:144] found /home/steve/.minikube/cert.pem, removing ...
I0310 09:55:36.349888 3269 exec_runner.go:207] rm: /home/steve/.minikube/cert.pem
I0310 09:55:36.349904 3269 exec_runner.go:151] cp: /home/steve/.minikube/certs/cert.pem --> /home/steve/.minikube/cert.pem (1119 bytes)
I0310 09:55:36.349945 3269 exec_runner.go:144] found /home/steve/.minikube/key.pem, removing ...
I0310 09:55:36.349947 3269 exec_runner.go:207] rm: /home/steve/.minikube/key.pem
I0310 09:55:36.349961 3269 exec_runner.go:151] cp: /home/steve/.minikube/certs/key.pem --> /home/steve/.minikube/key.pem (1679 bytes)
I0310 09:55:36.350007 3269 provision.go:112] generating server cert: /home/steve/.minikube/machines/server.pem ca-key=/home/steve/.minikube/certs/ca.pem private-key=/home/steve/.minikube/certs/ca-key.pem org=steve.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0310 09:55:36.440388 3269 provision.go:172] copyRemoteCerts
I0310 09:55:36.440571 3269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0310 09:55:36.440620 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:36.494321 3269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 09:55:36.594203 3269 ssh_runner.go:362] scp /home/steve/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0310 09:55:36.661052 3269 ssh_runner.go:362] scp /home/steve/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0310 09:55:36.698917 3269 ssh_runner.go:362] scp /home/steve/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0310 09:55:36.744524 3269 provision.go:86] duration metric: configureAuth took 460.600091ms
I0310 09:55:36.744547 3269 ubuntu.go:193] setting minikube options for container-runtime
I0310 09:55:36.744835 3269 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0310 09:55:36.744929 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:36.827119 3269 main.go:141] libmachine: Using SSH client type: native
I0310 09:55:36.827260 3269 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x7f1980] 0x7f4b00 [] 0s} 127.0.0.1 32772 }
I0310 09:55:36.827268 3269 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0310 09:55:36.979877 3269 main.go:141] libmachine: SSH cmd err, output: : btrfs
I0310 09:55:36.979904 3269 ubuntu.go:71] root file system type: btrfs
I0310 09:55:36.980381 3269 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0310 09:55:36.980573 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:37.034616 3269 main.go:141] libmachine: Using SSH client type: native
I0310 09:55:37.034878 3269 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x7f1980] 0x7f4b00 [] 0s} 127.0.0.1 32772 }
I0310 09:55:37.035109 3269 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes.
If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0310 09:55:37.176920 3269 main.go:141] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0310 09:55:37.177046 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:37.229582 3269 main.go:141] libmachine: Using SSH client type: native
I0310 09:55:37.229690 3269 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x7f1980] 0x7f4b00 [] 0s} 127.0.0.1 32772 }
I0310 09:55:37.229700 3269 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0310 09:55:38.113523 3269 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2023-01-19 17:34:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-03-10 09:55:37.174250690 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0310 09:55:38.113538 3269 machine.go:91] provisioned docker machine in 2.215129395s
I0310 09:55:38.113543 3269 client.go:171] LocalClient.Create took 6.032658615s
I0310 09:55:38.113550 3269 start.go:167] duration metric: libmachine.API.Create for "minikube" took 6.032700415s
I0310 09:55:38.113553 3269 start.go:300] post-start starting for "minikube" (driver="docker")
I0310 09:55:38.113563 3269 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0310 09:55:38.113693 3269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0310 09:55:38.113727 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:38.183761 3269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 09:55:38.272149 3269 ssh_runner.go:195] Run: cat /etc/os-release
I0310 09:55:38.275169 3269 main.go:141] libmachine: Couldn't set key
PRIVACY_POLICY_URL, no corresponding struct field found
I0310 09:55:38.275177 3269 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0310 09:55:38.275184 3269 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0310 09:55:38.275196 3269 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0310 09:55:38.275202 3269 filesync.go:126] Scanning /home/steve/.minikube/addons for local assets ...
I0310 09:55:38.275237 3269 filesync.go:126] Scanning /home/steve/.minikube/files for local assets ...
I0310 09:55:38.275246 3269 start.go:303] post-start completed in 161.683677ms
I0310 09:55:38.275518 3269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0310 09:55:38.337468 3269 profile.go:148] Saving config to /home/steve/.minikube/profiles/minikube/config.json ...
I0310 09:55:38.338161 3269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0310 09:55:38.338233 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:38.399596 3269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 09:55:38.489155 3269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0310 09:55:38.492797 3269 start.go:128] duration metric: createHost completed in 6.429353387s
I0310 09:55:38.492806 3269 start.go:83] releasing machines lock for "minikube", held for 6.429445086s
I0310 09:55:38.492864 3269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0310 09:55:38.557680 3269 ssh_runner.go:195] Run: cat /version.json
I0310 09:55:38.557719 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:38.557825 3269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0310 09:55:38.557866 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:55:38.619267 3269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 09:55:38.623531 3269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 09:55:38.704480 3269 ssh_runner.go:195] Run: systemctl --version
I0310 09:55:38.994018 3269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0310 09:55:39.000380 3269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0310 09:55:39.028005 3269 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0310 09:55:39.028201 3269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0310 09:55:39.036847 3269 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0310 09:55:39.052701 3269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0310 09:55:39.069673 3269 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0310 09:55:39.069686 3269 start.go:483] detecting cgroup driver to use...
I0310 09:55:39.069780 3269 detect.go:199] detected "systemd" cgroup driver on host os
I0310 09:55:39.069869 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0310 09:55:39.083169 3269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0310 09:55:39.090592 3269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0310 09:55:39.097751 3269 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
I0310 09:55:39.097811 3269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0310 09:55:39.105913 3269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0310 09:55:39.113667 3269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0310 09:55:39.120624 3269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0310 09:55:39.127634 3269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0310 09:55:39.134270 3269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0310 09:55:39.142107 3269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0310 09:55:39.152231 3269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0310 09:55:39.160286 3269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0310 09:55:39.224293 3269 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0310 09:55:39.346813 3269 start.go:483] detecting cgroup driver to use...
I0310 09:55:39.346839 3269 detect.go:199] detected "systemd" cgroup driver on host os
I0310 09:55:39.346884 3269 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0310 09:55:39.362762 3269 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0310 09:55:39.362814 3269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0310 09:55:39.373337 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0310 09:55:39.399301 3269 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0310 09:55:39.499006 3269 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0310 09:55:39.578088 3269 docker.go:529] configuring docker to use "systemd" as cgroup driver...
I0310 09:55:39.578111 3269 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
I0310 09:55:39.595697 3269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0310 09:55:39.685989 3269 ssh_runner.go:195] Run: sudo systemctl restart docker
I0310 09:55:40.856326 3269 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170298829s)
I0310 09:55:40.856471 3269 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0310 09:55:40.950749 3269 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0310 09:55:41.041577 3269 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0310 09:55:41.124034 3269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0310 09:55:41.230242 3269 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0310 09:55:41.259714 3269 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0310 09:55:41.259799 3269 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0310 09:55:41.263103 3269 start.go:551] Will wait 60s for crictl version
I0310 09:55:41.263164 3269 ssh_runner.go:195] Run: which crictl
I0310 09:55:41.266857 3269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0310 09:55:41.509386 3269 start.go:567] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0310 09:55:41.509597 3269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0310 09:55:41.569525 3269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0310 09:55:41.625451 3269 out.go:204] 🐳 Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0310 09:55:41.625705 3269 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0310 09:55:41.693898 3269 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0310 09:55:41.697676 3269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0310 09:55:41.714893 3269 out.go:177] ▪ kubelet.localStorageCapacityIsolation=false
I0310 09:55:41.720367 3269 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0310 09:55:41.720463 3269 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0310 09:55:41.738354 3269 docker.go:630] Got preloaded images:
I0310 09:55:41.738363 3269 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
I0310 09:55:41.738368 3269 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.26.1 registry.k8s.io/kube-controller-manager:v1.26.1 registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.6-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
I0310 09:55:41.739606 3269 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.6-0
I0310 09:55:41.739607 3269 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
I0310 09:55:41.739642 3269 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.26.1
I0310 09:55:41.739722 3269 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0310 09:55:41.739739 3269 image.go:134] retrieving image: registry.k8s.io/pause:3.9
I0310 09:55:41.739760 3269 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.26.1
I0310 09:55:41.740004 3269 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.26.1
I0310 09:55:41.740101 3269 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.26.1
I0310 09:55:41.740234 3269 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.26.1: Error: No such image: registry.k8s.io/kube-controller-manager:v1.26.1
I0310 09:55:41.740359 3269 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.6-0: Error: No such image: registry.k8s.io/etcd:3.5.6-0
I0310 09:55:41.740393 3269 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.26.1: Error: No such image: registry.k8s.io/kube-scheduler:v1.26.1
I0310 09:55:41.740671 3269 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
I0310 09:55:41.740815 3269 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.26.1: Error: No such image: registry.k8s.io/kube-apiserver:v1.26.1
I0310 09:55:41.740876 3269 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0310 09:55:41.740903 3269 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error: No such image: registry.k8s.io/pause:3.9
I0310 09:55:41.741187 3269 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.26.1: Error: No such image: registry.k8s.io/kube-proxy:v1.26.1
I0310 09:55:43.687739 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0310 09:55:43.718159 3269 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0310 09:55:43.718201 3269 docker.go:306] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0310 09:55:43.718244 3269 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0310 09:55:43.744877 3269 cache_images.go:286] Loading image from: /home/steve/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0310 09:55:43.745148 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
I0310 09:55:43.748508 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0310 09:55:43.748524 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0310 09:55:43.813160 3269 docker.go:273] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0310 09:55:43.813175 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I0310 09:55:43.880144 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.26.1
I0310 09:55:44.130513 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0310 09:55:44.130553 3269 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.26.1" needs transfer: "registry.k8s.io/kube-proxy:v1.26.1" does not exist at hash "46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd" in container runtime
I0310 09:55:44.130579 3269 docker.go:306] Removing image: registry.k8s.io/kube-proxy:v1.26.1
I0310 09:55:44.130661 3269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.26.1
I0310 09:55:44.150770 3269 cache_images.go:286] Loading image from: /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1
I0310 09:55:44.150872 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.26.1
I0310 09:55:44.153896 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.26.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.26.1: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.26.1': No such file or directory
I0310 09:55:44.153914 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 --> /var/lib/minikube/images/kube-proxy_v1.26.1 (21538304 bytes)
I0310 09:55:44.256886 3269 docker.go:273] Loading image: /var/lib/minikube/images/kube-proxy_v1.26.1
I0310 09:55:44.256900 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.26.1 | docker load"
I0310 09:55:44.522815 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.26.1
I0310 09:55:44.600274 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.6-0
I0310 09:55:44.708768 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.26.1
I0310 09:55:44.717985 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.9.3
I0310 09:55:44.728313 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.26.1
I0310 09:55:44.951977 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 from cache
I0310 09:55:44.952012 3269 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.26.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.26.1" does not exist at hash "655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f" in container runtime
I0310 09:55:44.952032 3269 docker.go:306] Removing image: registry.k8s.io/kube-scheduler:v1.26.1
I0310 09:55:44.952055 3269 cache_images.go:116] "registry.k8s.io/etcd:3.5.6-0" needs transfer: "registry.k8s.io/etcd:3.5.6-0" does not exist at hash "fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7" in container runtime
I0310 09:55:44.952079 3269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.26.1
I0310 09:55:44.952092 3269 docker.go:306] Removing image: registry.k8s.io/etcd:3.5.6-0
I0310 09:55:44.952115 3269 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.26.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.26.1" does not exist at hash "e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d" in container runtime
I0310 09:55:44.952128 3269 docker.go:306] Removing image: registry.k8s.io/kube-controller-manager:v1.26.1
I0310 09:55:44.952129 3269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.6-0
I0310 09:55:44.952135 3269 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
I0310 09:55:44.952142 3269
docker.go:306] Removing image: registry.k8s.io/coredns/coredns:v1.9.3 I0310 09:55:44.952151 3269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.26.1 I0310 09:55:44.952155 3269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.9.3 I0310 09:55:44.952180 3269 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.26.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.26.1" does not exist at hash "deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3" in container runtime I0310 09:55:44.952193 3269 docker.go:306] Removing image: registry.k8s.io/kube-apiserver:v1.26.1 I0310 09:55:44.952220 3269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.26.1 I0310 09:55:44.982528 3269 cache_images.go:286] Loading image from: /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 I0310 09:55:44.982637 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.26.1 I0310 09:55:44.982999 3269 cache_images.go:286] Loading image from: /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 I0310 09:55:44.983089 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.26.1 I0310 09:55:44.985483 3269 cache_images.go:286] Loading image from: /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 I0310 09:55:44.985509 3269 cache_images.go:286] Loading image from: /home/steve/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 I0310 09:55:44.985579 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.26.1 I0310 09:55:44.985623 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.6-0 I0310 09:55:44.985709 3269 cache_images.go:286] Loading image from: 
/home/steve/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 I0310 09:55:44.985785 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3 I0310 09:55:44.987889 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.26.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.26.1: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.26.1': No such file or directory I0310 09:55:44.987905 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 --> /var/lib/minikube/images/kube-apiserver_v1.26.1 (35322880 bytes) I0310 09:55:44.987931 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.26.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.26.1: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.26.1': No such file or directory I0310 09:55:44.987937 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 --> /var/lib/minikube/images/kube-scheduler_v1.26.1 (17488896 bytes) I0310 09:55:44.989690 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory I0310 09:55:44.989701 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes) I0310 09:55:44.989782 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.6-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.6-0: Process exited with status 
1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/etcd_3.5.6-0': No such file or directory I0310 09:55:44.989795 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 --> /var/lib/minikube/images/etcd_3.5.6-0 (102545408 bytes) I0310 09:55:44.989925 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.26.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.26.1: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.26.1': No such file or directory I0310 09:55:44.989934 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 --> /var/lib/minikube/images/kube-controller-manager_v1.26.1 (32248832 bytes) I0310 09:55:45.135565 3269 docker.go:273] Loading image: /var/lib/minikube/images/kube-scheduler_v1.26.1 I0310 09:55:45.135609 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.26.1 | docker load" I0310 09:55:45.141205 3269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9 I0310 09:55:46.030619 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 from cache I0310 09:55:46.030631 3269 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime I0310 09:55:46.030642 3269 docker.go:273] Loading image: /var/lib/minikube/images/coredns_v1.9.3 I0310 09:55:46.030651 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.9.3 | docker load" I0310 09:55:46.030660 3269 docker.go:306] Removing image: registry.k8s.io/pause:3.9 I0310 09:55:46.030723 3269 ssh_runner.go:195] Run: docker rmi 
registry.k8s.io/pause:3.9 I0310 09:55:46.511328 3269 cache_images.go:286] Loading image from: /home/steve/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 I0310 09:55:46.511348 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache I0310 09:55:46.511376 3269 docker.go:273] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.26.1 I0310 09:55:46.511383 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.26.1 | docker load" I0310 09:55:46.511431 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9 I0310 09:55:46.514461 3269 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/pause_3.9': No such file or directory I0310 09:55:46.514479 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes) I0310 09:55:47.248036 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 from cache I0310 09:55:47.248060 3269 docker.go:273] Loading image: /var/lib/minikube/images/kube-apiserver_v1.26.1 I0310 09:55:47.248070 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.26.1 | docker load" I0310 09:55:48.038762 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 from cache I0310 09:55:48.038784 3269 docker.go:273] Loading image: /var/lib/minikube/images/etcd_3.5.6-0 I0310 09:55:48.038794 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load" I0310 09:55:50.541840 
3269 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load": (2.503031427s) I0310 09:55:50.541851 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 from cache I0310 09:55:50.541863 3269 docker.go:273] Loading image: /var/lib/minikube/images/pause_3.9 I0310 09:55:50.541869 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.9 | docker load" I0310 09:55:50.669210 3269 cache_images.go:315] Transferred and loaded /home/steve/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache I0310 09:55:50.669240 3269 cache_images.go:123] Successfully loaded all cached images I0310 09:55:50.669246 3269 cache_images.go:92] LoadImages completed in 8.93086933s I0310 09:55:50.669349 3269 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0310 09:55:50.808103 3269 cni.go:84] Creating CNI manager for "" I0310 09:55:50.808121 3269 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0310 09:55:50.808147 3269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0310 09:55:50.808161 3269 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler 
ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth localStorageCapacityIsolation:false runtimeRequestTimeout:15m]}
I0310 09:55:50.808283 3269 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
hairpinMode: hairpin-veth
localStorageCapacityIsolation: false
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0310 09:55:50.808338 3269 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:localStorageCapacityIsolation Value:false}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0310 09:55:50.808392 3269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0310 09:55:50.817972 3269 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.26.1: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.26.1': No such file or directory

Initiating transfer...
I0310 09:55:50.818028 3269 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.26.1
I0310 09:55:50.824853 3269 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.26.1/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.26.1/bin/linux/amd64/kubectl.sha256
I0310 09:55:50.824934 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.1/kubectl
I0310 09:55:50.825456 3269 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.26.1/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.26.1/bin/linux/amd64/kubelet.sha256
I0310 09:55:50.825498 3269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0310 09:55:50.827570 3269 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.1/kubectl: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.1/kubectl': No such file or directory
I0310 09:55:50.827580 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/linux/amd64/v1.26.1/kubectl --> /var/lib/minikube/binaries/v1.26.1/kubectl (48021504 bytes)
I0310 09:55:50.835229 3269 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.26.1/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.26.1/bin/linux/amd64/kubeadm.sha256
I0310 09:55:50.835380 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.1/kubeadm
I0310 09:55:50.837807 3269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.1/kubelet
I0310 09:55:50.847271 3269 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.1/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.1/kubeadm': No such file or directory
I0310 09:55:50.847292 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/linux/amd64/v1.26.1/kubeadm --> /var/lib/minikube/binaries/v1.26.1/kubeadm (46764032 bytes)
I0310 09:55:50.847480 3269 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.1/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.1/kubelet': No such file or directory
I0310 09:55:50.847491 3269 ssh_runner.go:362] scp /home/steve/.minikube/cache/linux/amd64/v1.26.1/kubelet --> /var/lib/minikube/binaries/v1.26.1/kubelet (121256152 bytes)
I0310 09:55:51.260566 3269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0310 09:55:51.267219 3269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (440 bytes)
I0310 09:55:51.280549 3269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0310 09:55:51.292964 3269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
I0310 09:55:51.305608 3269 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0310 09:55:51.310411 3269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0310 09:55:51.318709 3269 certs.go:56] Setting up /home/steve/.minikube/profiles/minikube for IP: 192.168.49.2
I0310 09:55:51.318724 3269 certs.go:186] acquiring lock for shared ca certs: {Name:mk2097f3ba5a37ab1c1eca80f667cdde0f9bf329 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:51.318889 3269 certs.go:195] skipping minikubeCA CA generation: /home/steve/.minikube/ca.key
I0310 09:55:51.318934 3269 certs.go:195] skipping proxyClientCA CA generation: /home/steve/.minikube/proxy-client-ca.key
I0310 09:55:51.318961 3269 certs.go:315] generating minikube-user signed cert: /home/steve/.minikube/profiles/minikube/client.key
I0310 09:55:51.318967 3269 crypto.go:68] Generating cert /home/steve/.minikube/profiles/minikube/client.crt with IP's: []
I0310 09:55:51.360607 3269 crypto.go:156] Writing cert to /home/steve/.minikube/profiles/minikube/client.crt ...
I0310 09:55:51.360618 3269 lock.go:35] WriteFile acquiring /home/steve/.minikube/profiles/minikube/client.crt: {Name:mk3f8904aa37866cc83d3cf1a6c0bd3ae348c9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:51.360783 3269 crypto.go:164] Writing key to /home/steve/.minikube/profiles/minikube/client.key ...
I0310 09:55:51.360787 3269 lock.go:35] WriteFile acquiring /home/steve/.minikube/profiles/minikube/client.key: {Name:mkf88deb67443f64b22342d6c2c87809559959f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:51.360852 3269 certs.go:315] generating minikube signed cert: /home/steve/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0310 09:55:51.360858 3269 crypto.go:68] Generating cert /home/steve/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0310 09:55:51.456900 3269 crypto.go:156] Writing cert to /home/steve/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0310 09:55:51.456912 3269 lock.go:35] WriteFile acquiring /home/steve/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk4054822e4f507a70a5adfa5324b6c3e277a708 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:51.457109 3269 crypto.go:164] Writing key to /home/steve/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0310 09:55:51.457114 3269 lock.go:35] WriteFile acquiring /home/steve/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk59d86c26385acb70f6e9abe2c58f1574657f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:51.457297 3269 certs.go:333] copying /home/steve/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/steve/.minikube/profiles/minikube/apiserver.crt
I0310 09:55:51.457543 3269 certs.go:337] copying /home/steve/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/steve/.minikube/profiles/minikube/apiserver.key
I0310 09:55:51.457784 3269 certs.go:315] generating aggregator signed cert: /home/steve/.minikube/profiles/minikube/proxy-client.key
I0310 09:55:51.457795 3269 crypto.go:68] Generating cert /home/steve/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0310 09:55:51.618322 3269 crypto.go:156] Writing cert to /home/steve/.minikube/profiles/minikube/proxy-client.crt ...
I0310 09:55:51.618338 3269 lock.go:35] WriteFile acquiring /home/steve/.minikube/profiles/minikube/proxy-client.crt: {Name:mk3633d924f770c9c32a62b73120246403a59c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:51.618558 3269 crypto.go:164] Writing key to /home/steve/.minikube/profiles/minikube/proxy-client.key ...
I0310 09:55:51.618564 3269 lock.go:35] WriteFile acquiring /home/steve/.minikube/profiles/minikube/proxy-client.key: {Name:mk110e248b1f9413a91f236cc870706fe5eeab59 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:55:51.618776 3269 certs.go:401] found cert: /home/steve/.minikube/certs/home/steve/.minikube/certs/ca-key.pem (1675 bytes)
I0310 09:55:51.618807 3269 certs.go:401] found cert: /home/steve/.minikube/certs/home/steve/.minikube/certs/ca.pem (1074 bytes)
I0310 09:55:51.618829 3269 certs.go:401] found cert: /home/steve/.minikube/certs/home/steve/.minikube/certs/cert.pem (1119 bytes)
I0310 09:55:51.618850 3269 certs.go:401] found cert: /home/steve/.minikube/certs/home/steve/.minikube/certs/key.pem (1679 bytes)
I0310 09:55:51.619484 3269 ssh_runner.go:362] scp /home/steve/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0310 09:55:51.646196 3269 ssh_runner.go:362] scp /home/steve/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0310 09:55:51.664145 3269 ssh_runner.go:362] scp /home/steve/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0310 09:55:51.681161 3269 ssh_runner.go:362] scp /home/steve/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0310 09:55:51.699613 3269 ssh_runner.go:362] scp /home/steve/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0310 09:55:51.716124 3269 ssh_runner.go:362] scp /home/steve/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0310 09:55:51.736990 3269 ssh_runner.go:362] scp /home/steve/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0310 09:55:51.754289 3269 ssh_runner.go:362] scp /home/steve/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0310 09:55:51.772215 3269 ssh_runner.go:362] scp /home/steve/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0310 09:55:51.790577 3269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0310 09:55:51.805934 3269 ssh_runner.go:195] Run: openssl version
I0310 09:55:51.813564 3269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0310 09:55:51.822960 3269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0310 09:55:51.826956 3269 certs.go:444] hashing: -rw-r--r--. 1 root root 1111 Mar 6 11:09 /usr/share/ca-certificates/minikubeCA.pem
I0310 09:55:51.827023 3269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0310 09:55:51.831193 3269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0310 09:55:51.837723 3269 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:localStorageCapacityIsolation Value:false}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/steve:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0310 09:55:51.837871 3269 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0310 09:55:51.857191 3269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0310 09:55:51.864223 3269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0310 09:55:51.871265 3269 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0310 09:55:51.871337 3269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0310 09:55:51.879493 3269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0310 09:55:51.879529 3269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0310 09:55:51.925347 3269 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
I0310 09:55:51.925504 3269 kubeadm.go:322] [preflight] Running pre-flight checks
I0310 09:55:52.032459 3269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0310 09:55:52.032567 3269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0310 09:55:52.032648 3269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0310 09:55:52.150509 3269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0310 09:55:52.167054 3269 out.go:204] ▪ Generating certificates and keys ...
I0310 09:55:52.167228 3269 kubeadm.go:322] [certs] Using existing ca certificate authority
I0310 09:55:52.167341 3269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0310 09:55:52.229484 3269 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0310 09:55:52.410612 3269 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0310 09:55:52.500315 3269 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0310 09:55:52.558075 3269 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0310 09:55:52.674038 3269 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0310 09:55:52.674150 3269 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0310 09:55:53.016280 3269 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0310 09:55:53.016410 3269 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0310 09:55:53.141450 3269 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0310 09:55:53.298242 3269 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0310 09:55:53.403169 3269 kubeadm.go:322] [certs] Generating "sa" key and public key
I0310 09:55:53.403207 3269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0310 09:55:53.465245 3269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0310 09:55:53.546452 3269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0310 09:55:53.581017 3269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0310 09:55:53.755584 3269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0310 09:55:53.772119 3269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0310 09:55:53.773677 3269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0310 09:55:53.773789 3269 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0310 09:55:53.897686 3269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0310 09:55:53.914300 3269 out.go:204] ▪ Booting up control plane ...
I0310 09:55:53.914477 3269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0310 09:55:53.914586 3269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0310 09:55:53.914631 3269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0310 09:55:53.914691 3269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0310 09:55:53.914788 3269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0310 09:56:10.405250 3269 kubeadm.go:322] [apiclient] All control plane components are healthy after 16.501228 seconds
I0310 09:56:10.405337 3269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0310 09:56:10.429119 3269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0310 09:56:10.974676 3269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0310 09:56:10.974785 3269 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0310 09:56:11.493737 3269 kubeadm.go:322] [bootstrap-token] Using token: zg488n.975noo3v7huvvmkg
I0310 09:56:11.522666 3269 out.go:204] ▪ Configuring RBAC rules ...
I0310 09:56:11.522992 3269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0310 09:56:11.538666 3269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0310 09:56:11.562903 3269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0310 09:56:11.571728 3269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0310 09:56:11.580575 3269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0310 09:56:11.589822 3269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0310 09:56:11.618856 3269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0310 09:56:11.869533 3269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0310 09:56:11.984557 3269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0310 09:56:11.986716 3269 kubeadm.go:322]
I0310 09:56:11.986781 3269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0310 09:56:11.986784 3269 kubeadm.go:322]
I0310 09:56:11.986830 3269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0310 09:56:11.986831 3269 kubeadm.go:322]
I0310 09:56:11.986845 3269 kubeadm.go:322] mkdir -p $HOME/.kube
I0310 09:56:11.986883 3269 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0310 09:56:11.986911 3269 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0310 09:56:11.986913 3269 kubeadm.go:322]
I0310 09:56:11.986943 3269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0310 09:56:11.986944 3269 kubeadm.go:322]
I0310 09:56:11.986971 3269 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0310 09:56:11.986973 3269 kubeadm.go:322]
I0310 09:56:11.987004 3269 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0310 09:56:11.987046 3269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0310 09:56:11.987085 3269 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0310 09:56:11.987088 3269 kubeadm.go:322]
I0310 09:56:11.987136 3269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0310 09:56:11.987179 3269 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0310 09:56:11.987181 3269 kubeadm.go:322]
I0310 09:56:11.987227 3269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zg488n.975noo3v7huvvmkg \
I0310 09:56:11.987285 3269 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:51d17d632f392f3e8d068f4210de01367267f3ad92f10680a10d97c6fb8a5e7c \
I0310 09:56:11.987296 3269 kubeadm.go:322] --control-plane
I0310 09:56:11.987298 3269 kubeadm.go:322]
I0310 09:56:11.987346 3269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0310 09:56:11.987347 3269 kubeadm.go:322]
I0310
09:56:11.987393 3269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zg488n.975noo3v7huvvmkg \ I0310 09:56:11.987472 3269 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:51d17d632f392f3e8d068f4210de01367267f3ad92f10680a10d97c6fb8a5e7c I0310 09:56:11.991355 3269 kubeadm.go:322] W0310 09:55:51.917110 1789 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! I0310 09:56:11.991503 3269 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet I0310 09:56:11.991598 3269 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0310 09:56:11.991617 3269 cni.go:84] Creating CNI manager for "" I0310 09:56:11.991651 3269 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0310 09:56:11.999466 3269 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ... 
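Editor's note: the `--discovery-token-ca-cert-hash` value printed in the join commands above can be recomputed from the cluster CA certificate with the pipeline documented for kubeadm token-based discovery. A minimal sketch — on a real node the input would be `/etc/kubernetes/pki/ca.crt` (inside the minikube container, `/var/lib/minikube/certs/ca.crt`); here a throwaway self-signed CA is generated so the snippet is self-contained:

```shell
# Generate a demo CA so the snippet runs anywhere; on a live cluster you would
# point ca.crt at the real kubeadm CA instead.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" 2>/dev/null

# kubeadm's discovery hash is SHA-256 over the DER encoding of the CA's
# public key, rendered as lowercase hex.
hash=$(openssl x509 -pubkey -in "$workdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:$hash"
```

Run against the real CA, the output should match the `sha256:51d1…` value in the log.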
I0310 09:56:12.004726 3269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0310 09:56:12.016166 3269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0310 09:56:12.029885 3269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0310 09:56:12.030019 3269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0310 09:56:12.030093 3269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=ddac20b4b34a9c8c857fc602203b6ba2679794d3 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_03_10T09_56_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0310 09:56:12.135690 3269 kubeadm.go:1073] duration metric: took 105.726474ms to wait for elevateKubeSystemPrivileges.
I0310 09:56:12.135720 3269 ops.go:34] apiserver oom_adj: -16
I0310 09:56:12.135728 3269 kubeadm.go:403] StartCluster complete in 20.298014999s
I0310 09:56:12.135741 3269 settings.go:142] acquiring lock: {Name:mk660f3d3ca31c3481fd0c75b13d8f13888fa403 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:56:12.135947 3269 settings.go:150] Updating kubeconfig: /home/steve/.kube/config
I0310 09:56:12.137006 3269 lock.go:35] WriteFile acquiring /home/steve/.kube/config: {Name:mke41b13dc462ce6ef68d565498aa9dad0f50c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0310 09:56:12.137274 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0310 09:56:12.137357 3269 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0310 09:56:12.137432 3269 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0310 09:56:12.137445 3269 addons.go:227] Setting addon storage-provisioner=true in "minikube"
W0310 09:56:12.137449 3269 addons.go:236] addon storage-provisioner should already be in state true
I0310 09:56:12.137493 3269 host.go:66] Checking if "minikube" exists ...
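Editor's note: the `duration metric` and `StartCluster complete in …` figures above can be cross-checked by subtracting two klog timestamps (`Immdd hh:mm:ss.uuuuuu` headers). A small illustrative helper — the `elapsed` function is hypothetical, not part of minikube; the timestamps fed to it are copied from this log:

```shell
# Print elapsed seconds between two klog-style hh:mm:ss.uuuuuu times
# taken from the same day. Purely illustrative.
elapsed() {
  awk -v a="$1" -v b="$2" 'BEGIN {
    split(a, x, "[:.]"); split(b, y, "[:.]")
    ta = x[1] * 3600 + x[2] * 60 + x[3] + x[4] / 1e6
    tb = y[1] * 3600 + y[2] * 60 + y[3] + y[4] / 1e6
    printf "%.6fs\n", tb - ta
  }'
}

# From the first [certs] entry to the "StartCluster complete" entry --
# close to the 20.298014999s the log itself reports.
elapsed 09:55:52.167228 09:56:12.135728   # prints 19.968500s
```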
I0310 09:56:12.137497 3269 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0310 09:56:12.137506 3269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0310 09:56:12.137560 3269 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0310 09:56:12.137869 3269 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0310 09:56:12.137925 3269 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0310 09:56:12.219898 3269 addons.go:227] Setting addon default-storageclass=true in "minikube"
W0310 09:56:12.219909 3269 addons.go:236] addon default-storageclass should already be in state true
I0310 09:56:12.219931 3269 host.go:66] Checking if "minikube" exists ...
I0310 09:56:12.220310 3269 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0310 09:56:12.240493 3269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0310 09:56:12.250061 3269 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0310 09:56:12.264808 3269 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0310 09:56:12.264820 3269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0310 09:56:12.264913 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:56:12.323377 3269 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0310 09:56:12.323389 3269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0310 09:56:12.323496 3269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 09:56:12.385118 3269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 09:56:12.410543 3269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 09:56:12.501692 3269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0310 09:56:12.516971 3269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0310 09:56:12.682410 3269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0310 09:56:12.682457 3269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0310 09:56:12.698985 3269 out.go:177] 🔎 Verifying Kubernetes components...
I0310 09:56:12.705336 3269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0310 09:56:12.855117 3269 start.go:919] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0310 09:56:12.961756 3269 api_server.go:51] waiting for apiserver process to appear ...
I0310 09:56:12.981373 3269 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0310 09:56:12.981424 3269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0310 09:56:12.991294 3269 addons.go:492] enable addons completed in 853.9227ms: enabled=[storage-provisioner default-storageclass]
I0310 09:56:12.994950 3269 api_server.go:71] duration metric: took 312.45697ms to wait for apiserver process to appear ...
I0310 09:56:12.994963 3269 api_server.go:87] waiting for apiserver healthz status ...
I0310 09:56:12.994972 3269 api_server.go:252] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0310 09:56:12.999465 3269 api_server.go:278] https://192.168.49.2:8443/healthz returned 200: ok
I0310 09:56:13.000743 3269 api_server.go:140] control plane version: v1.26.1
I0310 09:56:13.000754 3269 api_server.go:130] duration metric: took 5.787038ms to wait for apiserver health ...
I0310 09:56:13.000758 3269 system_pods.go:43] waiting for kube-system pods to appear ...
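Editor's note: the long `ssh_runner` sed pipeline above is how minikube injects the `host.minikube.internal` record into the CoreDNS ConfigMap (confirmed by the `host record injected into CoreDNS's ConfigMap` entry). The text transformation can be reproduced on its own; the sample Corefile below is illustrative, not the cluster's actual ConfigMap, and GNU sed is assumed for the `\n` handling in the insert text:

```shell
# A trimmed sample Corefile, standing in for the coredns ConfigMap contents.
corefile='.:53 {
    errors
    health
    forward . /etc/resolv.conf
}'

# Same edits as minikube's command: a hosts{} block is inserted before the
# forward plugin, and a "log" directive before "errors".
patched=$(printf '%s\n' "$corefile" | sed \
  -e '/forward . \/etc\/resolv.conf.*/i \    hosts {\n        192.168.49.1 host.minikube.internal\n        fallthrough\n    }' \
  -e '/errors *$/i \    log')
printf '%s\n' "$patched"
```

In the real command the patched YAML is then piped into `kubectl replace -f -`.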
I0310 09:56:13.005777 3269 system_pods.go:59] 5 kube-system pods found
I0310 09:56:13.005787 3269 system_pods.go:61] "etcd-minikube" [4a81b458-af9c-40ec-8d33-b16d5b3e3073] Pending
I0310 09:56:13.005789 3269 system_pods.go:61] "kube-apiserver-minikube" [e8303c85-a9e0-4383-947e-1e8320b52a65] Pending
I0310 09:56:13.005791 3269 system_pods.go:61] "kube-controller-manager-minikube" [57978c92-29d7-4dcb-bccf-f973108cfe92] Pending
I0310 09:56:13.005793 3269 system_pods.go:61] "kube-scheduler-minikube" [01880e5f-549e-400c-9e79-748cf8258f26] Pending
I0310 09:56:13.005798 3269 system_pods.go:61] "storage-provisioner" [fed22aa6-e038-45ee-bab6-2a45d73e1409] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
I0310 09:56:13.005802 3269 system_pods.go:74] duration metric: took 5.040746ms to wait for pod list to return data ...
I0310 09:56:13.005808 3269 kubeadm.go:578] duration metric: took 323.323754ms to wait for : map[apiserver:true system_pods:true] ...
I0310 09:56:13.005816 3269 node_conditions.go:102] verifying NodePressure condition ...
I0310 09:56:13.029580 3269 node_conditions.go:122] node storage ephemeral capacity is 0
I0310 09:56:13.029591 3269 node_conditions.go:123] node cpu capacity is 4
I0310 09:56:13.029599 3269 node_conditions.go:105] duration metric: took 23.781147ms to run NodePressure ...
I0310 09:56:13.029608 3269 start.go:228] waiting for startup goroutines ...
I0310 09:56:13.029612 3269 start.go:233] waiting for cluster config update ...
I0310 09:56:13.029618 3269 start.go:240] writing updated cluster config ...
I0310 09:56:13.029875 3269 ssh_runner.go:195] Run: rm -f paused
I0310 09:56:13.073845 3269 start.go:555] kubectl: 1.25.6, cluster: 1.26.1 (minor skew: 1)
I0310 09:56:13.079569 3269 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Fri 2023-03-10 09:55:35 UTC, end at Fri 2023-03-10 09:58:09 UTC. --
Mar 10 09:55:37 minikube dockerd[380]: time="2023-03-10T09:55:37.770219739Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 10 09:55:37 minikube dockerd[380]: time="2023-03-10T09:55:37.770229739Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 10 09:55:37 minikube dockerd[380]: time="2023-03-10T09:55:37.771731623Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 10 09:55:37 minikube dockerd[380]: time="2023-03-10T09:55:37.771771923Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 10 09:55:37 minikube dockerd[380]: time="2023-03-10T09:55:37.771793622Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 10 09:55:37 minikube dockerd[380]: time="2023-03-10T09:55:37.771804622Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 10 09:55:37 minikube dockerd[380]: time="2023-03-10T09:55:37.805093068Z" level=info msg="Loading containers: start."
Mar 10 09:55:38 minikube dockerd[380]: time="2023-03-10T09:55:38.011574267Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 10 09:55:38 minikube dockerd[380]: time="2023-03-10T09:55:38.082574811Z" level=info msg="Loading containers: done."
Mar 10 09:55:38 minikube dockerd[380]: time="2023-03-10T09:55:38.087869854Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=btrfs version=20.10.23
Mar 10 09:55:38 minikube dockerd[380]: time="2023-03-10T09:55:38.087928454Z" level=info msg="Daemon has completed initialization"
Mar 10 09:55:38 minikube systemd[1]: Started Docker Application Container Engine.
Mar 10 09:55:38 minikube dockerd[380]: time="2023-03-10T09:55:38.115820856Z" level=info msg="API listen on [::]:2376"
Mar 10 09:55:38 minikube dockerd[380]: time="2023-03-10T09:55:38.117674837Z" level=info msg="API listen on /var/run/docker.sock"
Mar 10 09:55:39 minikube dockerd[380]: time="2023-03-10T09:55:39.234957231Z" level=info msg="Processing signal 'terminated'"
Mar 10 09:55:39 minikube dockerd[380]: time="2023-03-10T09:55:39.235938520Z" level=info msg="Daemon shutdown complete"
Mar 10 09:55:39 minikube systemd[1]: Stopping Docker Application Container Engine...
Mar 10 09:55:39 minikube systemd[1]: docker.service: Succeeded.
Mar 10 09:55:39 minikube systemd[1]: Stopped Docker Application Container Engine.
Mar 10 09:55:39 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.418137379Z" level=info msg="Starting up"
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.419206767Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.419223967Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.419239167Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.419245067Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.421070747Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.421092647Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.421107447Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.421113847Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.499828108Z" level=info msg="Loading containers: start."
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.699503680Z" level=info msg="Processing signal 'terminated'"
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.806053945Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.893191516Z" level=info msg="Loading containers: done."
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.906824071Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=btrfs version=20.10.23
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.906908470Z" level=info msg="Daemon has completed initialization"
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.953573673Z" level=info msg="API listen on [::]:2376"
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.958811117Z" level=info msg="API listen on /var/run/docker.sock"
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.959554109Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Mar 10 09:55:39 minikube dockerd[615]: time="2023-03-10T09:55:39.960003504Z" level=info msg="Daemon shutdown complete"
Mar 10 09:55:39 minikube systemd[1]: docker.service: Succeeded.
Mar 10 09:55:39 minikube systemd[1]: Stopped Docker Application Container Engine.
Mar 10 09:55:39 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.009225880Z" level=info msg="Starting up"
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.011132559Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.011167259Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.011181559Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.011188059Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.012047750Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.012104649Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.012149749Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.012172948Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.353747408Z" level=info msg="Loading containers: start."
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.627900287Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.736939925Z" level=info msg="Loading containers: done."
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.791165747Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.791255546Z" level=info msg="Daemon has completed initialization"
Mar 10 09:55:40 minikube systemd[1]: Started Docker Application Container Engine.
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.856779248Z" level=info msg="API listen on [::]:2376"
Mar 10 09:55:40 minikube dockerd[799]: time="2023-03-10T09:55:40.860895304Z" level=info msg="API listen on /var/run/docker.sock"
Mar 10 09:56:56 minikube dockerd[799]: time="2023-03-10T09:56:56.037096914Z" level=info msg="ignoring event" container=d880654844704ddc89ec0210b02574175b9fe814f74fb500f93fcaf76ef14a29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID
f95b601b59cef   6e38f40d628db   About a minute ago   Running   storage-provisioner       1         d2ae495e6d319
342648ab85a92   5185b96f0becf   About a minute ago   Running   coredns                   0         682ec82d2a791
d880654844704   6e38f40d628db   About a minute ago   Exited    storage-provisioner       0         d2ae495e6d319
1c219f33c115c   46a6bb3c77ce0   About a minute ago   Running   kube-proxy                0         8d929dc88815d
5d13140a62b86   deb04688c4a35   2 minutes ago        Running   kube-apiserver            0         50207bcb87ecf
1c354b479fcd2   655493523f607   2 minutes ago        Running   kube-scheduler            0         39475e586e192
9627160809c09   fce326961ae2d   2 minutes ago        Running   etcd                      0         351e4a8cb0cfa
5fff24ab581ab   e9c08e11b07f6   2 minutes ago        Running   kube-controller-manager   0         31b16e0f0fc3e
*
* ==> coredns [342648ab85a9] <==
*
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:54843 - 55569 "HINFO IN 1052465385050098417.4242326696941886200. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.291666492s
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=ddac20b4b34a9c8c857fc602203b6ba2679794d3
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2023_03_10T09_56_12_0700
                    minikube.k8s.io/version=v1.29.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 10 Mar 2023 09:56:08 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Fri, 10 Mar 2023 09:58:04 +0000
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  MemoryPressure  False   Fri, 10 Mar 2023 09:56:33 +0000   Fri, 10 Mar 2023 09:56:07 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Fri, 10 Mar 2023 09:56:33 +0000   Fri, 10 Mar 2023 09:56:07 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Fri, 10 Mar 2023 09:56:33 +0000   Fri, 10 Mar 2023 09:56:07 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Fri, 10 Mar 2023 09:56:33 +0000   Fri, 10 Mar 2023 09:56:12 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:            4
  hugepages-1Gi:  0
  hugepages-2Mi:  0
  memory:         32869416Ki
  pods:           110
Allocatable:
  cpu:            4
  hugepages-1Gi:  0
  hugepages-2Mi:  0
  memory:         32869416Ki
  pods:           110
System Info:
  Machine ID:                 f1a46cb41c9d45969ef9bdf4a48d9b28
  System UUID:                0e4d0ca8-e77b-4dca-a653-327b1238c7d8
  Boot ID:                    307951f6-9213-481b-a672-4571478c9b75
  Kernel Version:             6.1.14-200.fc37.x86_64
  OS Image:                   Ubuntu 20.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.23
  Kubelet Version:            v1.26.1
  Kube-Proxy Version:         v1.26.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests         CPU Limits        Memory Requests       Memory Limits         Age
  ---------    ----                              ------------         ----------        ---------------       -------------         ---
  kube-system  coredns-787d4945fb-qbb42          100m (2%!)(MISSING)  0 (0%!)(MISSING)  70Mi (0%!)(MISSING)   170Mi (0%!)(MISSING)  105s
  kube-system  etcd-minikube                     100m (2%!)(MISSING)  0 (0%!)(MISSING)  100Mi (0%!)(MISSING)  0 (0%!)(MISSING)      117s
  kube-system  kube-apiserver-minikube           250m (6%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      117s
  kube-system  kube-controller-manager-minikube  200m (5%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      117s
  kube-system  kube-proxy-xqp65                  0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      105s
  kube-system  kube-scheduler-minikube           100m (2%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      117s
  kube-system  storage-provisioner               0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      117s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (18%!)(MISSING)  0 (0%!)(MISSING)
  memory             170Mi (0%!)(MISSING)  170Mi (0%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age                    From             Message
  ----    ------                   ----                   ----             -------
  Normal  Starting                 103s                   kube-proxy
  Normal  NodeHasSufficientMemory  2m15s (x6 over 2m15s)  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m15s (x5 over 2m15s)  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m15s (x5 over 2m15s)  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 118s                   kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  118s                   kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  117s                   kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    117s                   kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     117s                   kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeReady                117s                   kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           105s                   node-controller  Node minikube event: Registered Node minikube in Controller
*
* ==> dmesg <==
*
[Mar10 09:53] PCI: System does not support PCI
[ +0.390065] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +2.330782] systemd-sysv-generator[553]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.000087] systemd-sysv-generator[553]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.010697] systemd-gpt-auto-generator[546]: Failed to dissect: Permission denied
[ +0.000759] (sd-executor)[531]: /usr/lib/systemd/system-generators/systemd-gpt-auto-generator failed with exit status 1.
[ +4.228589] CIFS: No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3.1.1), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3.1.1 (or even SMB3 or SMB2.1) specify vers=1.0 on mount.
[ +10.393161] CIFS: VFS: Error connecting to socket. Aborting operation.
[ +0.000009] CIFS: VFS: cifs_mount failed w/return code = -115
[ +0.259580] CIFS: VFS: \\clearswift.org Send error in SessSetup = -13
[ +0.000036] CIFS: VFS: cifs_mount failed w/return code = -13
[Mar10 09:54] hv_storvsc dd674bd3-eb2e-4a62-befb-722b6146feba: tag#138 cmd 0x43 status: scsi 0x2 srb 0x84 hv 0xc0000001
*
* ==> etcd [9627160809c0] <==
*
{"level":"info","ts":"2023-03-10T09:56:05.924Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2023-03-10T09:56:05.924Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2023-03-10T09:56:05.924Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-10T09:56:05.924Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2023-03-10T09:56:05.924Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-03-10T09:56:05.938Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"13.051761ms"}
{"level":"info","ts":"2023-03-10T09:56:05.988Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2023-03-10T09:56:05.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2023-03-10T09:56:05.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2023-03-10T09:56:05.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2023-03-10T09:56:05.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2023-03-10T09:56:05.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2023-03-10T09:56:06.008Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-03-10T09:56:06.044Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2023-03-10T09:56:06.063Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-03-10T09:56:06.084Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.6","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-03-10T09:56:06.085Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-03-10T09:56:06.085Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-10T09:56:06.085Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-10T09:56:06.085Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-10T09:56:06.086Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-10T09:56:06.086Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-03-10T09:56:06.086Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-03-10T09:56:06.086Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-10T09:56:06.086Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2023-03-10T09:56:06.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2023-03-10T09:56:06.104Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-10T09:56:06.125Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-10T09:56:06.125Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-10T09:56:06.125Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-10T09:56:06.125Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-10T09:56:06.125Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-10T09:56:06.126Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-10T09:56:06.127Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2023-03-10T09:56:06.136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-10T09:56:06.137Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-10T09:56:06.137Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
*
* ==> kernel <==
*
09:58:09 up 5 min, 0 users, load average: 0.38, 0.63, 0.32
Linux minikube 6.1.14-200.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Feb 26 00:13:26 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [5d13140a62b8] <==
*
W0310 09:56:07.327497 1 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0310 09:56:07.896151 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0310 09:56:07.896179 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0310 09:56:07.896406 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0310 09:56:07.896596 1 secure_serving.go:210] Serving securely on [::]:8443
I0310 09:56:07.896646 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0310 09:56:07.896877 1 controller.go:83] Starting OpenAPI AggregationController
I0310 09:56:07.896967 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0310 09:56:07.897063 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0310 09:56:07.897295 1 autoregister_controller.go:141] Starting autoregister controller
I0310 09:56:07.897505 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0310 09:56:07.897721 1 apf_controller.go:361] Starting API Priority and Fairness config controller
I0310 09:56:07.897941 1 available_controller.go:494] Starting AvailableConditionController
I0310 09:56:07.897947 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0310 09:56:07.898188 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0310 09:56:07.898195 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0310 09:56:07.898709 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0310 09:56:07.898718 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0310 09:56:07.898758 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0310 09:56:07.898840 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0310 09:56:07.900614 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0310 09:56:07.900714 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0310 09:56:07.900953 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0310 09:56:07.904608 1 controller.go:121] Starting legacy_token_tracking_controller
I0310 09:56:07.905996 1 shared_informer.go:273] Waiting for caches to sync for configmaps
I0310 09:56:07.904685 1 customresource_discovery_controller.go:288] Starting DiscoveryController
I0310 09:56:07.906289 1 naming_controller.go:291] Starting NamingConditionController
I0310 09:56:07.906317 1 establishing_controller.go:76] Starting EstablishingController
I0310 09:56:07.906333 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0310 09:56:07.906361 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0310 09:56:07.906384 1 crd_finalizer.go:266] Starting CRDFinalizer
I0310 09:56:07.906255 1 controller.go:85] Starting OpenAPI controller
I0310 09:56:07.906281 1 controller.go:85] Starting OpenAPI V3 controller
I0310 09:56:07.919749 1 controller.go:615] quota admission added evaluator for: namespaces
I0310 09:56:07.971586 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0310 09:56:07.980691 1 shared_informer.go:280] Caches are synced for node_authorizer
I0310 09:56:07.997948 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0310 09:56:07.997971 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0310 09:56:07.997997 1 cache.go:39] Caches are synced for autoregister controller
I0310 09:56:07.998010 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0310 09:56:07.998233 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0310 09:56:07.998752 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0310 09:56:08.001734 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0310 09:56:08.006539 1 shared_informer.go:280] Caches are synced for configmaps
I0310 09:56:08.645329 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0310 09:56:08.918246 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0310 09:56:08.929908 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0310 09:56:08.930025 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0310 09:56:10.084174 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0310 09:56:10.176882 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0310 09:56:10.358957 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0310 09:56:10.375061 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0310 09:56:10.376083 1 controller.go:615] quota admission added evaluator for: endpoints
I0310 09:56:10.397566 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0310 09:56:10.945595 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0310 09:56:11.834276 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0310 09:56:11.867660 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0310 09:56:11.885446 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0310 09:56:24.161165 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0310 09:56:24.623540 1 controller.go:615] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [5fff24ab581a] <==
*
I0310 09:56:24.085127 1 job_controller.go:191] Starting job controller
I0310 09:56:24.085148 1 shared_informer.go:273] Waiting for caches to sync for job
I0310 09:56:24.097789 1 controllermanager.go:622] Started "deployment"
I0310 09:56:24.097968 1 deployment_controller.go:154] "Starting controller" controller="deployment"
I0310 09:56:24.098030 1 shared_informer.go:273] Waiting for caches to sync for deployment
I0310 09:56:24.102349 1 shared_informer.go:273] Waiting for caches to sync for resource quota
W0310 09:56:24.109908 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0310 09:56:24.111810 1 shared_informer.go:280] Caches are synced for node
I0310 09:56:24.112019 1 range_allocator.go:167] Sending events to api server.
I0310 09:56:24.112138 1 range_allocator.go:171] Starting range CIDR allocator
I0310 09:56:24.112204 1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
I0310 09:56:24.112257 1 shared_informer.go:280] Caches are synced for cidrallocator
I0310 09:56:24.113451 1 shared_informer.go:280] Caches are synced for taint
I0310 09:56:24.113520 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0310 09:56:24.113608 1 node_lifecycle_controller.go:1053] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0310 09:56:24.113660 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0310 09:56:24.113931 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0310 09:56:24.113978 1 taint_manager.go:211] "Sending events to api server"
I0310 09:56:24.114460 1 event.go:294] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0310 09:56:24.115905 1 shared_informer.go:280] Caches are synced for certificate-csrapproving
I0310 09:56:24.118781 1 shared_informer.go:273] Waiting for caches to sync for garbage collector
I0310 09:56:24.120594 1 shared_informer.go:280] Caches are synced for expand
I0310 09:56:24.121979 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-legacy-unknown
I0310 09:56:24.122039 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
I0310 09:56:24.122076 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
I0310 09:56:24.122125 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0310 09:56:24.126754 1 shared_informer.go:280] Caches are synced for ephemeral
I0310 09:56:24.137980 1 range_allocator.go:372] Set node minikube PodCIDR to [10.244.0.0/24]
I0310 09:56:24.144059 1 shared_informer.go:280] Caches are synced for disruption
I0310 09:56:24.148442 1 shared_informer.go:280] Caches are synced for daemon sets
I0310 09:56:24.148461 1 shared_informer.go:280] Caches are synced for persistent volume
I0310 09:56:24.153766 1 shared_informer.go:280] Caches are synced for namespace
I0310 09:56:24.157374 1 shared_informer.go:280] Caches are synced for TTL after finished
I0310 09:56:24.158551 1 shared_informer.go:280] Caches are synced for stateful set
I0310 09:56:24.158683 1 shared_informer.go:280] Caches are synced for PVC protection
I0310 09:56:24.161752 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0310 09:56:24.165051 1 shared_informer.go:280] Caches are synced for HPA
I0310 09:56:24.168977 1 shared_informer.go:280] Caches are synced for TTL
I0310 09:56:24.169005 1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
I0310 09:56:24.170936 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0310 09:56:24.172173 1 shared_informer.go:280] Caches are synced for ReplicationController
I0310 09:56:24.180868 1 shared_informer.go:280] Caches are synced for GC
I0310 09:56:24.184616 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xqp65"
I0310 09:56:24.185469 1 shared_informer.go:280] Caches are synced for job
I0310 09:56:24.187942 1 shared_informer.go:280] Caches are synced for bootstrap_signer
I0310 09:56:24.198091 1 shared_informer.go:280] Caches are synced for service account
I0310 09:56:24.198247 1 shared_informer.go:280] Caches are synced for deployment
I0310 09:56:24.199141 1 shared_informer.go:280] Caches are synced for cronjob
I0310 09:56:24.201373 1 shared_informer.go:280] Caches are synced for crt configmap
I0310 09:56:24.208926 1 shared_informer.go:280] Caches are synced for PV protection
I0310 09:56:24.248480 1 shared_informer.go:280] Caches are synced for attach detach
I0310 09:56:24.303068 1 shared_informer.go:280] Caches are synced for resource quota
I0310 09:56:24.303086 1 shared_informer.go:280] Caches are synced for resource quota
I0310 09:56:24.337954 1 shared_informer.go:280] Caches are synced for endpoint
I0310 09:56:24.398634 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0310 09:56:24.631789 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 1"
I0310 09:56:24.719026 1 shared_informer.go:280] Caches are synced for garbage collector
I0310 09:56:24.741452 1 shared_informer.go:280] Caches are synced for garbage collector
I0310 09:56:24.741486 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0310 09:56:24.813932 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-qbb42"
*
* ==> kube-proxy [1c219f33c115] <==
*
I0310 09:56:26.075784 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0310 09:56:26.075909 1 server_others.go:109] "Detected node IP" address="192.168.49.2"
I0310 09:56:26.075944 1 server_others.go:535] "Using iptables proxy"
I0310 09:56:26.122634 1 server_others.go:176] "Using iptables Proxier"
I0310 09:56:26.122715 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0310 09:56:26.122724 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0310 09:56:26.122745 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0310 09:56:26.122788 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0310 09:56:26.123116 1 server.go:655] "Version info" version="v1.26.1"
I0310 09:56:26.123128 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0310 09:56:26.124200 1 config.go:317] "Starting service config controller"
I0310 09:56:26.124222 1 shared_informer.go:273] Waiting for caches to sync for service config
I0310 09:56:26.124242 1 config.go:226] "Starting endpoint slice config controller"
I0310 09:56:26.124245 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0310 09:56:26.124748 1 config.go:444] "Starting node config controller"
I0310 09:56:26.124754 1 shared_informer.go:273] Waiting for caches to sync for node config
I0310 09:56:26.225766 1 shared_informer.go:280] Caches are synced for node config
I0310 09:56:26.225782 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0310 09:56:26.225798 1 shared_informer.go:280] Caches are synced for service config
*
* ==> kube-scheduler [1c354b479fcd] <==
*
E0310 09:56:07.939444 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0310 09:56:07.939763 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0310 09:56:07.939789 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0310 09:56:07.939853 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0310 09:56:07.939869 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0310 09:56:07.940066 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0310 09:56:07.940469 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0310 09:56:07.940062 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0310 09:56:07.940561 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0310 09:56:07.940249 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0310 09:56:07.940572 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0310 09:56:07.940274 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0310 09:56:07.940596 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0310 09:56:07.940855 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0310 09:56:07.940886 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0310 09:56:07.940944 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0310 09:56:07.940959 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0310 09:56:07.941003 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0310 09:56:07.941058 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0310 09:56:07.941079 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0310 09:56:07.941092 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0310 09:56:07.941213 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0310 09:56:07.941243 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0310 09:56:07.941912 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0310 09:56:07.941950 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0310 09:56:07.941916 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0310 09:56:07.941973 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0310 09:56:07.942913 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0310 09:56:07.942946 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0310 09:56:08.784021 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0310 09:56:08.784065 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0310 09:56:08.813306 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0310 09:56:08.813371 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0310 09:56:08.889211 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0310 09:56:08.889287 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0310 09:56:09.062449 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0310 09:56:09.062485 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0310 09:56:09.187272 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0310 09:56:09.187298 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0310 09:56:09.212544 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0310 09:56:09.212583 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0310 09:56:09.232800 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0310 09:56:09.232843 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0310 09:56:09.278166 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0310 09:56:09.278195 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0310 09:56:09.284029 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0310 09:56:09.284058 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0310 09:56:09.299965 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User 
"system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0310 09:56:09.299996 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0310 09:56:09.340128 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0310 09:56:09.340164 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0310 09:56:09.340194 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0310 09:56:09.340208 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0310 09:56:09.370212 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0310 09:56:09.370242 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is 
forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0310 09:56:09.464606 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0310 09:56:09.464879 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0310 09:56:09.504927 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0310 09:56:09.504961 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope I0310 09:56:11.037980 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Fri 2023-03-10 09:55:35 UTC, end at Fri 2023-03-10 09:58:09 UTC. 
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.955304 2650 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.955335 2650 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.955583 2650 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.955619 2650 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.955628 2650 policy_none.go:49] "None policy: Start"
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.967738 2650 memory_manager.go:169] "Starting memorymanager" policy="None"
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.967776 2650 state_mem.go:35] "Initializing new in-memory state store"
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.967965 2650 state_mem.go:75] "Updated machine memory state"
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.977529 2650 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 10 09:56:11 minikube kubelet[2650]: I0310 09:56:11.977817 2650 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.003974 2650 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.052087 2650 topology_manager.go:210] "Topology Admit Handler"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.052557 2650 topology_manager.go:210] "Topology Admit Handler"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.052613 2650 topology_manager.go:210] "Topology Admit Handler"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.052647 2650 topology_manager.go:210] "Topology Admit Handler"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198466 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5175bba984ed52052d891b5a45b584b6-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"5175bba984ed52052d891b5a45b584b6\") " pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198550 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5175bba984ed52052d891b5a45b584b6-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"5175bba984ed52052d891b5a45b584b6\") " pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198587 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5239bb256c1be9f71fd10c884d9299b1-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"5239bb256c1be9f71fd10c884d9299b1\") " pod="kube-system/kube-apiserver-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198610 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5239bb256c1be9f71fd10c884d9299b1-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"5239bb256c1be9f71fd10c884d9299b1\") " pod="kube-system/kube-apiserver-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198636 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5175bba984ed52052d891b5a45b584b6-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"5175bba984ed52052d891b5a45b584b6\") " pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198662 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5175bba984ed52052d891b5a45b584b6-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"5175bba984ed52052d891b5a45b584b6\") " pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198685 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/197cd0de602d7cb722d0bd2daf878121-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"197cd0de602d7cb722d0bd2daf878121\") " pod="kube-system/kube-scheduler-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198709 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a121e106627e5c6efa9ba48006cc43bf-etcd-data\") pod \"etcd-minikube\" (UID: \"a121e106627e5c6efa9ba48006cc43bf\") " pod="kube-system/etcd-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.198741 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5239bb256c1be9f71fd10c884d9299b1-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"5239bb256c1be9f71fd10c884d9299b1\") " pod="kube-system/kube-apiserver-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.199360 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5239bb256c1be9f71fd10c884d9299b1-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"5239bb256c1be9f71fd10c884d9299b1\") " pod="kube-system/kube-apiserver-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.199398 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5175bba984ed52052d891b5a45b584b6-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"5175bba984ed52052d891b5a45b584b6\") " pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.199433 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a121e106627e5c6efa9ba48006cc43bf-etcd-certs\") pod \"etcd-minikube\" (UID: \"a121e106627e5c6efa9ba48006cc43bf\") " pod="kube-system/etcd-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.199457 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5175bba984ed52052d891b5a45b584b6-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"5175bba984ed52052d891b5a45b584b6\") " pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.199481 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5239bb256c1be9f71fd10c884d9299b1-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"5239bb256c1be9f71fd10c884d9299b1\") " pod="kube-system/kube-apiserver-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.199503 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5175bba984ed52052d891b5a45b584b6-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"5175bba984ed52052d891b5a45b584b6\") " pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.479107 2650 kubelet_node_status.go:108] "Node was previously registered" node="minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.479215 2650 kubelet_node_status.go:73] "Successfully registered node" node="minikube"
Mar 10 09:56:12 minikube kubelet[2650]: I0310 09:56:12.876715 2650 apiserver.go:52] "Watching apiserver"
Mar 10 09:56:13 minikube kubelet[2650]: I0310 09:56:13.096749 2650 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 10 09:56:13 minikube kubelet[2650]: I0310 09:56:13.104019 2650 reconciler.go:41] "Reconciler: start to sync state"
Mar 10 09:56:13 minikube kubelet[2650]: E0310 09:56:13.496055 2650 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Mar 10 09:56:13 minikube kubelet[2650]: E0310 09:56:13.688056 2650 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Mar 10 09:56:13 minikube kubelet[2650]: E0310 09:56:13.890755 2650 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Mar 10 09:56:14 minikube kubelet[2650]: I0310 09:56:14.489187 2650 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-minikube" podStartSLOduration=2.489127354 pod.CreationTimestamp="2023-03-10 09:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-10 09:56:14.088069127 +0000 UTC m=+2.308821597" watchObservedRunningTime="2023-03-10 09:56:14.489127354 +0000 UTC m=+2.709879824"
Mar 10 09:56:14 minikube kubelet[2650]: I0310 09:56:14.896582 2650 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-minikube" podStartSLOduration=2.896524112 pod.CreationTimestamp="2023-03-10 09:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-10 09:56:14.896490713 +0000 UTC m=+3.117243183" watchObservedRunningTime="2023-03-10 09:56:14.896524112 +0000 UTC m=+3.117276582"
Mar 10 09:56:15 minikube kubelet[2650]: I0310 09:56:15.288462 2650 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-minikube" podStartSLOduration=3.2883826369999998 pod.CreationTimestamp="2023-03-10 09:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-10 09:56:15.288295037 +0000 UTC m=+3.509047507" watchObservedRunningTime="2023-03-10 09:56:15.288382637 +0000 UTC m=+3.509135107"
Mar 10 09:56:23 minikube kubelet[2650]: I0310 09:56:23.303454 2650 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-minikube" podStartSLOduration=11.303386427 pod.CreationTimestamp="2023-03-10 09:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-10 09:56:15.693784216 +0000 UTC m=+3.914536786" watchObservedRunningTime="2023-03-10 09:56:23.303386427 +0000 UTC m=+11.524138997"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.196859 2650 topology_manager.go:210] "Topology Admit Handler"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.380916 2650 topology_manager.go:210] "Topology Admit Handler"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.387918 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/adb8cec6-07c1-40d6-94da-8effb4dc3412-kube-proxy\") pod \"kube-proxy-xqp65\" (UID: \"adb8cec6-07c1-40d6-94da-8effb4dc3412\") " pod="kube-system/kube-proxy-xqp65"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.387966 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adb8cec6-07c1-40d6-94da-8effb4dc3412-xtables-lock\") pod \"kube-proxy-xqp65\" (UID: \"adb8cec6-07c1-40d6-94da-8effb4dc3412\") " pod="kube-system/kube-proxy-xqp65"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.387983 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqpv\" (UniqueName: \"kubernetes.io/projected/fed22aa6-e038-45ee-bab6-2a45d73e1409-kube-api-access-qjqpv\") pod \"storage-provisioner\" (UID: \"fed22aa6-e038-45ee-bab6-2a45d73e1409\") " pod="kube-system/storage-provisioner"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.388002 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adb8cec6-07c1-40d6-94da-8effb4dc3412-lib-modules\") pod \"kube-proxy-xqp65\" (UID: \"adb8cec6-07c1-40d6-94da-8effb4dc3412\") " pod="kube-system/kube-proxy-xqp65"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.388029 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ggn\" (UniqueName: \"kubernetes.io/projected/adb8cec6-07c1-40d6-94da-8effb4dc3412-kube-api-access-68ggn\") pod \"kube-proxy-xqp65\" (UID: \"adb8cec6-07c1-40d6-94da-8effb4dc3412\") " pod="kube-system/kube-proxy-xqp65"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.388058 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fed22aa6-e038-45ee-bab6-2a45d73e1409-tmp\") pod \"storage-provisioner\" (UID: \"fed22aa6-e038-45ee-bab6-2a45d73e1409\") " pod="kube-system/storage-provisioner"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.831512 2650 topology_manager.go:210] "Topology Admit Handler"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.990541 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7a03750-97cd-44c8-b8eb-05839e9c10dc-config-volume\") pod \"coredns-787d4945fb-qbb42\" (UID: \"e7a03750-97cd-44c8-b8eb-05839e9c10dc\") " pod="kube-system/coredns-787d4945fb-qbb42"
Mar 10 09:56:24 minikube kubelet[2650]: I0310 09:56:24.990616 2650 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qmj\" (UniqueName: \"kubernetes.io/projected/e7a03750-97cd-44c8-b8eb-05839e9c10dc-kube-api-access-f2qmj\") pod \"coredns-787d4945fb-qbb42\" (UID: \"e7a03750-97cd-44c8-b8eb-05839e9c10dc\") " pod="kube-system/coredns-787d4945fb-qbb42"
Mar 10 09:56:26 minikube kubelet[2650]: I0310 09:56:26.322820 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="682ec82d2a791ba0732b2ffd024488b0090e91c7732323bdb083ca1e574dab1d"
Mar 10 09:56:27 minikube kubelet[2650]: I0310 09:56:27.294253 2650 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.2942076 pod.CreationTimestamp="2023-03-10 09:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-10 09:56:27.294080201 +0000 UTC m=+15.514832671" watchObservedRunningTime="2023-03-10 09:56:27.2942076 +0000 UTC m=+15.514960070"
Mar 10 09:56:28 minikube kubelet[2650]: I0310 09:56:28.099889 2650 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-qbb42" podStartSLOduration=4.099824315 pod.CreationTimestamp="2023-03-10 09:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-10 09:56:28.099643617 +0000 UTC m=+16.320396087" watchObservedRunningTime="2023-03-10 09:56:28.099824315 +0000 UTC m=+16.320576785"
Mar 10 09:56:28 minikube kubelet[2650]: I0310 09:56:28.100281 2650 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xqp65" podStartSLOduration=4.10024921 pod.CreationTimestamp="2023-03-10 09:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-10 09:56:27.703856134 +0000 UTC m=+15.924608704" watchObservedRunningTime="2023-03-10 09:56:28.10024921 +0000 UTC m=+16.321001780"
Mar 10 09:56:33 minikube kubelet[2650]: I0310 09:56:33.055111 2650 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 10 09:56:33 minikube kubelet[2650]: I0310 09:56:33.055951 2650 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 10 09:56:56 minikube kubelet[2650]: I0310 09:56:56.528956 2650 scope.go:115] "RemoveContainer" containerID="d880654844704ddc89ec0210b02574175b9fe814f74fb500f93fcaf76ef14a29"
*
* ==> storage-provisioner [d88065484470] <==
*
I0310 09:56:26.017098 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0310 09:56:56.018751 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner [f95b601b59ce] <==
*
I0310 09:56:56.759834 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0310 09:56:56.764685 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0310 09:56:56.764719 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0310 09:56:56.778257 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0310 09:56:56.778325 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8706bc4f-4b4b-4162-bfec-88c294c9f64b", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_234d468e-3e23-4ed2-80a6-0442aadedda1 became leader
I0310 09:56:56.778403 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_234d468e-3e23-4ed2-80a6-0442aadedda1!
I0310 09:56:56.879260 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_234d468e-3e23-4ed2-80a6-0442aadedda1!
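The only fatal error in this log is the first storage-provisioner container (d88065484470) dying with "dial tcp 10.96.0.1:443: i/o timeout" — 10.96.0.1 is the in-cluster `kubernetes` service VIP for the API server. The kubelet restarted the container ("RemoveContainer" at 09:56:56) and the replacement (f95b601b59ce) connected immediately, which is consistent with the provisioner's first attempt starting (09:56:26) before kube-proxy had finished programming the service rules (kube-proxy only reported running around 09:56:27). If the timeout recurs, a hedged set of checks (commands not taken from the log; the `k8s-app=kube-proxy` label is the standard kubeadm/minikube one):

```shell
# Is kube-proxy up? It owns the iptables/ipvs rules that make 10.96.0.1 reachable.
kubectl -n kube-system get pods -l k8s-app=kube-proxy
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50

# Does the apiserver VIP answer from inside the node? Any HTTP response
# (even 401/403) proves the service network is working.
minikube ssh -- curl -sk https://10.96.0.1:443/version
```

A persistent timeout here, with kube-proxy healthy, would instead point at firewall or CNI problems on the minikube node.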