*
* ==> Audit <==
* |---------|---------------------------------------|----------|------|---------|---------------------|---------------------|
| Command | Args                                  | Profile  | User | Version | Start Time          | End Time            |
|---------|---------------------------------------|----------|------|---------|---------------------|---------------------|
| start   |                                       | minikube | mnk  | v1.28.0 | 02 Jan 23 09:45 EST | 02 Jan 23 09:46 EST |
| ssh     |                                       | minikube | mnk  | v1.28.0 | 02 Jan 23 09:46 EST | 02 Jan 23 09:53 EST |
|---------|---------------------------------------|----------|------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/02 09:45:42
Running on machine: hello-world
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0102 09:45:42.410558 2747 out.go:296] Setting OutFile to fd 1 ...
I0102 09:45:42.410698 2747 out.go:348] isatty.IsTerminal(1) = true
I0102 09:45:42.410702 2747 out.go:309] Setting ErrFile to fd 2...
I0102 09:45:42.410707 2747 out.go:348] isatty.IsTerminal(2) = true
I0102 09:45:42.410793 2747 root.go:334] Updating PATH: /home/mnk/.minikube/bin
I0102 09:45:42.411692 2747 out.go:303] Setting JSON to false
I0102 09:45:42.412838 2747 start.go:116] hostinfo: {"hostname":"hello-world.info","uptime":81,"bootTime":1672670661,"procs":300,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"37","kernelVersion":"6.0.15-300.fc37.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"ea7c7ce6-1d9f-4df1-85de-1b6296785e47"}
I0102 09:45:42.412900 2747 start.go:126] virtualization: kvm guest
I0102 09:45:42.429176 2747 out.go:177] 😄 minikube v1.28.0 on Fedora 37 (kvm/amd64)
I0102 09:45:42.440814 2747 notify.go:220] Checking for updates...
I0102 09:45:42.441363 2747 driver.go:365] Setting default libvirt URI to qemu:///system
I0102 09:45:42.441404 2747 global.go:111] Querying for installed drivers using PATH=/home/mnk/.minikube/bin:/home/mnk/.local/bin:/home/mnk/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
I0102 09:45:43.510420 2747 docker.go:137] docker version: linux-20.10.21
I0102 09:45:43.510650 2747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0102 09:45:43.537738 2747 info.go:266] docker info: {ID:HF2H:4F5F:3AZQ:TCDL:UGCY:GCIL:V7CB:W7XM:TPYQ:NOPA:PSWT:GRK6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:42 SystemTime:2023-01-02 09:45:43.527564726 -0500 EST LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:6.0.15-300.fc37.x86_64 OperatingSystem:Fedora Linux 37 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:20973977600 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:hello-world.info Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:true Isolation: InitBinary:/usr/libexec/docker/docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default name=selinux name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0102 09:45:43.537801 2747 docker.go:254] overlay module found
I0102 09:45:43.537808 2747 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0102 09:45:43.537854 2747 global.go:119] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
I0102 09:45:43.548923 2747 global.go:119] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc: Version:}
W0102 09:45:43.555826 2747 podman.go:138] podman returned error: exit status 1
I0102 09:45:43.555850 2747 global.go:119] podman default: true priority: 7, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:"sudo -n -k podman version --format {{.Version}}" exit status 1: sudo: a password is required Reason: Fix:Add your user to the 'sudoers' file: 'mnk ALL=(ALL) NOPASSWD: /usr/bin/podman' , or run 'minikube config set rootless true' Doc:https://podman.io Version:}
I0102 09:45:43.556801 2747 global.go:119] qemu2 default: true priority: 3, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0102 09:45:43.556815 2747 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0102 09:45:43.556869 2747 global.go:119] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I0102 09:45:43.556913 2747 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I0102 09:45:43.556937 2747 driver.go:300] not recommending "ssh" due to default: false
I0102 09:45:43.556942 2747 driver.go:305] not recommending "qemu2" due to priority: 3
I0102 09:45:43.556954 2747 driver.go:295] not recommending "podman" due to health: "sudo -n -k podman version --format {{.Version}}" exit status 1: sudo: a password is required
I0102 09:45:43.556968 2747 driver.go:335] Picked: docker
I0102 09:45:43.556972 2747 driver.go:336] Alternatives: [ssh qemu2 (experimental)]
I0102 09:45:43.556979 2747 driver.go:337] Rejects: [kvm2 none podman virtualbox vmware]
I0102 09:45:43.561440 2747 out.go:177] ✨ Automatically selected the docker driver. Other choices: ssh, qemu2 (experimental)
I0102 09:45:43.565517 2747 start.go:282] selected driver: docker
I0102 09:45:43.565531 2747 start.go:808] validating driver "docker" against <nil>
I0102 09:45:43.565547 2747 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0102 09:45:43.565617 2747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0102 09:45:43.589545 2747 info.go:266] docker info: {ID:HF2H:4F5F:3AZQ:TCDL:UGCY:GCIL:V7CB:W7XM:TPYQ:NOPA:PSWT:GRK6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:42 SystemTime:2023-01-02 09:45:43.581297315 -0500 EST LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:6.0.15-300.fc37.x86_64 OperatingSystem:Fedora Linux 37 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:20973977600 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:hello-world.info Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:true Isolation: InitBinary:/usr/libexec/docker/docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default name=selinux name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0102 09:45:43.589625 2747 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I0102 09:45:43.590182 2747 start_flags.go:384] Using suggested 5000MB memory alloc based on sys=20002MB, container=20002MB
I0102 09:45:43.590300 2747 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
I0102 09:45:43.595111 2747 out.go:177] 📌 Using Docker driver with root privileges
I0102 09:45:43.599558 2747 cni.go:95] Creating CNI manager for ""
I0102 09:45:43.599567 2747 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0102 09:45:43.599574 2747 start_flags.go:317] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/mnk:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0102 09:45:43.614151 2747 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0102 09:45:43.618699 2747 cache.go:120] Beginning downloading kic base image for docker with docker
I0102 09:45:43.623542 2747 out.go:177] 🚜 Pulling base image ...
I0102 09:45:43.628376 2747 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0102 09:45:43.628467 2747 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I0102 09:45:43.628497 2747 preload.go:148] Found local preload: /home/mnk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0102 09:45:43.628508 2747 cache.go:57] Caching tarball of preloaded images
I0102 09:45:43.628879 2747 preload.go:174] Found /home/mnk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0102 09:45:43.628895 2747 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0102 09:45:43.637831 2747 profile.go:148] Saving config to /home/mnk/.minikube/profiles/minikube/config.json ...
I0102 09:45:43.637885 2747 lock.go:35] WriteFile acquiring /home/mnk/.minikube/profiles/minikube/config.json: {Name:mk356f9daa1cdba8005cb1c4d95d1c6a034396c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0102 09:45:43.661619 2747 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I0102 09:45:43.661651 2747 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I0102 09:45:43.661670 2747 cache.go:208] Successfully downloaded all kic artifacts
I0102 09:45:43.661718 2747 start.go:364] acquiring machines lock for minikube: {Name:mkd6d59ba44586d46ed01a95daa50bdb940f61d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0102 09:45:43.661806 2747 start.go:368] acquired machines lock for "minikube" in 76.316µs
I0102 09:45:43.662035 2747 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/mnk:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0102 09:45:43.662089 2747 start.go:125] createHost starting for "" (driver="docker")
I0102 09:45:43.673103 2747 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=5000MB) ...
I0102 09:45:43.673598 2747 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0102 09:45:43.673618 2747 client.go:168] LocalClient.Create starting
I0102 09:45:43.673918 2747 main.go:134] libmachine: Reading certificate data from /home/mnk/.minikube/certs/ca.pem
I0102 09:45:43.674142 2747 main.go:134] libmachine: Decoding PEM data...
I0102 09:45:43.674160 2747 main.go:134] libmachine: Parsing certificate...
I0102 09:45:43.674228 2747 main.go:134] libmachine: Reading certificate data from /home/mnk/.minikube/certs/cert.pem
I0102 09:45:43.674354 2747 main.go:134] libmachine: Decoding PEM data...
I0102 09:45:43.674363 2747 main.go:134] libmachine: Parsing certificate...
I0102 09:45:43.674707 2747 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0102 09:45:43.713910 2747 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0102 09:45:43.713958 2747 network_create.go:272] running [docker network inspect minikube] to gather additional debugging logs...
I0102 09:45:43.713968 2747 cli_runner.go:164] Run: docker network inspect minikube
W0102 09:45:43.740166 2747 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0102 09:45:43.740494 2747 network_create.go:275] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0102 09:45:43.740508 2747 network_create.go:277] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0102 09:45:43.740689 2747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0102 09:45:43.761745 2747 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b360e0] misses:0}
I0102 09:45:43.761777 2747 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0102 09:45:43.761790 2747 network_create.go:115] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0102 09:45:43.761839 2747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0102 09:45:43.927956 2747 network_create.go:99] docker network minikube 192.168.49.0/24 created
I0102 09:45:43.927976 2747 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0102 09:45:43.928132 2747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0102 09:45:43.950155 2747 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0102 09:45:43.971022 2747 oci.go:103] Successfully created a docker volume minikube
I0102 09:45:43.971077 2747 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I0102 09:45:45.056397 2747 cli_runner.go:217] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib: (1.085286947s)
I0102 09:45:45.056410 2747 oci.go:107] Successfully prepared a docker volume minikube
I0102 09:45:45.056423 2747 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0102 09:45:45.056438 2747 kic.go:179] Starting extracting preloaded images to volume ...
I0102 09:45:45.056612 2747 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/mnk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
W0102 09:45:45.746746 2747 cli_runner.go:211] docker run --rm --entrypoint /usr/bin/tar -v /home/mnk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 2
I0102 09:45:45.746791 2747 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /home/mnk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 2
stdout:

stderr:
tar (child): /preloaded.tar: Cannot open: Permission denied
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
W0102 09:45:45.746993 2747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0102 09:45:45.747070 2747 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0102 09:45:45.747100 2747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0102 09:45:45.773294 2747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=5000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I0102 09:45:46.344314 2747 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0102 09:45:46.369175 2747 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0102 09:45:46.389996 2747 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0102 09:45:46.455914 2747 oci.go:144] the created container "minikube" has a running status.
I0102 09:45:46.455937 2747 kic.go:210] Creating ssh key for kic: /home/mnk/.minikube/machines/minikube/id_rsa...
I0102 09:45:46.658908 2747 kic_runner.go:191] docker (temp): /home/mnk/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0102 09:45:46.762014 2747 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0102 09:45:46.788140 2747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0102 09:45:46.788150 2747 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0102 09:45:46.840021 2747 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0102 09:45:46.873268 2747 machine.go:88] provisioning docker machine ...
I0102 09:45:46.873310 2747 ubuntu.go:169] provisioning hostname "minikube"
I0102 09:45:46.873370 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:46.893660 2747 main.go:134] libmachine: Using SSH client type: native
I0102 09:45:46.893902 2747 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7ed4e0] 0x7f0660 [] 0s} 127.0.0.1 49157 }
I0102 09:45:46.893910 2747 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0102 09:45:47.045626 2747 main.go:134] libmachine: SSH cmd err, output: <nil>: minikube
I0102 09:45:47.045722 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:47.076824 2747 main.go:134] libmachine: Using SSH client type: native
I0102 09:45:47.076988 2747 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7ed4e0] 0x7f0660 [] 0s} 127.0.0.1 49157 }
I0102 09:45:47.077000 2747 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
  else
    echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
  fi
fi
I0102 09:45:47.186691 2747 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0102 09:45:47.186753 2747 ubuntu.go:175] set auth options {CertDir:/home/mnk/.minikube CaCertPath:/home/mnk/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mnk/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mnk/.minikube/machines/server.pem ServerKeyPath:/home/mnk/.minikube/machines/server-key.pem ClientKeyPath:/home/mnk/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mnk/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mnk/.minikube}
I0102 09:45:47.186771 2747 ubuntu.go:177] setting up certificates
I0102 09:45:47.186780 2747 provision.go:83] configureAuth start
I0102 09:45:47.186827 2747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0102 09:45:47.208102 2747 provision.go:138] copyHostCerts
I0102 09:45:47.208186 2747 exec_runner.go:144] found /home/mnk/.minikube/ca.pem, removing ...
I0102 09:45:47.208195 2747 exec_runner.go:207] rm: /home/mnk/.minikube/ca.pem
I0102 09:45:47.208242 2747 exec_runner.go:151] cp: /home/mnk/.minikube/certs/ca.pem --> /home/mnk/.minikube/ca.pem (1070 bytes)
I0102 09:45:47.208323 2747 exec_runner.go:144] found /home/mnk/.minikube/cert.pem, removing ...
I0102 09:45:47.208326 2747 exec_runner.go:207] rm: /home/mnk/.minikube/cert.pem
I0102 09:45:47.208350 2747 exec_runner.go:151] cp: /home/mnk/.minikube/certs/cert.pem --> /home/mnk/.minikube/cert.pem (1111 bytes)
I0102 09:45:47.208413 2747 exec_runner.go:144] found /home/mnk/.minikube/key.pem, removing ...
I0102 09:45:47.208416 2747 exec_runner.go:207] rm: /home/mnk/.minikube/key.pem
I0102 09:45:47.208434 2747 exec_runner.go:151] cp: /home/mnk/.minikube/certs/key.pem --> /home/mnk/.minikube/key.pem (1675 bytes)
I0102 09:45:47.208600 2747 provision.go:112] generating server cert: /home/mnk/.minikube/machines/server.pem ca-key=/home/mnk/.minikube/certs/ca.pem private-key=/home/mnk/.minikube/certs/ca-key.pem org=mnk.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0102 09:45:47.470950 2747 provision.go:172] copyRemoteCerts
I0102 09:45:47.471070 2747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0102 09:45:47.471117 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:47.489631 2747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/mnk/.minikube/machines/minikube/id_rsa Username:docker}
I0102 09:45:47.571437 2747 ssh_runner.go:362] scp /home/mnk/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0102 09:45:47.590758 2747 ssh_runner.go:362] scp /home/mnk/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes)
I0102 09:45:47.608679 2747 ssh_runner.go:362] scp /home/mnk/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0102 09:45:47.629241 2747 provision.go:86] duration metric: configureAuth took 442.442736ms
I0102 09:45:47.629266 2747 ubuntu.go:193] setting minikube options for container-runtime
I0102 09:45:47.629420 2747 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0102 09:45:47.629463 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:47.651564 2747 main.go:134] libmachine: Using SSH client type: native
I0102 09:45:47.651675 2747 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7ed4e0] 0x7f0660 [] 0s} 127.0.0.1 49157 }
I0102 09:45:47.651682 2747 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0102 09:45:47.762010 2747 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0102 09:45:47.762019 2747 ubuntu.go:71] root file system type: overlay
I0102 09:45:47.762150 2747 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0102 09:45:47.762187 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:47.792062 2747 main.go:134] libmachine: Using SSH client type: native
I0102 09:45:47.792175 2747 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7ed4e0] 0x7f0660 [] 0s} 127.0.0.1 49157 }
I0102 09:45:47.792225 2747 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0102 09:45:47.924000 2747 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0102 09:45:47.924050 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:47.959790 2747 main.go:134] libmachine: Using SSH client type: native
I0102 09:45:47.959915 2747 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7ed4e0] 0x7f0660 [] 0s} 127.0.0.1 49157 }
I0102 09:45:47.959927 2747 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0102 09:45:48.893253 2747 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-02 14:45:47.921235932 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0102 09:45:48.893265 2747 machine.go:91] provisioned docker machine in 2.019983352s
I0102 09:45:48.893271 2747 client.go:171] LocalClient.Create took 5.219649728s
I0102 09:45:48.893280 2747 start.go:167] duration metric: libmachine.API.Create for "minikube" took 5.219684717s
I0102 09:45:48.893288 2747 start.go:300] post-start starting for "minikube" (driver="docker")
I0102 09:45:48.893292 2747 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0102 09:45:48.893390 2747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0102 09:45:48.893421 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:48.916568 2747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/mnk/.minikube/machines/minikube/id_rsa Username:docker}
I0102 09:45:48.999394 2747 ssh_runner.go:195] Run: cat /etc/os-release
I0102 09:45:49.001958 2747 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0102 09:45:49.001969 2747 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0102 09:45:49.001977 2747 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0102 09:45:49.001986 2747 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0102 09:45:49.002004 2747 filesync.go:126] Scanning /home/mnk/.minikube/addons for local assets ...
I0102 09:45:49.002188 2747 filesync.go:126] Scanning /home/mnk/.minikube/files for local assets ...
I0102 09:45:49.002328 2747 start.go:303] post-start completed in 109.032663ms
I0102 09:45:49.002574 2747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0102 09:45:49.022441 2747 profile.go:148] Saving config to /home/mnk/.minikube/profiles/minikube/config.json ...
I0102 09:45:49.022800 2747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0102 09:45:49.022828 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:49.056140 2747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/mnk/.minikube/machines/minikube/id_rsa Username:docker}
I0102 09:45:49.145180 2747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0102 09:45:49.153035 2747 start.go:128] duration metric: createHost completed in 5.490933588s
I0102 09:45:49.153108 2747 start.go:83] releasing machines lock for "minikube", held for 5.491244034s
I0102 09:45:49.153197 2747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0102 09:45:49.179942 2747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0102 09:45:49.180006 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:49.179943 2747 ssh_runner.go:195] Run: systemctl --version
I0102 09:45:49.180037 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0102 09:45:49.199116 2747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/mnk/.minikube/machines/minikube/id_rsa Username:docker}
I0102 09:45:49.201439 2747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/mnk/.minikube/machines/minikube/id_rsa Username:docker}
I0102 09:45:49.293577 2747 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0102 09:45:49.436028 2747 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0102 09:45:49.436120 2747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0102 09:45:49.456090 2747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0102 09:45:49.471553 2747 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0102 09:45:49.545973 2747 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0102 09:45:49.618106 2747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0102 09:45:49.680622 2747 ssh_runner.go:195] Run: sudo systemctl restart docker
I0102 09:45:50.176978 2747 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0102 09:45:50.252767 2747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0102 09:45:50.323793 2747 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0102 09:45:50.361087 2747 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0102 09:45:50.361161 2747 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0102 09:45:50.364667 2747 start.go:472] Will wait 60s for crictl version
I0102 09:45:50.364689 2747 ssh_runner.go:195] Run: sudo crictl version
I0102 09:45:50.520054 2747 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I0102 09:45:50.520092 2747 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0102 09:45:50.580096 2747 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0102 09:45:50.607938 2747 out.go:204] 🐳 Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I0102 09:45:50.608068 2747 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0102 09:45:50.625012 2747 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0102 09:45:50.628161 2747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0102 09:45:50.637529 2747 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0102 09:45:50.637574 2747 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0102 09:45:50.658253 2747 docker.go:613] Got preloaded images:
I0102 09:45:50.658262 2747 docker.go:619] registry.k8s.io/kube-apiserver:v1.25.3 wasn't preloaded
I0102 09:45:50.658290 2747 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0102 09:45:50.664827 2747 ssh_runner.go:195] Run: which lz4
I0102 09:45:50.667410 2747 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0102 09:45:50.670453 2747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0102 09:45:50.670471 2747 ssh_runner.go:362] scp /home/mnk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404166592 bytes)
I0102 09:45:51.413672 2747 docker.go:577] Took 0.746316 seconds to copy over tarball
I0102 09:45:51.413709 2747 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0102 09:45:53.472476 2747 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.058750265s)
I0102 09:45:53.472488 2747 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0102 09:45:53.590895 2747 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0102 09:45:53.597272 2747 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0102 09:45:53.609481 2747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0102 09:45:53.677570 2747 ssh_runner.go:195] Run: sudo systemctl restart docker
I0102 09:45:54.625454 2747 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0102 09:45:54.644528 2747 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0102 09:45:54.644542 2747 cache_images.go:84] Images are preloaded, skipping loading
I0102 09:45:54.644578 2747 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0102 09:45:54.780281 2747 cni.go:95] Creating CNI manager for ""
I0102 09:45:54.780296 2747 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0102 09:45:54.780310 2747 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0102 09:45:54.780332 2747 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0102 09:45:54.780430 2747 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0102 09:45:54.780500 2747 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config: {KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0102 09:45:54.780534 2747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0102 09:45:54.787940 2747 binaries.go:44] Found k8s binaries, skipping transfer
I0102 09:45:54.787973 2747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0102 09:45:54.794696 2747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0102 09:45:54.807010 2747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0102 09:45:54.819345 2747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0102 09:45:54.831246 2747 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0102 09:45:54.833682 2747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0102 09:45:54.842229 2747 certs.go:54] Setting up /home/mnk/.minikube/profiles/minikube for IP: 192.168.49.2
I0102 09:45:54.842461 2747 certs.go:182] skipping minikubeCA CA generation: /home/mnk/.minikube/ca.key
I0102 09:45:54.842597 2747 certs.go:182] skipping proxyClientCA CA generation: /home/mnk/.minikube/proxy-client-ca.key
I0102 09:45:54.842660 2747 certs.go:302] generating minikube-user signed cert: /home/mnk/.minikube/profiles/minikube/client.key
I0102 09:45:54.842671 2747 crypto.go:68] Generating cert /home/mnk/.minikube/profiles/minikube/client.crt with IP's: []
I0102 09:45:54.929048 2747 crypto.go:156] Writing cert to /home/mnk/.minikube/profiles/minikube/client.crt ...
I0102 09:45:54.929065 2747 lock.go:35] WriteFile acquiring /home/mnk/.minikube/profiles/minikube/client.crt: {Name:mke69e10567c765ac4ce5ea2e3a923b6dbc8a1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0102 09:45:54.929252 2747 crypto.go:164] Writing key to /home/mnk/.minikube/profiles/minikube/client.key ...
I0102 09:45:54.929257 2747 lock.go:35] WriteFile acquiring /home/mnk/.minikube/profiles/minikube/client.key: {Name:mke7d84c331be8611670276eb3536c5514c5aae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0102 09:45:54.929342 2747 certs.go:302] generating minikube signed cert: /home/mnk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0102 09:45:54.929350 2747 crypto.go:68] Generating cert /home/mnk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0102 09:45:55.001671 2747 crypto.go:156] Writing cert to /home/mnk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0102 09:45:55.001682 2747 lock.go:35] WriteFile acquiring /home/mnk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk238e041c2b854c9a69cbf48c86dbad1b76e406 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0102 09:45:55.001842 2747 crypto.go:164] Writing key to /home/mnk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0102 09:45:55.001847 2747 lock.go:35] WriteFile acquiring /home/mnk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk66aa8d9441b7b66003fcd0405d642b7c82eda3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0102 09:45:55.001966 2747 certs.go:320] copying /home/mnk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/mnk/.minikube/profiles/minikube/apiserver.crt I0102 09:45:55.002019 2747 certs.go:324] copying /home/mnk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/mnk/.minikube/profiles/minikube/apiserver.key I0102 09:45:55.002062 2747 certs.go:302] generating aggregator signed cert: /home/mnk/.minikube/profiles/minikube/proxy-client.key I0102 09:45:55.002070 2747 crypto.go:68] Generating cert /home/mnk/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0102 09:45:55.103674 2747 crypto.go:156] Writing cert to /home/mnk/.minikube/profiles/minikube/proxy-client.crt ... I0102 09:45:55.103684 2747 lock.go:35] WriteFile acquiring /home/mnk/.minikube/profiles/minikube/proxy-client.crt: {Name:mk6bfe247619dca3cc6dad89f27b95e666d274f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0102 09:45:55.103854 2747 crypto.go:164] Writing key to /home/mnk/.minikube/profiles/minikube/proxy-client.key ... I0102 09:45:55.103875 2747 lock.go:35] WriteFile acquiring /home/mnk/.minikube/profiles/minikube/proxy-client.key: {Name:mk4d7c799ccdcc0daabbfe90cca9ff7e441b7a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0102 09:45:55.104032 2747 certs.go:388] found cert: /home/mnk/.minikube/certs/home/mnk/.minikube/certs/ca-key.pem (1675 bytes) I0102 09:45:55.104060 2747 certs.go:388] found cert: /home/mnk/.minikube/certs/home/mnk/.minikube/certs/ca.pem (1070 bytes) I0102 09:45:55.104076 2747 certs.go:388] found cert: /home/mnk/.minikube/certs/home/mnk/.minikube/certs/cert.pem (1111 bytes) I0102 09:45:55.104092 2747 certs.go:388] found cert: /home/mnk/.minikube/certs/home/mnk/.minikube/certs/key.pem (1675 bytes) I0102 09:45:55.104514 2747 ssh_runner.go:362] scp /home/mnk/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0102 09:45:55.122009 2747 ssh_runner.go:362] scp /home/mnk/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0102 09:45:55.138154 2747 ssh_runner.go:362] scp /home/mnk/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0102 09:45:55.154421 2747 ssh_runner.go:362] scp /home/mnk/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0102 09:45:55.175202 2747 ssh_runner.go:362] scp /home/mnk/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0102 09:45:55.195418 2747 ssh_runner.go:362] scp /home/mnk/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0102 09:45:55.211955 2747 ssh_runner.go:362] scp /home/mnk/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0102 09:45:55.228542 2747 ssh_runner.go:362] scp /home/mnk/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0102 09:45:55.244148 2747 ssh_runner.go:362] scp /home/mnk/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0102 09:45:55.260807 2747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0102 09:45:55.277122 2747 ssh_runner.go:195] Run: openssl version I0102 09:45:55.282574 2747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s 
/usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0102 09:45:55.291234 2747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0102 09:45:55.294200 2747 certs.go:431] hashing: -rw-r--r--. 1 root root 1111 Dec 9 21:20 /usr/share/ca-certificates/minikubeCA.pem I0102 09:45:55.294226 2747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0102 09:45:55.298430 2747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0102 09:45:55.305504 2747 kubeadm.go:396] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/mnk:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} I0102 09:45:55.305594 2747 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0102 09:45:55.324931 2747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0102 09:45:55.332226 2747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0102 09:45:55.340101 2747 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0102 09:45:55.340149 2747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0102 09:45:55.346903 2747 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf 
/etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0102 09:45:55.346925 2747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0102 09:45:55.393225 2747 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3 I0102 09:45:55.393261 2747 kubeadm.go:317] [preflight] Running pre-flight checks I0102 09:45:55.492791 2747 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster I0102 09:45:55.492896 2747 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection I0102 09:45:55.492983 2747 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0102 09:45:55.632120 2747 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0102 09:45:55.637003 2747 out.go:204] ▪ Generating certificates and keys ...
I0102 09:45:55.637127 2747 kubeadm.go:317] [certs] Using existing ca certificate authority I0102 09:45:55.637210 2747 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk I0102 09:45:55.859391 2747 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key I0102 09:45:56.046186 2747 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key I0102 09:45:56.093654 2747 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key I0102 09:45:56.293101 2747 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key I0102 09:45:56.355766 2747 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key I0102 09:45:56.355972 2747 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] I0102 09:45:56.647453 2747 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key I0102 09:45:56.647557 2747 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] I0102 09:45:56.780141 2747 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key I0102 09:45:56.905564 2747 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key I0102 09:45:57.112872 2747 kubeadm.go:317] [certs] Generating "sa" key and public key I0102 09:45:57.113047 2747 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0102 09:45:57.214589 2747 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file I0102 09:45:57.463339 2747 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file I0102 09:45:57.597716 2747 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0102 09:45:57.773825 2747 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file I0102 09:45:57.784380 2747 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0102 09:45:57.785061 2747 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0102 09:45:57.785185 2747 kubeadm.go:317] [kubelet-start] Starting the kubelet I0102 09:45:57.868116 2747 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0102 09:45:57.881450 2747 out.go:204] ▪ Booting up control plane ... I0102 09:45:57.881636 2747 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver" I0102 09:45:57.881723 2747 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0102 09:45:57.881851 2747 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler" I0102 09:45:57.881955 2747 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0102 09:45:57.882075 2747 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests".
This can take up to 4m0s I0102 09:46:06.876016 2747 kubeadm.go:317] [apiclient] All control plane components are healthy after 9.001761 seconds I0102 09:46:06.876121 2747 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0102 09:46:06.897070 2747 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster I0102 09:46:07.409811 2747 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs I0102 09:46:07.410030 2747 kubeadm.go:317] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] I0102 09:46:07.926105 2747 kubeadm.go:317] [bootstrap-token] Using token: 61k32i.qbylr9w1zvojfp64 I0102 09:46:07.943397 2747 out.go:204] ▪ Configuring RBAC rules ... I0102 09:46:07.944107 2747 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0102 09:46:07.968430 2747 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes I0102 09:46:07.981703 2747 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0102 09:46:07.985329 2747 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0102 09:46:07.988001 2747 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0102 09:46:07.989655 2747 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0102 09:46:08.003084 2747 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0102 09:46:08.216934 2747 kubeadm.go:317] [addons] Applied essential addon: CoreDNS I0102 09:46:08.370844 2747 kubeadm.go:317] [addons] Applied essential addon: kube-proxy I0102 09:46:08.372783 2747 kubeadm.go:317] I0102 09:46:08.372834 2747 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully! I0102 09:46:08.372841 2747 kubeadm.go:317] I0102 09:46:08.372934 2747 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user: I0102 09:46:08.372938 2747 kubeadm.go:317] I0102 09:46:08.372957 2747 kubeadm.go:317] mkdir -p $HOME/.kube I0102 09:46:08.373041 2747 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0102 09:46:08.373099 2747 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config I0102 09:46:08.373102 2747 kubeadm.go:317] I0102 09:46:08.373142 2747 kubeadm.go:317] Alternatively, if you are the root user, you can run: I0102 09:46:08.373145 2747 kubeadm.go:317] I0102 09:46:08.373180 2747 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf I0102 09:46:08.373182 2747 kubeadm.go:317] I0102 09:46:08.373254 2747 kubeadm.go:317] You should now deploy a pod network to the cluster.
I0102 09:46:08.373334 2747 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0102 09:46:08.373423 2747 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0102 09:46:08.373432 2747 kubeadm.go:317] I0102 09:46:08.373514 2747 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities I0102 09:46:08.373619 2747 kubeadm.go:317] and service account keys on each node and then running the following as root: I0102 09:46:08.373624 2747 kubeadm.go:317] I0102 09:46:08.373714 2747 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 61k32i.qbylr9w1zvojfp64 \ I0102 09:46:08.373822 2747 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:e046879ccda032f583af12a9a7d7aea6e36cc88af46dda1ae8e783254932f3f0 \ I0102 09:46:08.373844 2747 kubeadm.go:317] --control-plane I0102 09:46:08.373868 2747 kubeadm.go:317] I0102 09:46:08.373958 2747 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root: I0102 09:46:08.373962 2747 kubeadm.go:317] I0102 09:46:08.374047 2747 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 61k32i.qbylr9w1zvojfp64 \ I0102 09:46:08.374155 2747 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:e046879ccda032f583af12a9a7d7aea6e36cc88af46dda1ae8e783254932f3f0 I0102 09:46:08.376409 2747 kubeadm.go:317] W0102 14:45:55.385587 1178 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! I0102 09:46:08.376532 2747 kubeadm.go:317] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet I0102 09:46:08.376581 2747 kubeadm.go:317] [WARNING SystemVerification]: missing optional cgroups: blkio I0102 09:46:08.376663 2747 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0102 09:46:08.376687 2747 cni.go:95] Creating CNI manager for "" I0102 09:46:08.376711 2747 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0102 09:46:08.376733 2747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0102 09:46:08.376780 2747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0102 09:46:08.376786 2747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=986b1ebd987211ed16f8cc10aed7d2c42fc8392f minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_01_02T09_46_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0102 09:46:08.384232 2747 ops.go:34] apiserver oom_adj: -16 I0102 09:46:08.473596 2747 kubeadm.go:1067] duration metric: took 96.847668ms to wait for elevateKubeSystemPrivileges. 
I0102 09:46:08.473632 2747 kubeadm.go:398] StartCluster complete in 13.168135714s I0102 09:46:08.473648 2747 settings.go:142] acquiring lock: {Name:mk237eca7f3917811ab71b419edce7f9f2435b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0102 09:46:08.473730 2747 settings.go:150] Updating kubeconfig: /home/mnk/.kube/config I0102 09:46:08.474264 2747 lock.go:35] WriteFile acquiring /home/mnk/.kube/config: {Name:mkd40a1ae7cbef037f6031f1518f5a870918d355 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0102 09:46:08.990822 2747 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1 I0102 09:46:08.990898 2747 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0102 09:46:08.995773 2747 out.go:177] 🔎 Verifying Kubernetes components... I0102 09:46:08.991004 2747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0102 09:46:08.991237 2747 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3 I0102 09:46:08.991319 2747 addons.go:486] enableAddons start: toEnable=map[], additional=[] I0102 09:46:09.000771 2747 addons.go:65] Setting storage-provisioner=true in profile "minikube" I0102 09:46:09.000797 2747 addons.go:227] Setting addon storage-provisioner=true in "minikube" W0102 09:46:09.000802 2747 addons.go:236] addon storage-provisioner should already be in state true I0102 09:46:09.000847 2747 host.go:66] Checking if "minikube" exists ... I0102 09:46:09.000897 2747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0102 09:46:09.000957 2747 addons.go:65] Setting default-storageclass=true in profile "minikube" I0102 09:46:09.000971 2747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0102 09:46:09.001361 2747 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0102 09:46:09.001485 2747 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0102 09:46:09.033419 2747 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0102 09:46:09.034620 2747 addons.go:227] Setting addon default-storageclass=true in "minikube" W0102 09:46:09.039698 2747 addons.go:236] addon default-storageclass should already be in state true I0102 09:46:09.039710 2747 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml I0102 09:46:09.039717 2747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0102 09:46:09.039724 2747 host.go:66] Checking if "minikube" exists ... I0102 09:46:09.039758 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0102 09:46:09.040894 2747 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0102 09:46:09.070821 2747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/mnk/.minikube/machines/minikube/id_rsa Username:docker} I0102 09:46:09.072178 2747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0102 09:46:09.072507 2747 api_server.go:51] waiting for apiserver process to appear ... I0102 09:46:09.072529 2747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0102 09:46:09.073565 2747 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml I0102 09:46:09.073573 2747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0102 09:46:09.073611 2747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0102 09:46:09.096421 2747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/mnk/.minikube/machines/minikube/id_rsa Username:docker} I0102 09:46:09.163974 2747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0102 09:46:09.184969 2747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0102 09:46:09.624739 2747 api_server.go:71] duration metric: took 633.819044ms to wait for apiserver process to appear ... I0102 09:46:09.624754 2747 api_server.go:87] waiting for apiserver healthz status ... I0102 09:46:09.624765 2747 api_server.go:252] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0102 09:46:09.624756 2747 start.go:826] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS I0102 09:46:09.635711 2747 api_server.go:278] https://192.168.49.2:8443/healthz returned 200: ok I0102 09:46:09.636401 2747 api_server.go:140] control plane version: v1.25.3 I0102 09:46:09.636408 2747 api_server.go:130] duration metric: took 11.650436ms to wait for apiserver health ... I0102 09:46:09.636416 2747 system_pods.go:43] waiting for kube-system pods to appear ... I0102 09:46:09.640157 2747 system_pods.go:59] 4 kube-system pods found I0102 09:46:09.640169 2747 system_pods.go:61] "etcd-minikube" [9c828d20-539c-4193-b2a7-eedf7afcca49] Pending I0102 09:46:09.640172 2747 system_pods.go:61] "kube-apiserver-minikube" [bbb90304-39a8-4201-ad72-3fa36bb4bfdd] Pending I0102 09:46:09.640175 2747 system_pods.go:61] "kube-controller-manager-minikube" [db0e1982-dfdb-43c8-96b7-b5253bf24abb] Pending I0102 09:46:09.640178 2747 system_pods.go:61] "kube-scheduler-minikube" [78d3529c-c4c3-4cb6-b18e-d52d8ac071f8] Pending I0102 09:46:09.640181 2747 system_pods.go:74] duration metric: took 3.762164ms to wait for pod list to return data ... I0102 09:46:09.640188 2747 kubeadm.go:573] duration metric: took 649.270591ms to wait for : map[apiserver:true system_pods:true] ... I0102 09:46:09.640198 2747 node_conditions.go:102] verifying NodePressure condition ... I0102 09:46:09.642029 2747 node_conditions.go:122] node storage ephemeral capacity is 39937312Ki I0102 09:46:09.642043 2747 node_conditions.go:123] node cpu capacity is 8 I0102 09:46:09.642055 2747 node_conditions.go:105] duration metric: took 1.853108ms to run NodePressure ... I0102 09:46:09.642064 2747 start.go:217] waiting for startup goroutines ... 
I0102 09:46:09.681208 2747 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass I0102 09:46:09.685449 2747 addons.go:488] enableAddons completed in 694.153622ms I0102 09:46:09.685653 2747 ssh_runner.go:195] Run: rm -f paused I0102 09:46:09.770874 2747 start.go:506] kubectl: 1.25.4, cluster: 1.25.3 (minor skew: 0) I0102 09:46:09.775283 2747 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Mon 2023-01-02 14:45:46 UTC, end at Mon 2023-01-02 15:03:09 UTC. --
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.507387405Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.516352690Z" level=info msg="Loading containers: start."
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.794287462Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.848625222Z" level=info msg="Loading containers: done."
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.874832868Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.874903489Z" level=info msg="Daemon has completed initialization"
Jan 02 14:45:48 minikube systemd[1]: Started Docker Application Container Engine.
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.897847238Z" level=info msg="API listen on [::]:2376"
Jan 02 14:45:48 minikube dockerd[384]: time="2023-01-02T14:45:48.903811496Z" level=info msg="API listen on /var/run/docker.sock"
Jan 02 14:45:49 minikube systemd[1]: Stopping Docker Application Container Engine...
Jan 02 14:45:49 minikube dockerd[384]: time="2023-01-02T14:45:49.688447305Z" level=info msg="Processing signal 'terminated'"
Jan 02 14:45:49 minikube dockerd[384]: time="2023-01-02T14:45:49.689346500Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jan 02 14:45:49 minikube dockerd[384]: time="2023-01-02T14:45:49.690047909Z" level=info msg="Daemon shutdown complete"
Jan 02 14:45:49 minikube systemd[1]: docker.service: Succeeded.
Jan 02 14:45:49 minikube systemd[1]: Stopped Docker Application Container Engine.
Jan 02 14:45:49 minikube systemd[1]: Starting Docker Application Container Engine...
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.762891092Z" level=info msg="Starting up"
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.764370936Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.764390021Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.764405733Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil>}" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.764415989Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.765326829Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.765349575Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.765365148Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil>}" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.765376143Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.782138755Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jan 02 14:45:49 minikube dockerd[590]: time="2023-01-02T14:45:49.791601914Z" level=info msg="Loading containers: start."
Jan 02 14:45:50 minikube dockerd[590]: time="2023-01-02T14:45:50.093087723Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 02 14:45:50 minikube dockerd[590]: time="2023-01-02T14:45:50.140462018Z" level=info msg="Loading containers: done."
Jan 02 14:45:50 minikube dockerd[590]: time="2023-01-02T14:45:50.160372916Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
Jan 02 14:45:50 minikube dockerd[590]: time="2023-01-02T14:45:50.160491554Z" level=info msg="Daemon has completed initialization"
Jan 02 14:45:50 minikube systemd[1]: Started Docker Application Container Engine.
Jan 02 14:45:50 minikube dockerd[590]: time="2023-01-02T14:45:50.179148579Z" level=info msg="API listen on [::]:2376"
Jan 02 14:45:50 minikube dockerd[590]: time="2023-01-02T14:45:50.184621947Z" level=info msg="API listen on /var/run/docker.sock"
Jan 02 14:45:53 minikube systemd[1]: Stopping Docker Application Container Engine...
Jan 02 14:45:53 minikube dockerd[590]: time="2023-01-02T14:45:53.685687202Z" level=info msg="Processing signal 'terminated'"
Jan 02 14:45:53 minikube dockerd[590]: time="2023-01-02T14:45:53.686577602Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jan 02 14:45:53 minikube dockerd[590]: time="2023-01-02T14:45:53.687091824Z" level=info msg="Daemon shutdown complete"
Jan 02 14:45:53 minikube systemd[1]: docker.service: Succeeded.
Jan 02 14:45:53 minikube systemd[1]: Stopped Docker Application Container Engine.
Jan 02 14:45:53 minikube systemd[1]: Starting Docker Application Container Engine...
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.739316655Z" level=info msg="Starting up"
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.740802608Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.740826895Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.740851434Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil>}" module=grpc
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.740880431Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.741960391Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.742031906Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.742086664Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil>}" module=grpc
Jan 02 14:45:53 minikube dockerd[900]: time="2023-01-02T14:45:53.742133292Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.302652005Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.315606596Z" level=info msg="Loading containers: start."
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.544040538Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.591908491Z" level=info msg="Loading containers: done."
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.611539995Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.611665246Z" level=info msg="Daemon has completed initialization"
Jan 02 14:45:54 minikube systemd[1]: Started Docker Application Container Engine.
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.631326408Z" level=info msg="API listen on [::]:2376"
Jan 02 14:45:54 minikube dockerd[900]: time="2023-01-02T14:45:54.634296988Z" level=info msg="API listen on /var/run/docker.sock"
Jan 02 14:46:53 minikube dockerd[900]: time="2023-01-02T14:46:53.397513632Z" level=info msg="ignoring event" container=dcd6c0b6867c9dc5a99519a91c7a41151b2bc38605b332d84c3a94924af019ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
641d667daf08b   6e38f40d628db   16 minutes ago   Running   storage-provisioner       1         5329716fbf851
dcd6c0b6867c9   6e38f40d628db   16 minutes ago   Exited    storage-provisioner       0         5329716fbf851
4d555d57e070e   beaaf00edd38a   16 minutes ago   Running   kube-proxy                0         8dd328a9188eb
1f063d52942b2   5185b96f0becf   16 minutes ago   Running   coredns                   0         2fb5ce51b27b4
6a53cb48b762e   0346dbd74bcb9   17 minutes ago   Running   kube-apiserver            0         bb8b38e8bf731
823adf2a91489   a8a176a5d5d69   17 minutes ago   Running   etcd                      0         818c02eaa256b
823b5e716a97c   6039992312758   17 minutes ago   Running   kube-controller-manager   0         ee0e1a12d5933
84e9e2f6a9cbf   6d23ec0e8b87e   17 minutes ago   Running   kube-scheduler            0         80a7736af62a9
*
* ==> coredns [1f063d52942b] <==
*
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = eff20e86b4fd2b9878e9c34205d7ba141ff41613cbdadb71e63d4a8be6caff7d1fbccef3edfe618baf8958049a58d98ae28ea781e3e7cdf1cc90820da8e01a6d
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=986b1ebd987211ed16f8cc10aed7d2c42fc8392f
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2023_01_02T09_46_08_0700
                    minikube.k8s.io/version=v1.28.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 02 Jan 2023 14:46:05 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Mon, 02 Jan 2023 15:03:00 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 02 Jan 2023 15:01:26 +0000   Mon, 02 Jan 2023 14:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 02 Jan 2023 15:01:26 +0000   Mon, 02 Jan 2023 14:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 02 Jan 2023 15:01:26 +0000   Mon, 02 Jan 2023 14:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 02 Jan 2023 15:01:26 +0000   Mon, 02 Jan 2023 14:46:08 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  39937312Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             20482400Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  39937312Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             20482400Ki
  pods:               110
System Info:
  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
  System UUID:                eb835f37-7b7d-4799-9ff5-9f153697f354
  Boot ID:                    108e3e59-56bc-487b-b496-5a1431aebc3a
  Kernel Version:             6.0.15-300.fc37.x86_64
  OS Image:                   Ubuntu 20.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.20
  Kubelet Version:            v1.25.3
  Kube-Proxy Version:         v1.25.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                               ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-565d847f94-j64r9           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
  kube-system  etcd-minikube                      100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
  kube-system  kube-apiserver-minikube            250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
  kube-system  kube-controller-manager-minikube   200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
  kube-system  kube-proxy-df49z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  kube-system  kube-scheduler-minikube            100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
  kube-system  storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (9%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From             Message
  ----    ------                   ----  ----             -------
  Normal  Starting                 16m   kube-proxy
  Normal  Starting                 17m   kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  17m   kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    17m   kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     17m   kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeReady                17m   kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           16m   node-controller  Node minikube event: Registered Node minikube in Controller
*
* ==> dmesg <==
*
[Jan 2 14:44] #2
[ +0.001005] #3
[ +0.000995] #4
[ +0.000991] #5
[ +0.001028] #6
[ +0.000979] #7
[ +0.305427] sgx: There are zero EPC sections.
[ +0.390702] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +1.269453] virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
[ +1.680845] systemd-gpt-auto-generator[670]: Failed to dissect: Permission denied
[ +0.001613] systemd-sysv-generator[676]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.000051] systemd-sysv-generator[676]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.010019] systemd[654]: /usr/lib/systemd/system-generators/systemd-gpt-auto-generator failed with exit status 1.
*
* ==> etcd [823adf2a9148] <==
*
{"level":"info","ts":"2023-01-02T14:46:03.115Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2023-01-02T14:46:03.116Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2023-01-02T14:46:03.116Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-02T14:46:03.117Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2023-01-02T14:46:03.117Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-01-02T14:46:03.132Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"14.363871ms"}
{"level":"info","ts":"2023-01-02T14:46:03.143Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2023-01-02T14:46:03.143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2023-01-02T14:46:03.143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2023-01-02T14:46:03.143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2023-01-02T14:46:03.143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2023-01-02T14:46:03.143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2023-01-02T14:46:03.150Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-01-02T14:46:03.155Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2023-01-02T14:46:03.160Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-01-02T14:46:03.164Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-02T14:46:03.165Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-01-02T14:46:03.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2023-01-02T14:46:03.166Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2023-01-02T14:46:03.166Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-02T14:46:03.166Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-02T14:46:03.167Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-02T14:46:03.167Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-02T14:46:03.167Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-02T14:46:03.543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2023-01-02T14:46:03.543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2023-01-02T14:46:03.543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2023-01-02T14:46:03.543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2023-01-02T14:46:03.543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2023-01-02T14:46:03.543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2023-01-02T14:46:03.543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2023-01-02T14:46:03.544Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-02T14:46:03.555Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-02T14:46:03.555Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-02T14:46:03.555Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-02T14:46:03.555Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-02T14:46:03.555Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-02T14:46:03.556Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-02T14:46:03.556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-02T14:46:03.556Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-02T14:46:03.556Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-02T14:46:03.558Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2023-01-02T14:56:03.722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":563}
{"level":"info","ts":"2023-01-02T14:56:03.723Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":563,"took":"515.997µs"}
{"level":"info","ts":"2023-01-02T15:01:03.743Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":774}
{"level":"info","ts":"2023-01-02T15:01:03.745Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":774,"took":"1.5646ms"}
*
* ==> kernel <==
*
15:03:09 up 18 min, 0 users, load average: 1.45, 0.82, 0.48
Linux minikube 6.0.15-300.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Dec 21 18:33:23 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [6a53cb48b762] <==
*
W0102 14:46:04.252473 1 genericapiserver.go:656] Skipping API apps/v1beta1 because it has no resources.
W0102 14:46:04.254167 1 genericapiserver.go:656] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0102 14:46:04.255459 1 genericapiserver.go:656] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0102 14:46:04.256140 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0102 14:46:04.256152 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0102 14:46:04.271770 1 genericapiserver.go:656] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0102 14:46:05.284318 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0102 14:46:05.284454 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key" I0102 14:46:05.284457 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0102 14:46:05.284987 1 secure_serving.go:210] Serving securely on [::]:8443 I0102 14:46:05.285070 1 autoregister_controller.go:141] Starting autoregister controller I0102 14:46:05.285084 1 cache.go:32] Waiting for caches to sync for autoregister controller I0102 14:46:05.285244 1 controller.go:83] Starting OpenAPI AggregationController I0102 14:46:05.285669 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0102 14:46:05.285681 1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller I0102 14:46:05.285713 1 apf_controller.go:300] Starting API Priority and Fairness config controller I0102 14:46:05.285741 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0102 14:46:05.285853 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0102 14:46:05.285854 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0102 14:46:05.285885 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0102 14:46:05.285890 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0102 14:46:05.285947 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0102 14:46:05.285714 1 controller.go:80] Starting OpenAPI V3 AggregationController I0102 14:46:05.286230 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0102 14:46:05.288341 1 available_controller.go:491] Starting AvailableConditionController I0102 14:46:05.288355 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0102 14:46:05.288377 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0102 14:46:05.288383 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister I0102 14:46:05.305569 1 controller.go:85] Starting OpenAPI controller I0102 14:46:05.305611 1 controller.go:85] Starting OpenAPI V3 controller I0102 14:46:05.305636 1 naming_controller.go:291] Starting NamingConditionController I0102 14:46:05.305656 1 establishing_controller.go:76] Starting EstablishingController I0102 14:46:05.305673 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0102 14:46:05.305691 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0102 14:46:05.305708 1 crd_finalizer.go:266] Starting CRDFinalizer I0102 14:46:05.327383 1 controller.go:616] quota admission added evaluator for: namespaces I0102 14:46:05.385162 1 cache.go:39] Caches are synced for autoregister controller I0102 14:46:05.385774 1 apf_controller.go:305] Running API Priority and Fairness config worker I0102 14:46:05.385783 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller I0102 14:46:05.385914 1 cache.go:39] Caches are synced for 
APIServiceRegistrationController controller I0102 14:46:05.389348 1 shared_informer.go:262] Caches are synced for crd-autoregister I0102 14:46:05.389390 1 cache.go:39] Caches are synced for AvailableConditionController controller I0102 14:46:05.392102 1 shared_informer.go:262] Caches are synced for node_authorizer I0102 14:46:06.123582 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0102 14:46:06.291770 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0102 14:46:06.305389 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0102 14:46:06.305414 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. I0102 14:46:06.805296 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0102 14:46:06.842263 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0102 14:46:06.946371 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0102 14:46:06.954734 1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0102 14:46:06.955459 1 controller.go:616] quota admission added evaluator for: endpoints I0102 14:46:06.959363 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io I0102 14:46:07.337419 1 controller.go:616] quota admission added evaluator for: serviceaccounts I0102 14:46:08.208166 1 controller.go:616] quota admission added evaluator for: deployments.apps I0102 14:46:08.216042 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0102 14:46:08.221211 1 controller.go:616] quota admission added evaluator for: daemonsets.apps I0102 14:46:08.282096 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io I0102 14:46:21.290387 1 controller.go:616] quota admission added evaluator for: replicasets.apps I0102 14:46:21.787349 1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps * * ==> kube-controller-manager [823b5e716a97] <== * I0102 14:46:20.878185 1 controllermanager.go:603] Started "ttl" I0102 14:46:20.878298 1 ttl_controller.go:120] Starting TTL controller I0102 14:46:20.878316 1 shared_informer.go:255] Waiting for caches to sync for TTL I0102 14:46:21.027570 1 controllermanager.go:603] Started "replicationcontroller" I0102 14:46:21.028057 1 replica_set.go:205] Starting replicationcontroller controller I0102 14:46:21.028096 1 shared_informer.go:255] Waiting for caches to sync for ReplicationController I0102 14:46:21.051716 1 shared_informer.go:255] Waiting for caches to sync for resource quota W0102 14:46:21.052103 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0102 14:46:21.058408 1 shared_informer.go:262] Caches are synced for HPA I0102 14:46:21.072267 1 shared_informer.go:262] Caches are synced for GC I0102 14:46:21.074101 1 shared_informer.go:262] Caches are synced for PV protection I0102 14:46:21.075727 1 shared_informer.go:255] Waiting for caches to sync for garbage collector I0102 14:46:21.078276 1 shared_informer.go:262] Caches are synced for ephemeral I0102 14:46:21.078408 1 shared_informer.go:262] Caches are synced for 
I0102 14:46:21.079433 1 shared_informer.go:262] Caches are synced for stateful set
I0102 14:46:21.079446 1 shared_informer.go:262] Caches are synced for TTL after finished
I0102 14:46:21.079470 1 shared_informer.go:262] Caches are synced for persistent volume
I0102 14:46:21.079559 1 shared_informer.go:262] Caches are synced for deployment
I0102 14:46:21.079586 1 shared_informer.go:262] Caches are synced for attach detach
I0102 14:46:21.079669 1 shared_informer.go:262] Caches are synced for cronjob
I0102 14:46:21.079741 1 shared_informer.go:262] Caches are synced for node
I0102 14:46:21.079760 1 range_allocator.go:166] Starting range CIDR allocator
I0102 14:46:21.079765 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0102 14:46:21.079776 1 shared_informer.go:262] Caches are synced for cidrallocator
I0102 14:46:21.083400 1 range_allocator.go:367] Set node minikube PodCIDR to [10.244.0.0/24]
I0102 14:46:21.089469 1 shared_informer.go:262] Caches are synced for expand
I0102 14:46:21.094304 1 shared_informer.go:262] Caches are synced for job
I0102 14:46:21.098660 1 shared_informer.go:262] Caches are synced for PVC protection
I0102 14:46:21.101109 1 shared_informer.go:262] Caches are synced for taint
I0102 14:46:21.101178 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0102 14:46:21.101226 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0102 14:46:21.101251 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0102 14:46:21.101227 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I0102 14:46:21.101323 1 taint_manager.go:209] "Sending events to api server"
I0102 14:46:21.101325 1 event.go:294] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0102 14:46:21.105528 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0102 14:46:21.123304 1 shared_informer.go:262] Caches are synced for daemon sets
I0102 14:46:21.127561 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0102 14:46:21.127616 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0102 14:46:21.128651 1 shared_informer.go:262] Caches are synced for endpoint
I0102 14:46:21.128681 1 shared_informer.go:262] Caches are synced for ReplicationController
I0102 14:46:21.128785 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0102 14:46:21.128902 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0102 14:46:21.128970 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0102 14:46:21.129158 1 shared_informer.go:262] Caches are synced for disruption
I0102 14:46:21.129213 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0102 14:46:21.129348 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0102 14:46:21.130453 1 shared_informer.go:262] Caches are synced for crt configmap
I0102 14:46:21.130502 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0102 14:46:21.181118 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0102 14:46:21.219887 1 shared_informer.go:262] Caches are synced for namespace
I0102 14:46:21.228532 1 shared_informer.go:262] Caches are synced for service account
I0102 14:46:21.252449 1 shared_informer.go:262] Caches are synced for resource quota
I0102 14:46:21.306806 1 shared_informer.go:262] Caches are synced for resource quota
I0102 14:46:21.320168 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 1"
I0102 14:46:21.676085 1 shared_informer.go:262] Caches are synced for garbage collector
I0102 14:46:21.727988 1 shared_informer.go:262] Caches are synced for garbage collector
I0102 14:46:21.728042 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0102 14:46:21.801019 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-df49z"
I0102 14:46:22.127051 1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-j64r9"
*
* ==> kube-proxy [4d555d57e070] <==
* I0102 14:46:23.211929 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0102 14:46:23.211994 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0102 14:46:23.212019 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0102 14:46:23.228055 1 server_others.go:206] "Using iptables Proxier"
I0102 14:46:23.228079 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0102 14:46:23.228086 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0102 14:46:23.228108 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0102 14:46:23.228142 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0102 14:46:23.228299 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0102 14:46:23.228495 1 server.go:661] "Version info" version="v1.25.3"
I0102 14:46:23.228511 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0102 14:46:23.229058 1 config.go:226] "Starting endpoint slice config controller"
I0102 14:46:23.229077 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0102 14:46:23.229136 1 config.go:317] "Starting service config controller"
I0102 14:46:23.229173 1 shared_informer.go:255] Waiting for caches to sync for service config
I0102 14:46:23.229233 1 config.go:444] "Starting node config controller"
I0102 14:46:23.229253 1 shared_informer.go:255] Waiting for caches to sync for node config
I0102 14:46:23.330199 1 shared_informer.go:262] Caches are synced for service config
I0102 14:46:23.330562 1 shared_informer.go:262] Caches are synced for node config
I0102 14:46:23.330634 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [84e9e2f6a9cb] <==
* I0102 14:46:05.338893 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0102 14:46:05.339113 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0102 14:46:05.339209 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0102 14:46:05.340939 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0102 14:46:05.340987 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0102 14:46:05.341101 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0102 14:46:05.341115 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0102 14:46:05.341129 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0102 14:46:05.341136 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0102 14:46:05.341193 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0102 14:46:05.341253 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0102 14:46:05.341269 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0102 14:46:05.341255 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0102 14:46:05.341340 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0102 14:46:05.341367 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0102 14:46:05.341676 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0102 14:46:05.341715 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0102 14:46:05.342709 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0102 14:46:05.342763 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0102 14:46:05.342957 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0102 14:46:05.343019 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0102 14:46:05.343013 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0102 14:46:05.343111 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0102 14:46:05.343331 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0102 14:46:05.343387 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0102 14:46:05.343391 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0102 14:46:05.343449 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0102 14:46:05.343361 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0102 14:46:05.343497 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0102 14:46:05.343411 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0102 14:46:05.343512 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0102 14:46:05.343600 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0102 14:46:05.343643 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0102 14:46:06.181332 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0102 14:46:06.181332 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0102 14:46:06.181373 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0102 14:46:06.181386 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0102 14:46:06.257929 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0102 14:46:06.258141 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0102 14:46:06.264380 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0102 14:46:06.264425 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0102 14:46:06.337431 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0102 14:46:06.337465 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0102 14:46:06.346649 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0102 14:46:06.346711 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0102 14:46:06.473711 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0102 14:46:06.473733 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0102 14:46:06.501484 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0102 14:46:06.501601 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0102 14:46:06.549200 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0102 14:46:06.549230 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0102 14:46:06.609498 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0102 14:46:06.609519 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
"extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0102 14:46:06.620088 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0102 14:46:06.620111 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0102 14:46:06.624148 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0102 14:46:06.624171 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0102 14:46:06.626830 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0102 14:46:06.626869 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope I0102 14:46:08.839164 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2023-01-02 14:45:46 UTC, end at Mon 2023-01-02 15:03:10 UTC. -- Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.297205 2036 policy_none.go:49] "None policy: Start" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.299200 2036 memory_manager.go:168] "Starting memorymanager" policy="None" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.299220 2036 state_mem.go:35] "Initializing new in-memory state store" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.299324 2036 state_mem.go:75] "Updated machine memory state" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.299597 2036 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.299613 2036 status_manager.go:161] "Starting to sync pod status with apiserver" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.299625 2036 kubelet.go:2010] "Starting kubelet main sync loop" Jan 02 14:46:08 minikube kubelet[2036]: E0102 14:46:08.299655 2036 kubelet.go:2034] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.311288 2036 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.311565 2036 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.370932 2036 kubelet_node_status.go:70] "Attempting to register node" node="minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.396767 2036 kubelet_node_status.go:108] "Node was previously registered" node="minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.396903 2036 kubelet_node_status.go:73] "Successfully registered node" node="minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.400575 2036 topology_manager.go:205] "Topology Admit Handler" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.400653 2036 topology_manager.go:205] "Topology Admit Handler" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.400701 2036 topology_manager.go:205] "Topology Admit Handler" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.400949 2036 topology_manager.go:205] "Topology Admit Handler" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.565570 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91c789ffd31943242ef457881ac1824a-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"91c789ffd31943242ef457881ac1824a\") " pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.565713 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91c789ffd31943242ef457881ac1824a-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"91c789ffd31943242ef457881ac1824a\") " pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.565831 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91c789ffd31943242ef457881ac1824a-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"91c789ffd31943242ef457881ac1824a\") " pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.566757 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91c789ffd31943242ef457881ac1824a-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"91c789ffd31943242ef457881ac1824a\") " pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.567048 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/91c789ffd31943242ef457881ac1824a-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"91c789ffd31943242ef457881ac1824a\") " pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.567246 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/bd495b7643dfc9d3194bd002e968bc3d-etcd-data\") pod \"etcd-minikube\" (UID: \"bd495b7643dfc9d3194bd002e968bc3d\") " pod="kube-system/etcd-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.567410 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd8b8fe30652798a5ae3fc51e66ef681-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"bd8b8fe30652798a5ae3fc51e66ef681\") " pod="kube-system/kube-apiserver-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.567613 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91c789ffd31943242ef457881ac1824a-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"91c789ffd31943242ef457881ac1824a\") " pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.567931 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91c789ffd31943242ef457881ac1824a-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"91c789ffd31943242ef457881ac1824a\") " pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.568104 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e6ece94cd0950fdbbf66ddae1e4c53b-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"7e6ece94cd0950fdbbf66ddae1e4c53b\") " pod="kube-system/kube-scheduler-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.568202 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/bd495b7643dfc9d3194bd002e968bc3d-etcd-certs\") pod \"etcd-minikube\" (UID: \"bd495b7643dfc9d3194bd002e968bc3d\") " pod="kube-system/etcd-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.568259 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd8b8fe30652798a5ae3fc51e66ef681-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"bd8b8fe30652798a5ae3fc51e66ef681\") " pod="kube-system/kube-apiserver-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.568323 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd8b8fe30652798a5ae3fc51e66ef681-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"bd8b8fe30652798a5ae3fc51e66ef681\") " pod="kube-system/kube-apiserver-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.568390 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/bd8b8fe30652798a5ae3fc51e66ef681-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"bd8b8fe30652798a5ae3fc51e66ef681\") " pod="kube-system/kube-apiserver-minikube" Jan 02 14:46:08 minikube kubelet[2036]: I0102 14:46:08.568450 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd8b8fe30652798a5ae3fc51e66ef681-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"bd8b8fe30652798a5ae3fc51e66ef681\") " pod="kube-system/kube-apiserver-minikube" Jan 02 14:46:09 minikube kubelet[2036]: I0102 14:46:09.242979 2036 apiserver.go:52] "Watching apiserver" Jan 02 14:46:09 minikube kubelet[2036]: I0102 14:46:09.477271 2036 reconciler.go:169] "Reconciler: start to sync state" Jan 02 14:46:09 minikube kubelet[2036]: E0102 14:46:09.865431 2036 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube" Jan 02 14:46:10 minikube kubelet[2036]: E0102 14:46:10.060187 2036 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube" Jan 02 14:46:10 minikube kubelet[2036]: E0102 14:46:10.268627 2036 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube" Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.110546 2036 topology_manager.go:205] "Topology Admit Handler" Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.155510 2036 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.156046 2036 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.164418 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2spv\" (UniqueName: \"kubernetes.io/projected/7b5582e8-6fcb-4451-ae04-bc5363271c31-kube-api-access-v2spv\") pod \"storage-provisioner\" (UID: \"7b5582e8-6fcb-4451-ae04-bc5363271c31\") " pod="kube-system/storage-provisioner" Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.164452 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7b5582e8-6fcb-4451-ae04-bc5363271c31-tmp\") pod \"storage-provisioner\" (UID: \"7b5582e8-6fcb-4451-ae04-bc5363271c31\") " pod="kube-system/storage-provisioner" Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.281187 2036 projected.go:290] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.281286 2036 projected.go:196] Error preparing data for projected volume kube-api-access-v2spv for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.281463 2036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b5582e8-6fcb-4451-ae04-bc5363271c31-kube-api-access-v2spv podName:7b5582e8-6fcb-4451-ae04-bc5363271c31 nodeName:}" failed. No retries permitted until 2023-01-02 14:46:21.781399608 +0000 UTC m=+13.603433556 (durationBeforeRetry 500ms). 
Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.815639 2036 topology_manager.go:205] "Topology Admit Handler"
Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.871423 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea1e3558-8475-4afb-a34b-b1791201a461-xtables-lock\") pod \"kube-proxy-df49z\" (UID: \"ea1e3558-8475-4afb-a34b-b1791201a461\") " pod="kube-system/kube-proxy-df49z"
Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.871469 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea1e3558-8475-4afb-a34b-b1791201a461-lib-modules\") pod \"kube-proxy-df49z\" (UID: \"ea1e3558-8475-4afb-a34b-b1791201a461\") " pod="kube-system/kube-proxy-df49z"
Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.871494 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97rl\" (UniqueName: \"kubernetes.io/projected/ea1e3558-8475-4afb-a34b-b1791201a461-kube-api-access-s97rl\") pod \"kube-proxy-df49z\" (UID: \"ea1e3558-8475-4afb-a34b-b1791201a461\") " pod="kube-system/kube-proxy-df49z"
Jan 02 14:46:21 minikube kubelet[2036]: I0102 14:46:21.871532 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea1e3558-8475-4afb-a34b-b1791201a461-kube-proxy\") pod \"kube-proxy-df49z\" (UID: \"ea1e3558-8475-4afb-a34b-b1791201a461\") " pod="kube-system/kube-proxy-df49z"
Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.871648 2036 projected.go:290] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.871664 2036 projected.go:196] Error preparing data for projected volume kube-api-access-v2spv for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.871701 2036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b5582e8-6fcb-4451-ae04-bc5363271c31-kube-api-access-v2spv podName:7b5582e8-6fcb-4451-ae04-bc5363271c31 nodeName:}" failed. No retries permitted until 2023-01-02 14:46:22.871688958 +0000 UTC m=+14.693722800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2spv" (UniqueName: "kubernetes.io/projected/7b5582e8-6fcb-4451-ae04-bc5363271c31-kube-api-access-v2spv") pod "storage-provisioner" (UID: "7b5582e8-6fcb-4451-ae04-bc5363271c31") : configmap "kube-root-ca.crt" not found
Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.989352 2036 projected.go:290] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.989375 2036 projected.go:196] Error preparing data for projected volume kube-api-access-s97rl for pod kube-system/kube-proxy-df49z: configmap "kube-root-ca.crt" not found
Jan 02 14:46:21 minikube kubelet[2036]: E0102 14:46:21.989417 2036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1e3558-8475-4afb-a34b-b1791201a461-kube-api-access-s97rl podName:ea1e3558-8475-4afb-a34b-b1791201a461 nodeName:}" failed. No retries permitted until 2023-01-02 14:46:22.489404342 +0000 UTC m=+14.311438186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s97rl" (UniqueName: "kubernetes.io/projected/ea1e3558-8475-4afb-a34b-b1791201a461-kube-api-access-s97rl") pod "kube-proxy-df49z" (UID: "ea1e3558-8475-4afb-a34b-b1791201a461") : configmap "kube-root-ca.crt" not found
Jan 02 14:46:22 minikube kubelet[2036]: I0102 14:46:22.133035 2036 topology_manager.go:205] "Topology Admit Handler"
Jan 02 14:46:22 minikube kubelet[2036]: I0102 14:46:22.173705 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/217585a8-cd26-40c1-a434-7aa0030c3c0d-config-volume\") pod \"coredns-565d847f94-j64r9\" (UID: \"217585a8-cd26-40c1-a434-7aa0030c3c0d\") " pod="kube-system/coredns-565d847f94-j64r9"
Jan 02 14:46:22 minikube kubelet[2036]: I0102 14:46:22.173740 2036 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9kzh\" (UniqueName: \"kubernetes.io/projected/217585a8-cd26-40c1-a434-7aa0030c3c0d-kube-api-access-w9kzh\") pod \"coredns-565d847f94-j64r9\" (UID: \"217585a8-cd26-40c1-a434-7aa0030c3c0d\") " pod="kube-system/coredns-565d847f94-j64r9"
Jan 02 14:46:53 minikube kubelet[2036]: I0102 14:46:53.810694 2036 scope.go:115] "RemoveContainer" containerID="dcd6c0b6867c9dc5a99519a91c7a41151b2bc38605b332d84c3a94924af019ca"
*
* ==> storage-provisioner [641d667daf08] <==
* I0102 14:46:53.975556 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0102 14:46:53.982531 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0102 14:46:53.982572 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0102 14:46:53.988968 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0102 14:46:53.989015 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9d1b31b2-a728-4c13-8b0a-146d0a5d5685", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_4abe9fc6-ca2c-4ea2-b9d8-ffebc09b4fc7 became leader
I0102 14:46:53.989074 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_4abe9fc6-ca2c-4ea2-b9d8-ffebc09b4fc7!
I0102 14:46:54.090185 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_4abe9fc6-ca2c-4ea2-b9d8-ffebc09b4fc7!
*
* ==> storage-provisioner [dcd6c0b6867c] <==
* I0102 14:46:23.377389 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0102 14:46:53.380070 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout