| delete | --all                | minikube | testuser | v1.26.0 | 30 Jun 22 10:37 IST | 30 Jun 22 10:38 IST |
| start  | --driver=docker      | minikube | testuser | v1.26.0 | 30 Jun 22 10:38 IST |                     |
|        | --force-systemd=true |          |          |         |                     |                     |
|--------------|--------------------------------|----------|----------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/06/30 10:38:32
Running on machine: INPU1H23212
Binary: Built with gc go1.18.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0630 10:38:32.008910 5076 out.go:296] Setting OutFile to fd 88 ...
I0630 10:38:32.025796 5076 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0630 10:38:32.026809 5076 out.go:309] Setting ErrFile to fd 92...
I0630 10:38:32.026809 5076 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0630 10:38:34.675019 5076 out.go:303] Setting JSON to false
I0630 10:38:34.682363 5076 start.go:115] hostinfo: {"hostname":"INPU1WHP6153900","uptime":4492,"bootTime":1656561222,"procs":311,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9596e9a5-bdf8-44e6-acfe-99a94c4e646a"}
W0630 10:38:34.682363 5076 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0630 10:38:34.683868 5076 out.go:177] * minikube v1.26.0 on Microsoft Windows 10 Enterprise 10.0.19042 Build 19042
I0630 10:38:34.685104 5076 notify.go:193] Checking for updates...
I0630 10:38:34.687933 5076 driver.go:360] Setting default libvirt URI to qemu:///system
I0630 10:38:35.274894 5076 docker.go:137] docker version: linux-20.10.16
I0630 10:38:35.283870 5076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0630 10:38:39.438226 5076 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (4.1543532s)
I0630 10:38:39.438795 5076 info.go:265] docker info: {ID:A5QY:KA3W:JHVE:AITZ:GCC7:7ZSD:J4UB:TFKF:ISHL:7KQK:T5NH:BDBB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-30 05:08:36.560515248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26626342912 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0630 10:38:39.439921 5076 out.go:177] * Using the docker driver based on user configuration
I0630 10:38:39.441628 5076 start.go:284] selected driver: docker
I0630 10:38:39.441628 5076 start.go:805] validating driver "docker" against
I0630 10:38:39.441628 5076 start.go:816] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0630 10:38:39.451050 5076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0630 10:38:41.946364 5076 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4953123s)
I0630 10:38:41.946364 5076 info.go:265] docker info: {ID:A5QY:KA3W:JHVE:AITZ:GCC7:7ZSD:J4UB:TFKF:ISHL:7KQK:T5NH:BDBB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-30 05:08:39.941081789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26626342912 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0630 10:38:41.947455 5076 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0630 10:38:42.130135 5076 start_flags.go:377] Using suggested 8100MB memory alloc based on sys=32431MB, container=25392MB
I0630 10:38:42.130135 5076 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0630 10:38:42.131226 5076 out.go:177] * Using Docker Desktop driver with root privileges
I0630 10:38:42.132523 5076 cni.go:95] Creating CNI manager for ""
I0630 10:38:42.132523 5076 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0630 10:38:42.132523 5076 start_flags.go:310] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\E0678235:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0630 10:38:42.134306 5076 out.go:177] * Starting control plane node minikube in cluster minikube
I0630 10:38:42.142038 5076 cache.go:120] Beginning downloading kic base image for docker with docker
I0630 10:38:42.143389 5076 out.go:177] * Pulling base image ...
I0630 10:38:42.144421 5076 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon
I0630 10:38:42.144421 5076 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
I0630 10:38:42.145089 5076 preload.go:148] Found local preload: C:\Users\E0678235\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
I0630 10:38:42.145089 5076 cache.go:57] Caching tarball of preloaded images
I0630 10:38:42.146765 5076 preload.go:174] Found C:\Users\E0678235\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0630 10:38:42.146765 5076 cache.go:60] Finished verifying existence of preloaded tar for v1.24.1 on docker
I0630 10:38:42.147905 5076 profile.go:148] Saving config to C:\Users\E0678235\.minikube\profiles\minikube\config.json ...
I0630 10:38:42.148435 5076 lock.go:35] WriteFile acquiring C:\Users\E0678235\.minikube\profiles\minikube\config.json: {Name:mk5827622f8ffe3d1d0556da899e6c5c9862bf48 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0630 10:38:43.101473 5076 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon, skipping pull
I0630 10:38:43.101473 5076 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 exists in daemon, skipping load
I0630 10:38:43.101473 5076 cache.go:208] Successfully downloaded all kic artifacts
I0630 10:38:43.101719 5076 start.go:352] acquiring machines lock for minikube: {Name:mkf663b59f7e99d6dccfbd7905dc22cdededf38f Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0630 10:38:43.103053 5076 start.go:356] acquired machines lock for "minikube" in 1.3339ms
I0630 10:38:43.103053 5076 start.go:91] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\E0678235:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0630 10:38:43.103053 5076 start.go:131] createHost starting for "" (driver="docker")
I0630 10:38:43.104337 5076 out.go:204] * Creating docker container (CPUs=2, Memory=8100MB) ...
I0630 10:38:43.104889 5076 start.go:165] libmachine.API.Create for "minikube" (driver="docker")
I0630 10:38:43.104889 5076 client.go:168] LocalClient.Create starting
I0630 10:38:43.107581 5076 main.go:134] libmachine: Reading certificate data from C:\Users\E0678235\.minikube\certs\ca.pem
I0630 10:38:43.110839 5076 main.go:134] libmachine: Decoding PEM data...
I0630 10:38:43.110839 5076 main.go:134] libmachine: Parsing certificate...
I0630 10:38:43.110839 5076 main.go:134] libmachine: Reading certificate data from C:\Users\E0678235\.minikube\certs\cert.pem
I0630 10:38:43.114987 5076 main.go:134] libmachine: Decoding PEM data...
I0630 10:38:43.114987 5076 main.go:134] libmachine: Parsing certificate...
I0630 10:38:43.127537 5076 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0630 10:38:44.528086 5076 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0630 10:38:44.528086 5076 cli_runner.go:217] Completed: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.4005474s)
I0630 10:38:44.533888 5076 network_create.go:272] running [docker network inspect minikube] to gather additional debugging logs...
I0630 10:38:44.533888 5076 cli_runner.go:164] Run: docker network inspect minikube
W0630 10:38:45.277755 5076 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0630 10:38:45.277899 5076 network_create.go:275] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0630 10:38:45.277899 5076 network_create.go:277] output of [docker network inspect minikube]:
-- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0630 10:38:45.283938 5076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0630 10:38:46.008378 5076 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b7cdb0] misses:0}
I0630 10:38:46.008396 5076 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0630 10:38:46.008396 5076 network_create.go:115] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0630 10:38:46.013293 5076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0630 10:38:48.123389 5076 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube: (2.1100152s)
I0630 10:38:48.123389 5076 network_create.go:99] docker network minikube 192.168.49.0/24 created
I0630 10:38:48.123389 5076 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0630 10:38:48.140822 5076 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0630 10:38:49.319796 5076 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1789735s)
I0630 10:38:49.324234 5076 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0630 10:38:50.244312 5076 oci.go:103] Successfully created a docker volume minikube
I0630 10:38:50.249500 5076 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -d /var/lib
I0630 10:38:52.346802 5076 cli_runner.go:217] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -d /var/lib: (2.0973008s)
I0630 10:38:52.346802 5076 oci.go:107] Successfully prepared a docker volume minikube
I0630 10:38:52.346802 5076 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
I0630 10:38:52.346802 5076 kic.go:179] Starting extracting preloaded images to volume ...
I0630 10:38:52.355047 5076 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\E0678235\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -I lz4 -xf /preloaded.tar -C /extractDir
I0630 10:39:05.574915 5076 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\E0678235\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -I lz4 -xf /preloaded.tar -C /extractDir: (13.2198603s)
I0630 10:39:05.574915 5076 kic.go:188] duration metric: took 13.228105 seconds to extract preloaded images to volume
I0630 10:39:05.581385 5076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0630 10:39:08.077381 5076 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4959946s)
I0630 10:39:08.077381 5076 info.go:265] docker info: {ID:A5QY:KA3W:JHVE:AITZ:GCC7:7ZSD:J4UB:TFKF:ISHL:7KQK:T5NH:BDBB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-30 05:09:06.436190097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26626342912 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0630 10:39:08.083671 5076 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0630 10:39:10.379536 5076 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2958636s)
I0630 10:39:10.385370 5076 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8100mb --memory-swap=8100mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95
I0630 10:39:13.001417 5076 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8100mb --memory-swap=8100mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95: (2.6160451s)
I0630 10:39:13.008162 5076 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0630 10:39:13.674575 5076 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0630 10:39:14.936487 5076 cli_runner.go:217] Completed: docker container inspect minikube --format={{.State.Status}}: (1.2619108s)
I0630 10:39:14.940888 5076 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0630 10:39:15.904675 5076 oci.go:144] the created container "minikube" has a running status.
I0630 10:39:15.904675 5076 kic.go:210] Creating ssh key for kic: C:\Users\E0678235\.minikube\machines\minikube\id_rsa...
I0630 10:39:16.075300 5076 kic_runner.go:191] docker (temp): C:\Users\E0678235\.minikube\machines\minikube\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0630 10:39:17.310375 5076 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0630 10:39:18.334503 5076 cli_runner.go:217] Completed: docker container inspect minikube --format={{.State.Status}}: (1.0241275s)
I0630 10:39:18.348423 5076 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0630 10:39:18.348423 5076 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0630 10:39:20.355860 5076 kic_runner.go:123] Done: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]: (2.0073888s)
I0630 10:39:20.358654 5076 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\E0678235\.minikube\machines\minikube\id_rsa...
I0630 10:39:23.114195 5076 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0630 10:39:23.677214 5076 machine.go:88] provisioning docker machine ...
I0630 10:39:23.677214 5076 ubuntu.go:169] provisioning hostname "minikube"
I0630 10:39:23.682194 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:24.726889 5076 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (1.0446949s)
I0630 10:39:24.730057 5076 main.go:134] libmachine: Using SSH client type: native
I0630 10:39:24.731578 5076 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x12b3d20] 0x12b6b80 [] 0s} 127.0.0.1 60306 }
I0630 10:39:24.731578 5076 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0630 10:39:24.889453 5076 main.go:134] libmachine: SSH cmd err, output: : minikube
I0630 10:39:24.897604 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:25.748539 5076 main.go:134] libmachine: Using SSH client type: native
I0630 10:39:25.749771 5076 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x12b3d20] 0x12b6b80 [] 0s} 127.0.0.1 60306 }
I0630 10:39:25.749771 5076 main.go:134] libmachine: About to run SSH command:
		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0630 10:39:25.887052 5076 main.go:134] libmachine: SSH cmd err, output: :
I0630 10:39:25.887052 5076 ubuntu.go:175] set auth options {CertDir:C:\Users\E0678235\.minikube CaCertPath:C:\Users\E0678235\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\E0678235\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\E0678235\.minikube\machines\server.pem ServerKeyPath:C:\Users\E0678235\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\E0678235\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\E0678235\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\E0678235\.minikube}
I0630 10:39:25.887052 5076 ubuntu.go:177] setting up certificates
I0630 10:39:25.887052 5076 provision.go:83] configureAuth start
I0630 10:39:25.892470 5076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0630 10:39:26.502848 5076 provision.go:138] copyHostCerts
I0630 10:39:26.507105 5076 exec_runner.go:144] found C:\Users\E0678235\.minikube/key.pem, removing ...
I0630 10:39:26.507105 5076 exec_runner.go:207] rm: C:\Users\E0678235\.minikube\key.pem
I0630 10:39:26.510575 5076 exec_runner.go:151] cp: C:\Users\E0678235\.minikube\certs\key.pem --> C:\Users\E0678235\.minikube/key.pem (1675 bytes)
I0630 10:39:26.517023 5076 exec_runner.go:144] found C:\Users\E0678235\.minikube/ca.pem, removing ...
I0630 10:39:26.517023 5076 exec_runner.go:207] rm: C:\Users\E0678235\.minikube\ca.pem
I0630 10:39:26.518657 5076 exec_runner.go:151] cp: C:\Users\E0678235\.minikube\certs\ca.pem --> C:\Users\E0678235\.minikube/ca.pem (1082 bytes)
I0630 10:39:26.524229 5076 exec_runner.go:144] found C:\Users\E0678235\.minikube/cert.pem, removing ...
I0630 10:39:26.524229 5076 exec_runner.go:207] rm: C:\Users\E0678235\.minikube\cert.pem
I0630 10:39:26.526483 5076 exec_runner.go:151] cp: C:\Users\E0678235\.minikube\certs\cert.pem --> C:\Users\E0678235\.minikube/cert.pem (1127 bytes)
I0630 10:39:26.530363 5076 provision.go:112] generating server cert: C:\Users\E0678235\.minikube\machines\server.pem ca-key=C:\Users\E0678235\.minikube\certs\ca.pem private-key=C:\Users\E0678235\.minikube\certs\ca-key.pem org=E0678235.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0630 10:39:26.618711 5076 provision.go:172] copyRemoteCerts
I0630 10:39:26.629066 5076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0630 10:39:26.634303 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:27.373672 5076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60306 SSHKeyPath:C:\Users\E0678235\.minikube\machines\minikube\id_rsa Username:docker}
I0630 10:39:27.492040 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0630 10:39:27.520429 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0630 10:39:27.543933 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0630 10:39:27.566732 5076 provision.go:86] duration metric: configureAuth took 1.6796781s
I0630 10:39:27.566732 5076 ubuntu.go:193] setting minikube options for container-runtime
I0630 10:39:27.568483 5076 config.go:178] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
I0630 10:39:27.575136 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:28.709699 5076 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (1.1345631s)
I0630 10:39:28.712431 5076 main.go:134] libmachine: Using SSH client type: native
I0630 10:39:28.714599 5076 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x12b3d20] 0x12b6b80 <nil> [] 0s} 127.0.0.1 60306 <nil> <nil>}
I0630 10:39:28.714599 5076 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0630 10:39:28.867063 5076 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0630 10:39:28.867063 5076 ubuntu.go:71] root file system type: overlay
I0630 10:39:28.867063 5076 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0630 10:39:28.871809 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:29.629034 5076 main.go:134] libmachine: Using SSH client type: native
I0630 10:39:29.630315 5076 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x12b3d20] 0x12b6b80 <nil> [] 0s} 127.0.0.1 60306 <nil> <nil>}
I0630 10:39:29.630315 5076 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0630 10:39:29.744112 5076 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0630 10:39:29.749459 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:30.547722 5076 main.go:134] libmachine: Using SSH client type: native
I0630 10:39:30.549156 5076 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x12b3d20] 0x12b6b80 <nil> [] 0s} 127.0.0.1 60306 <nil> <nil>}
I0630 10:39:30.549156 5076 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0630 10:39:33.359707 5076 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-06-30 05:09:29.733695027 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0630 10:39:33.359707 5076 machine.go:91] provisioned docker machine in 9.6824867s
I0630 10:39:33.359707 5076 client.go:171] LocalClient.Create took 50.2547883s
I0630 10:39:33.359707 5076 start.go:173] duration metric: libmachine.API.Create for "minikube" took 50.2547883s
I0630 10:39:33.360211 5076 start.go:306] post-start starting for "minikube" (driver="docker")
I0630 10:39:33.360220 5076 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0630 10:39:33.373070 5076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0630 10:39:33.378338 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:34.144297 5076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60306 SSHKeyPath:C:\Users\E0678235\.minikube\machines\minikube\id_rsa Username:docker}
I0630 10:39:34.261994 5076 ssh_runner.go:195] Run: cat /etc/os-release
I0630 10:39:34.267025 5076 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0630 10:39:34.267025 5076 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0630 10:39:34.267025 5076 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0630 10:39:34.267025 5076 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0630 10:39:34.267025 5076 filesync.go:126] Scanning C:\Users\E0678235\.minikube\addons for local assets ...
I0630 10:39:34.268475 5076 filesync.go:126] Scanning C:\Users\E0678235\.minikube\files for local assets ...
I0630 10:39:34.270902 5076 start.go:309] post-start completed in 910.6809ms
I0630 10:39:34.298625 5076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0630 10:39:35.074126 5076 profile.go:148] Saving config to C:\Users\E0678235\.minikube\profiles\minikube\config.json ...
I0630 10:39:35.100554 5076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0630 10:39:35.105574 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:36.520657 5076 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (1.4150822s)
I0630 10:39:36.520897 5076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60306 SSHKeyPath:C:\Users\E0678235\.minikube\machines\minikube\id_rsa Username:docker}
I0630 10:39:36.626828 5076 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.5262735s)
I0630 10:39:36.636703 5076 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0630 10:39:36.642161 5076 start.go:134] duration metric: createHost completed in 53.5390756s
I0630 10:39:36.642161 5076 start.go:81] releasing machines lock for "minikube", held for 53.5390756s
I0630 10:39:36.647324 5076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0630 10:39:37.422228 5076 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0630 10:39:37.429165 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:37.433506 5076 ssh_runner.go:195] Run: systemctl --version
I0630 10:39:37.444796 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0630 10:39:38.651103 5076 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (1.2063058s)
I0630 10:39:38.651103 5076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60306 SSHKeyPath:C:\Users\E0678235\.minikube\machines\minikube\id_rsa Username:docker}
I0630 10:39:38.714617 5076 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (1.2854516s)
I0630 10:39:38.714617 5076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60306 SSHKeyPath:C:\Users\E0678235\.minikube\machines\minikube\id_rsa Username:docker}
I0630 10:39:38.745250 5076 ssh_runner.go:235] Completed: systemctl --version: (1.3117425s)
I0630 10:39:38.755003 5076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0630 10:39:38.767682 5076 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0630 10:39:38.777404 5076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0630 10:39:39.885015 5076 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.4627856s)
I0630 10:39:39.885015 5076 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service crio: (1.1076095s)
W0630 10:39:39.885015 5076 start.go:731] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 60
stdout:

stderr:
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
I0630 10:39:39.885015 5076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
W0630 10:39:39.885015 5076 out.go:239] ! This container is having trouble accessing https://k8s.gcr.io
W0630 10:39:39.886418 5076 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0630 10:39:39.921048 5076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0630 10:39:40.015497 5076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0630 10:39:40.115968 5076 docker.go:502] Forcing docker to use systemd as cgroup manager...
I0630 10:39:40.115968 5076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
I0630 10:39:40.144325 5076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0630 10:39:40.259861 5076 ssh_runner.go:195] Run: sudo systemctl restart docker
I0630 10:39:42.383025 5076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.1231627s)
I0630 10:39:42.394685 5076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0630 10:39:42.517873 5076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0630 10:39:42.632117 5076 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0630 10:39:42.648120 5076 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0630 10:39:42.660644 5076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0630 10:39:42.665725 5076 start.go:468] Will wait 60s for crictl version
I0630 10:39:42.677378 5076 ssh_runner.go:195] Run: sudo crictl version
I0630 10:39:42.804577 5076 start.go:477] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0630 10:39:42.810460 5076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0630 10:39:42.858756 5076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0630 10:39:42.899343 5076 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
I0630 10:39:42.909067 5076 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I0630 10:39:43.854491 5076 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0630 10:39:43.862156 5076 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0630 10:39:43.869347 5076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0630 10:39:43.887388 5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0630 10:39:44.655883 5076 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
I0630 10:39:44.660832 5076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0630 10:39:44.700966 5076 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0630 10:39:44.700966 5076 docker.go:533] Images already preloaded, skipping extraction
I0630 10:39:44.706625 5076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0630 10:39:44.743668 5076 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0630 10:39:44.743683 5076 cache_images.go:84] Images are preloaded, skipping loading
I0630 10:39:44.750752 5076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0630 10:39:44.840067 5076 cni.go:95] Creating CNI manager for ""
I0630 10:39:44.840067 5076 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0630 10:39:44.840067 5076 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0630 10:39:44.840067 5076 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0630 10:39:44.840067 5076 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0630 10:39:44.840067 5076 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0630 10:39:44.854397 5076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
I0630 10:39:44.864794 5076 binaries.go:44] Found k8s binaries, skipping transfer
I0630 10:39:44.874282 5076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0630 10:39:44.884197 5076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0630 10:39:44.900374 5076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0630 10:39:44.915858 5076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0630 10:39:44.944124 5076 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0630 10:39:44.950672 5076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0630 10:39:44.963270 5076 certs.go:54] Setting up C:\Users\E0678235\.minikube\profiles\minikube for IP: 192.168.49.2
I0630 10:39:44.972128 5076 certs.go:182] skipping minikubeCA CA generation: C:\Users\E0678235\.minikube\ca.key
I0630 10:39:44.978169 5076 certs.go:182] skipping proxyClientCA CA generation: C:\Users\E0678235\.minikube\proxy-client-ca.key
I0630 10:39:44.979790 5076 certs.go:302] generating minikube-user signed cert: C:\Users\E0678235\.minikube\profiles\minikube\client.key
I0630 10:39:44.981501 5076 crypto.go:68] Generating cert C:\Users\E0678235\.minikube\profiles\minikube\client.crt with IP's: []
I0630 10:39:45.028294 5076 crypto.go:156] Writing cert to C:\Users\E0678235\.minikube\profiles\minikube\client.crt ...
I0630 10:39:45.028294 5076 lock.go:35] WriteFile acquiring C:\Users\E0678235\.minikube\profiles\minikube\client.crt: {Name:mkf43d2a9c8ae3bc7017578c1d14c3625b3fe5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0630 10:39:45.038939 5076 crypto.go:164] Writing key to C:\Users\E0678235\.minikube\profiles\minikube\client.key ...
I0630 10:39:45.038939 5076 lock.go:35] WriteFile acquiring C:\Users\E0678235\.minikube\profiles\minikube\client.key: {Name:mk361548f3452c009c483363cf18ea2f001f7a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0630 10:39:45.049478 5076 certs.go:302] generating minikube signed cert: C:\Users\E0678235\.minikube\profiles\minikube\apiserver.key.dd3b5fb2
I0630 10:39:45.049478 5076 crypto.go:68] Generating cert C:\Users\E0678235\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0630 10:39:45.444253 5076 crypto.go:156] Writing cert to C:\Users\E0678235\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 ...
I0630 10:39:45.444253 5076 lock.go:35] WriteFile acquiring C:\Users\E0678235\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2: {Name:mke349cdeee39c9d54e8ef44c7e0b13249f2eac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0630 10:39:45.454368 5076 crypto.go:164] Writing key to C:\Users\E0678235\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 ...
I0630 10:39:45.454368 5076 lock.go:35] WriteFile acquiring C:\Users\E0678235\.minikube\profiles\minikube\apiserver.key.dd3b5fb2: {Name:mkd64a93a45bb384e7fcb6a9e2f9bcac96229dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0630 10:39:45.465554 5076 certs.go:320] copying C:\Users\E0678235\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 -> C:\Users\E0678235\.minikube\profiles\minikube\apiserver.crt
I0630 10:39:45.471559 5076 certs.go:324] copying C:\Users\E0678235\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 -> C:\Users\E0678235\.minikube\profiles\minikube\apiserver.key
I0630 10:39:45.478193 5076 certs.go:302] generating aggregator signed cert: C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.key
I0630 10:39:45.478193 5076 crypto.go:68] Generating cert C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.crt with IP's: []
I0630 10:39:45.757328 5076 crypto.go:156] Writing cert to C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.crt ...
I0630 10:39:45.757328 5076 lock.go:35] WriteFile acquiring C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.crt: {Name:mk1079d5e6552f2ebaac660b577e6676b3d080c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0630 10:39:45.768091 5076 crypto.go:164] Writing key to C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.key ...
I0630 10:39:45.768091 5076 lock.go:35] WriteFile acquiring C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.key: {Name:mkec2ed31a0e8d217a6ccb4440940e87c8c5ea61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0630 10:39:45.778773 5076 certs.go:388] found cert: C:\Users\E0678235\.minikube\certs\C:\Users\E0678235\.minikube\certs\ca-key.pem (1675 bytes)
I0630 10:39:45.778773 5076 certs.go:388] found cert: C:\Users\E0678235\.minikube\certs\C:\Users\E0678235\.minikube\certs\ca.pem (1082 bytes)
I0630 10:39:45.778773 5076 certs.go:388] found cert: C:\Users\E0678235\.minikube\certs\C:\Users\E0678235\.minikube\certs\cert.pem (1127 bytes)
I0630 10:39:45.778773 5076 certs.go:388] found cert: C:\Users\E0678235\.minikube\certs\C:\Users\E0678235\.minikube\certs\key.pem (1675 bytes)
I0630 10:39:45.784327 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0630 10:39:45.809166 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0630 10:39:45.833955 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0630 10:39:45.856560 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0630 10:39:45.882321 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0630 10:39:45.909230 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0630 10:39:45.934469 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0630 10:39:45.958593 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0630 10:39:45.984343 5076 ssh_runner.go:362] scp C:\Users\E0678235\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0630 10:39:46.007284 5076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0630 10:39:46.036456 5076 ssh_runner.go:195] Run: openssl version
I0630 10:39:46.053987 5076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0630 10:39:46.076630 5076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0630 10:39:46.085478 5076 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 07:08 /usr/share/ca-certificates/minikubeCA.pem
I0630 10:39:46.101030 5076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0630 10:39:46.121237 5076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0630 10:39:46.132684 5076 kubeadm.go:395] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\E0678235:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0630 10:39:46.139068 5076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0630 10:39:46.185067 5076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0630 10:39:46.209204 5076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0630 10:39:46.222208 5076 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0630 10:39:46.234735 5076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0630 10:39:46.245885 5076 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0630 10:39:46.245885 5076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0630 10:43:48.326968 5076 out.go:204] - Generating certificates and keys ... I0630 10:43:48.331764 5076 out.go:204] - Booting up control plane ... W0630 10:43:48.333858 5076 out.go:239] ! 
initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing 
"admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. 
Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0630 05:09:46.286461 1199 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher I0630 10:43:48.335527 5076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force" I0630 10:43:49.448086 5076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.112558s) I0630 10:43:49.457602 5076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0630 10:43:49.469235 5076 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0630 10:43:49.478797 5076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0630 10:43:49.488301 5076 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf 
/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0630 10:43:49.488301 5076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0630 10:47:50.504633 5076 out.go:204] - Generating certificates and keys ... I0630 10:47:50.509374 5076 out.go:204] - Booting up control plane ... 
I0630 10:47:50.512583 5076 kubeadm.go:397] StartCluster complete in 8m4.3796085s I0630 10:47:50.512583 5076 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]} I0630 10:47:50.522828 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0630 10:47:50.550220 5076 cri.go:87] found id: "" I0630 10:47:50.550220 5076 logs.go:274] 0 containers: [] W0630 10:47:50.550220 5076 logs.go:276] No container was found matching "kube-apiserver" I0630 10:47:50.550220 5076 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]} I0630 10:47:50.562860 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd I0630 10:47:50.594029 5076 cri.go:87] found id: "" I0630 10:47:50.594029 5076 logs.go:274] 0 containers: [] W0630 10:47:50.594029 5076 logs.go:276] No container was found matching "etcd" I0630 10:47:50.594029 5076 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]} I0630 10:47:50.604157 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns I0630 10:47:50.630454 5076 cri.go:87] found id: "" I0630 10:47:50.630454 5076 logs.go:274] 0 containers: [] W0630 10:47:50.630454 5076 logs.go:276] No container was found matching "coredns" I0630 10:47:50.630454 5076 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]} I0630 10:47:50.648208 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0630 10:47:50.676461 5076 cri.go:87] found id: "" I0630 10:47:50.676461 5076 logs.go:274] 0 containers: [] W0630 10:47:50.676461 5076 logs.go:276] No container was found matching "kube-scheduler" I0630 10:47:50.676461 5076 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]} I0630 10:47:50.686269 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy I0630 10:47:50.716295 5076 cri.go:87] found id: "" I0630 10:47:50.716295 5076 logs.go:274] 0 containers: [] 
W0630 10:47:50.716295 5076 logs.go:276] No container was found matching "kube-proxy" I0630 10:47:50.716295 5076 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]} I0630 10:47:50.730646 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0630 10:47:50.760336 5076 cri.go:87] found id: "" I0630 10:47:50.760336 5076 logs.go:274] 0 containers: [] W0630 10:47:50.760336 5076 logs.go:276] No container was found matching "kubernetes-dashboard" I0630 10:47:50.760336 5076 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]} I0630 10:47:50.771818 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0630 10:47:50.800785 5076 cri.go:87] found id: "" I0630 10:47:50.800785 5076 logs.go:274] 0 containers: [] W0630 10:47:50.800785 5076 logs.go:276] No container was found matching "storage-provisioner" I0630 10:47:50.800785 5076 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]} I0630 10:47:50.813198 5076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0630 10:47:50.839919 5076 cri.go:87] found id: "" I0630 10:47:50.839919 5076 logs.go:274] 0 containers: [] W0630 10:47:50.839919 5076 logs.go:276] No container was found matching "kube-controller-manager" I0630 10:47:50.839919 5076 logs.go:123] Gathering logs for kubelet ... I0630 10:47:50.839919 5076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0630 10:47:50.915008 5076 logs.go:123] Gathering logs for dmesg ... I0630 10:47:50.915008 5076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0630 10:47:50.935234 5076 logs.go:123] Gathering logs for describe nodes ... 
I0630 10:47:50.935234 5076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0630 10:47:50.990564 5076 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I0630 10:47:50.990564 5076 logs.go:123] Gathering logs for Docker ... I0630 10:47:50.990564 5076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0630 10:47:51.051267 5076 logs.go:123] Gathering logs for container status ... 
I0630 10:47:51.051267 5076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" W0630 10:47:51.088294 5076 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" 
kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. 
Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0630 05:13:49.521219 2998 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0630 10:47:51.088294 5076 out.go:239] * W0630 10:47:51.089915 5076 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet 
connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. 
Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0630 05:13:49.521219 2998 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! 
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0630 10:47:51.090448 5076 out.go:239] * W0630 10:47:51.091855 5076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ * If the above advice does not help, please let us know: │ │ https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ I0630 10:47:51.093959 5076 out.go:177] W0630 10:47:51.095028 5076 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using 
certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. 
Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0630 05:13:49.521219 2998 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0630 10:47:51.095570 5076 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start W0630 10:47:51.096249 5076 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172 I0630 10:47:51.097375 5076 out.go:177] * ! 
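[Editor's note] The run above exits with K8S_KUBELET_NOT_RUNNING: both kubeadm init attempts timed out at wait-control-plane because the kubelet never brought up the static pods. The log's own suggestion is to inspect the kubelet journal and retry with the systemd cgroup driver. A diagnostic sketch following that suggestion (profile name and flags are taken from this log's original invocation; the cgroup-driver flag is the printed suggestion, not a guaranteed fix):

```shell
# Inspect the kubelet inside the minikube node (docker driver), as the
# kubeadm output above recommends:
minikube ssh -- sudo systemctl status kubelet
minikube ssh -- sudo journalctl -xeu kubelet | tail -n 50

# Retry from a clean slate, adding the cgroup driver the log suggests.
# --driver=docker and --force-systemd=true match the original run.
minikube delete
minikube start --driver=docker --force-systemd=true \
  --extra-config=kubelet.cgroup-driver=systemd
```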
Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 2.3455868s
* Restarting the docker service may improve performance.
*
* ==> Docker <==
* -- Logs begin at Thu 2022-06-30 05:09:13 UTC, end at Thu 2022-06-30 05:25:24 UTC. --
Jun 30 05:24:21 minikube dockerd[740]: time="2022-06-30T05:24:21.444256776Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:21 minikube dockerd[740]: time="2022-06-30T05:24:21.444317739Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:21 minikube dockerd[740]: time="2022-06-30T05:24:21.447844752Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:21 minikube dockerd[740]: time="2022-06-30T05:24:21.456523360Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:21 minikube dockerd[740]: time="2022-06-30T05:24:21.456557299Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:21 minikube dockerd[740]: time="2022-06-30T05:24:21.459619007Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:25 minikube dockerd[740]: time="2022-06-30T05:24:25.399328129Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:25 minikube dockerd[740]: time="2022-06-30T05:24:25.399385514Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:25 minikube dockerd[740]: time="2022-06-30T05:24:25.402695903Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:28 minikube dockerd[740]: time="2022-06-30T05:24:28.469754388Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:28 minikube dockerd[740]: time="2022-06-30T05:24:28.469812912Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:28 minikube dockerd[740]: time="2022-06-30T05:24:28.473277960Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:33 minikube dockerd[740]: time="2022-06-30T05:24:33.457234066Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:33 minikube dockerd[740]: time="2022-06-30T05:24:33.457278703Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:33 minikube dockerd[740]: time="2022-06-30T05:24:33.460189365Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:34 minikube dockerd[740]: time="2022-06-30T05:24:34.405826169Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:34 minikube dockerd[740]: time="2022-06-30T05:24:34.405866138Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:34 minikube dockerd[740]: time="2022-06-30T05:24:34.409387594Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:39 minikube dockerd[740]: time="2022-06-30T05:24:39.436171303Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:39 minikube dockerd[740]: time="2022-06-30T05:24:39.436224288Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:39 minikube dockerd[740]: time="2022-06-30T05:24:39.444729482Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:41 minikube dockerd[740]: time="2022-06-30T05:24:41.402763170Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:41 minikube dockerd[740]: time="2022-06-30T05:24:41.402845591Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:41 minikube dockerd[740]: time="2022-06-30T05:24:41.406812801Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:45 minikube dockerd[740]: time="2022-06-30T05:24:45.441253689Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:45 minikube dockerd[740]: time="2022-06-30T05:24:45.441282522Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:45 minikube dockerd[740]: time="2022-06-30T05:24:45.444091888Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:49 minikube dockerd[740]: time="2022-06-30T05:24:49.487754196Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:49 minikube dockerd[740]: time="2022-06-30T05:24:49.487798665Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:49 minikube dockerd[740]: time="2022-06-30T05:24:49.491307228Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:53 minikube dockerd[740]: time="2022-06-30T05:24:53.439136810Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:53 minikube dockerd[740]: time="2022-06-30T05:24:53.439173386Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:53 minikube dockerd[740]: time="2022-06-30T05:24:53.441959680Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:54 minikube dockerd[740]: time="2022-06-30T05:24:54.450092917Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:54 minikube dockerd[740]: time="2022-06-30T05:24:54.450154782Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:24:54 minikube dockerd[740]: time="2022-06-30T05:24:54.453030573Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:00 minikube dockerd[740]: time="2022-06-30T05:25:00.457101778Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:00 minikube dockerd[740]: time="2022-06-30T05:25:00.457194537Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:00 minikube dockerd[740]: time="2022-06-30T05:25:00.460535925Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:01 minikube dockerd[740]: time="2022-06-30T05:25:01.456783174Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:01 minikube dockerd[740]: time="2022-06-30T05:25:01.456823501Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:01 minikube dockerd[740]: time="2022-06-30T05:25:01.460116790Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:08 minikube dockerd[740]: time="2022-06-30T05:25:08.451007034Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:08 minikube dockerd[740]: time="2022-06-30T05:25:08.451057441Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:08 minikube dockerd[740]: time="2022-06-30T05:25:08.453809772Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:08 minikube dockerd[740]: time="2022-06-30T05:25:08.467250022Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:08 minikube dockerd[740]: time="2022-06-30T05:25:08.467315804Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:08 minikube dockerd[740]: time="2022-06-30T05:25:08.469478021Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:11 minikube dockerd[740]: time="2022-06-30T05:25:11.457855110Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:11 minikube dockerd[740]: time="2022-06-30T05:25:11.457885888Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:11 minikube dockerd[740]: time="2022-06-30T05:25:11.461099412Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:13 minikube dockerd[740]: time="2022-06-30T05:25:13.444568527Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:13 minikube dockerd[740]: time="2022-06-30T05:25:13.444622013Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:13 minikube dockerd[740]: time="2022-06-30T05:25:13.448171261Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:21 minikube dockerd[740]: time="2022-06-30T05:25:21.558307409Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:21 minikube dockerd[740]: time="2022-06-30T05:25:21.558344331Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:21 minikube dockerd[740]: time="2022-06-30T05:25:21.561216250Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:24 minikube dockerd[740]: time="2022-06-30T05:25:24.436868319Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:24 minikube dockerd[740]: time="2022-06-30T05:25:24.436908101Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:24 minikube dockerd[740]: time="2022-06-30T05:25:24.439750904Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
*
* ==> container status <==
* CONTAINER  IMAGE  CREATED  STATE  NAME  ATTEMPT  POD ID
*
* ==> describe nodes <==
E0630 10:55:24.726172    5532 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
*
* ==> dmesg <==
* [  +0.000077] init: (1) ERROR: UtilCreateProcessAndWait:501: /bin/mount failed with status 0x
[  +0.000002] ff00
[  +0.000006] init: (1) ERROR: MountPlan9:493: mount cache=mmap,noatime,trans=fd,rfdno=8,wfdno=8,msize=65536,aname=drvfs;path=C:\;uid=0;gid=0;symlinkroot=/mnt/
[Jun30 04:26] WSL2: Performing memory compaction.
[Jun30 04:28] WSL2: Performing memory compaction.
[Jun30 04:29] WSL2: Performing memory compaction.
[Jun30 04:30] WSL2: Performing memory compaction.
[Jun30 04:31] WSL2: Performing memory compaction.
[Jun30 04:32] WSL2: Performing memory compaction.
[Jun30 04:33] WSL2: Performing memory compaction.
[Jun30 04:34] WSL2: Performing memory compaction.
[Jun30 04:35] WSL2: Performing memory compaction.
[Jun30 04:36] WSL2: Performing memory compaction.
[Jun30 04:37] WSL2: Performing memory compaction.
[Jun30 04:38] WSL2: Performing memory compaction.
[Jun30 04:39] WSL2: Performing memory compaction.
[Jun30 04:40] WSL2: Performing memory compaction.
[Jun30 04:41] WSL2: Performing memory compaction.
[Jun30 04:42] WSL2: Performing memory compaction.
[Jun30 04:43] WSL2: Performing memory compaction.
[Jun30 04:44] WSL2: Performing memory compaction.
[Jun30 04:45] WSL2: Performing memory compaction.
[Jun30 04:46] WSL2: Performing memory compaction.
[Jun30 04:47] WSL2: Performing memory compaction.
[Jun30 04:48] WSL2: Performing memory compaction.
[Jun30 04:49] WSL2: Performing memory compaction.
[Jun30 04:50] WSL2: Performing memory compaction.
[Jun30 04:52] WSL2: Performing memory compaction.
[Jun30 04:53] WSL2: Performing memory compaction.
[Jun30 04:54] WSL2: Performing memory compaction.
[Jun30 04:55] WSL2: Performing memory compaction.
[Jun30 04:56] WSL2: Performing memory compaction.
[Jun30 04:57] WSL2: Performing memory compaction.
[Jun30 04:58] WSL2: Performing memory compaction.
[Jun30 04:59] WSL2: Performing memory compaction.
[Jun30 05:00] WSL2: Performing memory compaction.
[Jun30 05:01] WSL2: Performing memory compaction.
[Jun30 05:02] WSL2: Performing memory compaction.
[Jun30 05:03] WSL2: Performing memory compaction.
[Jun30 05:04] WSL2: Performing memory compaction.
[Jun30 05:05] WSL2: Performing memory compaction.
[Jun30 05:06] WSL2: Performing memory compaction.
[Jun30 05:07] WSL2: Performing memory compaction.
[Jun30 05:08] WSL2: Performing memory compaction.
[Jun30 05:09] WSL2: Performing memory compaction.
[Jun30 05:10] WSL2: Performing memory compaction.
[Jun30 05:11] WSL2: Performing memory compaction.
[Jun30 05:12] WSL2: Performing memory compaction.
[Jun30 05:13] WSL2: Performing memory compaction.
[Jun30 05:14] WSL2: Performing memory compaction.
[Jun30 05:15] WSL2: Performing memory compaction.
[Jun30 05:16] WSL2: Performing memory compaction.
[Jun30 05:17] WSL2: Performing memory compaction.
[Jun30 05:18] WSL2: Performing memory compaction.
[Jun30 05:19] WSL2: Performing memory compaction.
[Jun30 05:20] WSL2: Performing memory compaction.
[Jun30 05:21] WSL2: Performing memory compaction.
[Jun30 05:22] WSL2: Performing memory compaction.
[Jun30 05:23] WSL2: Performing memory compaction.
[Jun30 05:24] WSL2: Performing memory compaction.
*
* ==> kernel <==
* 05:25:24 up 59 min,  0 users,  load average: 0.05, 0.04, 0.02
Linux minikube 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kubelet <==
* -- Logs begin at Thu 2022-06-30 05:09:13 UTC, end at Thu 2022-06-30 05:25:24 UTC.
Jun 30 05:25:20 minikube kubelet[3168]: E0630 05:25:20.420832    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:20 minikube kubelet[3168]: E0630 05:25:20.521647    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:20 minikube kubelet[3168]: E0630 05:25:20.622358    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:20 minikube kubelet[3168]: E0630 05:25:20.723600    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:20 minikube kubelet[3168]: E0630 05:25:20.824411    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:20 minikube kubelet[3168]: E0630 05:25:20.925545    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.026519    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.128659    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.229635    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.252116    3168 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.330156    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.430711    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.531073    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.561786    3168 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.561858    3168 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/kube-scheduler-minikube"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.561883    3168 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/kube-scheduler-minikube"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.561945    3168 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-minikube_kube-system(bab0508344d11c6fdb45b1f91c440ff5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-minikube_kube-system(bab0508344d11c6fdb45b1f91c440ff5)\\\": rpc error: code = Unknown desc = failed pulling image \\\"k8s.gcr.io/pause:3.6\\\": Error response from daemon: Get \\\"https://k8s.gcr.io/v2/\\\": x509: certificate signed by unknown authority\"" pod="kube-system/kube-scheduler-minikube" podUID=bab0508344d11c6fdb45b1f91c440ff5
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.631331    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.732289    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.833129    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:21 minikube kubelet[3168]: E0630 05:25:21.937430    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.038740    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.140226    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.241013    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.341964    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.442887    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.543745    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.644939    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.746026    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.814515    3168 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.846777    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:22 minikube kubelet[3168]: E0630 05:25:22.947016    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.047681    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.148909    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.249905    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.351082    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.452088    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.552923    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.653462    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.754694    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.855476    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:23 minikube kubelet[3168]: I0630 05:25:23.873029    3168 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.873576    3168 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Jun 30 05:25:23 minikube kubelet[3168]: E0630 05:25:23.956384    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.056534    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.156971    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.257814    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.358559    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.440259    3168 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.440327    3168 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/kube-apiserver-minikube"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.440351    3168 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/kube-apiserver-minikube"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.440412    3168 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-minikube_kube-system(6580cebb2d04c6c59385cf58e278b0a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-minikube_kube-system(6580cebb2d04c6c59385cf58e278b0a6)\\\": rpc error: code = Unknown desc = failed pulling image \\\"k8s.gcr.io/pause:3.6\\\": Error response from daemon: Get \\\"https://k8s.gcr.io/v2/\\\": x509: certificate signed by unknown authority\"" pod="kube-system/kube-apiserver-minikube" podUID=6580cebb2d04c6c59385cf58e278b0a6
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.459503    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.559738    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.660685    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.706759    3168 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.706837    3168 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/etcd-minikube"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.706883    3168 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": x509: certificate signed by unknown authority" pod="kube-system/etcd-minikube"
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.706954    3168 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-minikube_kube-system(906edd533192a4db2396a938662a5271)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-minikube_kube-system(906edd533192a4db2396a938662a5271)\\\": rpc error: code = Unknown desc = failed pulling image \\\"k8s.gcr.io/pause:3.6\\\": Error response from daemon: Get \\\"https://k8s.gcr.io/v2/\\\": x509: certificate signed by unknown authority\"" pod="kube-system/etcd-minikube" podUID=906edd533192a4db2396a938662a5271
Jun 30 05:25:24 minikube kubelet[3168]: E0630 05:25:24.761035    3168 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
! unable to fetch logs for: describe nodes
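The logs above point at two distinct problems: the kubelet cgroup-driver suggestion emitted by the failed start, and repeated `x509: certificate signed by unknown authority` errors pulling `k8s.gcr.io/pause:3.6` (the `docker info` output shows an HTTP proxy at `http.docker.internal:3128`, which suggests a TLS-intercepting corporate proxy). A possible retry, sketched below; the CA file name `my-proxy-ca.pem` is a placeholder for this environment, while `~/.minikube/certs` plus `--embed-certs` is minikube's documented mechanism for trusting extra CAs, and the `--extra-config` flag is taken verbatim from the Suggestion line in the log:

```shell
# Assumption: the proxy's root CA has been exported to my-proxy-ca.pem.
# minikube picks up extra CA certificates from ~/.minikube/certs when
# started with --embed-certs.
mkdir -p ~/.minikube/certs
cp my-proxy-ca.pem ~/.minikube/certs/

# Recreate the cluster, pinning the kubelet cgroup driver to systemd as
# suggested by the failed start above.
minikube delete --all
minikube start --driver=docker --embed-certs \
  --extra-config=kubelet.cgroup-driver=systemd
```

Whether this resolves the `timed out waiting for the condition` failure depends on the proxy setup; the related issue linked in the log (kubernetes/minikube#4172) tracks the cgroup-driver symptom specifically.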