*
* ==> Audit <==
*
|---------|-----------------------------------|----------|------------------|---------|-------------------------------|-------------------------------|
| Command | Args                              | Profile  | User             | Version | Start Time                    | End Time                      |
|---------|-----------------------------------|----------|------------------|---------|-------------------------------|-------------------------------|
| addons  | list                              | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:20:25 EDT | Tue, 03 May 2022 20:20:26 EDT |
| kubectl | -- get pods -n metallb-system     | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:21:27 EDT | Tue, 03 May 2022 20:21:27 EDT |
| tunnel  |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:16:32 EDT | Tue, 03 May 2022 20:22:34 EDT |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:22:56 EDT | Tue, 03 May 2022 20:22:57 EDT |
| kubectl | -- get -A all -o wide             | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:23:03 EDT | Tue, 03 May 2022 20:23:04 EDT |
| addons  | configure metallb                 | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:22:02 EDT | Tue, 03 May 2022 20:24:19 EDT |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:25:00 EDT | Tue, 03 May 2022 20:25:01 EDT |
| kubectl | -- apply -f mongo-express.yaml    | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:26:23 EDT | Tue, 03 May 2022 20:26:23 EDT |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:26:32 EDT | Tue, 03 May 2022 20:26:33 EDT |
| kubectl | -- delete service                 | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:26:52 EDT | Tue, 03 May 2022 20:26:53 EDT |
|         | mongo-express-service             |          |                  |         |                               |                               |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:26:55 EDT | Tue, 03 May 2022 20:26:56 EDT |
| kubectl | -- apply -f mongo-express.yaml    | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:27:02 EDT | Tue, 03 May 2022 20:27:03 EDT |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:27:06 EDT | Tue, 03 May 2022 20:27:07 EDT |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:28:10 EDT | Tue, 03 May 2022 20:28:11 EDT |
| tunnel  |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:27:44 EDT | Tue, 03 May 2022 20:28:34 EDT |
| kubectl | -- apply -f mongo-express.yaml    | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:30:38 EDT | Tue, 03 May 2022 20:30:39 EDT |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:30:42 EDT | Tue, 03 May 2022 20:30:43 EDT |
| kubectl | -- delete service                 | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:30:54 EDT | Tue, 03 May 2022 20:30:55 EDT |
|         | mongo-express-service             |          |                  |         |                               |                               |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:30:57 EDT | Tue, 03 May 2022 20:30:58 EDT |
| kubectl | -- apply -f mongo-express.yaml    | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:31:04 EDT | Tue, 03 May 2022 20:31:05 EDT |
| kubectl | -- get all -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:31:09 EDT | Tue, 03 May 2022 20:31:10 EDT |
| kubectl | -- describe configmap config      | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:32:56 EDT | Tue, 03 May 2022 20:32:56 EDT |
|         | -n metallb-system                 |          |                  |         |                               |                               |
| kubectl | -- apply -f                       | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:36:37 EDT | Tue, 03 May 2022 20:36:38 EDT |
|         | hello-whale-blue.yaml             |          |                  |         |                               |                               |
| kubectl | -- get svc                        | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:36:51 EDT | Tue, 03 May 2022 20:36:52 EDT |
| kubectl | -- logs -l component=speaker      | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:40:07 EDT | Tue, 03 May 2022 20:40:08 EDT |
|         | -n metallb-system                 |          |                  |         |                               |                               |
| kubectl | -- logs -l component=speaker      | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:41:36 EDT | Tue, 03 May 2022 20:41:37 EDT |
|         | -n metallb-system                 |          |                  |         |                               |                               |
| kubectl | -- logs -l component=speaker      | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 20:41:58 EDT | Tue, 03 May 2022 20:41:59 EDT |
|         | -n metallb-system                 |          |                  |         |                               |                               |
| stop    |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Tue, 03 May 2022 22:09:55 EDT | Tue, 03 May 2022 22:10:09 EDT |
| start   |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:03:08 EDT | Wed, 04 May 2022 09:03:41 EDT |
| kubectl | -- apply -f nginx.yaml            | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:04:05 EDT | Wed, 04 May 2022 09:04:06 EDT |
| tunnel  |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:04:15 EDT | Wed, 04 May 2022 09:05:05 EDT |
| kubectl | -- get svc -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:05:22 EDT | Wed, 04 May 2022 09:05:23 EDT |
| kubectl | -- delete service                 | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:06:32 EDT | Wed, 04 May 2022 09:06:33 EDT |
|         | hello-blue-whale-svc              |          |                  |         |                               |                               |
|         | mongo-express-service             |          |                  |         |                               |                               |
| kubectl | -- get svc -o wide                | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:06:35 EDT | Wed, 04 May 2022 09:06:36 EDT |
| kubectl | -- get service -o wide            | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:07:26 EDT | Wed, 04 May 2022 09:07:27 EDT |
| tunnel  |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:06:56 EDT | Wed, 04 May 2022 09:07:53 EDT |
| kubectl | -- create deployment              | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:08:05 EDT | Wed, 04 May 2022 09:08:06 EDT |
|         | hello-minikube                    |          |                  |         |                               |                               |
|         | --image=k8s.gcr.io/echoserver:1.4 |          |                  |         |                               |                               |
| kubectl | -- expose deployment              | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:08:15 EDT | Wed, 04 May 2022 09:08:15 EDT |
|         | hello-minikube --type=NodePort    |          |                  |         |                               |                               |
|         | --port=8080                       |          |                  |         |                               |                               |
| service | hello-minikube                    | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:08:24 EDT | Wed, 04 May 2022 09:09:23 EDT |
| delete  | --all                             | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:09:41 EDT | Wed, 04 May 2022 09:09:55 EDT |
| start   | --kubernetes-version=latest       | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:10:06 EDT | Wed, 04 May 2022 09:11:28 EDT |
| kubectl | -- create deployment              | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:12:43 EDT | Wed, 04 May 2022 09:12:48 EDT |
|         | hello-minikube                    |          |                  |         |                               |                               |
|         | --image=k8s.gcr.io/echoserver:1.4 |          |                  |         |                               |                               |
| kubectl | -- expose deployment              | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:13:12 EDT | Wed, 04 May 2022 09:13:13 EDT |
|         | hello-minikube --type=NodePort    |          |                  |         |                               |                               |
|         | --port=8080                       |          |                  |         |                               |                               |
| logs    |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:16:13 EDT | Wed, 04 May 2022 09:16:16 EDT |
| logs    |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:17:34 EDT | Wed, 04 May 2022 09:17:38 EDT |
| tunnel  |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:35:42 EDT | Wed, 04 May 2022 09:35:58 EDT |
| --help  |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:36:16 EDT | Wed, 04 May 2022 09:36:16 EDT |
| logs    | --help                            | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:36:32 EDT | Wed, 04 May 2022 09:36:32 EDT |
| service | hello-minikube                    | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:13:21 EDT | Wed, 04 May 2022 09:36:51 EDT |
| service | hello-minikube                    | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:37:05 EDT | Wed, 04 May 2022 09:37:49 EDT |
| stop    |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:37:58 EDT | Wed, 04 May 2022 09:38:13 EDT |
| delete  | --all                             | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:38:20 EDT | Wed, 04 May 2022 09:38:32 EDT |
| delete  | --all                             | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:39:19 EDT | Wed, 04 May 2022 09:39:23 EDT |
| start   |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 09:41:57 EDT | Wed, 04 May 2022 09:42:50 EDT |
| start   |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 11:15:16 EDT | Wed, 04 May 2022 11:15:36 EDT |
| kubectl | -- create deployment              | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 12:30:32 EDT | Wed, 04 May 2022 12:30:33 EDT |
|         | hello-minikube                    |          |                  |         |                               |                               |
|         | --image=k8s.gcr.io/echoserver:1.4 |          |                  |         |                               |                               |
| kubectl | -- expose deployment              | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 12:31:04 EDT | Wed, 04 May 2022 12:31:05 EDT |
|         | hello-minikube --type=NodePort    |          |                  |         |                               |                               |
|         | --port=8080                       |          |                  |         |                               |                               |
| service | hello-minikube                    | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 12:31:36 EDT | Wed, 04 May 2022 12:32:08 EDT |
| logs    |                                   | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 12:32:25 EDT | Wed, 04 May 2022 12:32:29 EDT |
| kubectl | -- get service -o wide            | minikube | MCCARTNEY\anowak | v1.25.2 | Wed, 04 May 2022 12:59:57 EDT | Wed, 04 May 2022 12:59:58 EDT |
|---------|-----------------------------------|----------|------------------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
*
Log file created at: 2022/05/04 11:15:16
Running on machine: McCartney
Binary: Built with gc go1.17.7 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0504 11:15:16.182171 26896 out.go:297] Setting OutFile to fd 844 ...
I0504 11:15:16.182171 26896 out.go:344] TERM=xterm,COLORTERM=, which probably does not support color
I0504 11:15:16.182671 26896 out.go:310] Setting ErrFile to fd 844...
I0504 11:15:16.182671 26896 out.go:344] TERM=xterm,COLORTERM=, which probably does not support color
I0504 11:15:16.193171 26896 out.go:304] Setting JSON to false
I0504 11:15:16.199670 26896 start.go:112] hostinfo: {"hostname":"McCartney","uptime":8296,"bootTime":1651669020,"procs":321,"os":"windows","platform":"Microsoft Windows 10 Pro","platformFamily":"Standalone Workstation","platformVersion":"10.0.19043 Build 19043","kernelVersion":"10.0.19043 Build 19043","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"1b085deb-f969-4a20-869b-ad9b2f07b045"}
W0504 11:15:16.200171 26896 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0504 11:15:16.201671 26896 out.go:176] * minikube v1.25.2 on Microsoft Windows 10 Pro 10.0.19043 Build 19043
I0504 11:15:16.201671 26896 notify.go:193] Checking for updates...
I0504 11:15:16.206171 26896 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3 I0504 11:15:16.206171 26896 driver.go:344] Setting default libvirt URI to qemu:///system I0504 11:15:18.482610 26896 docker.go:132] docker version: linux-20.10.14 I0504 11:15:18.489635 26896 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0504 11:15:20.575522 26896 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.0858864s) I0504 11:15:20.575522 26896 info.go:263] docker info: {ID:LUUE:PIW2:NNFH:RAHA:33FT:JK5N:DK7Z:LXQ7:BDTN:C34J:UTRV:WA3Q Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:33 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-04 15:15:19.0439828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26687606784 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.4.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. 
Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}} I0504 11:15:20.578018 26896 out.go:176] * Using the docker driver based on existing profile I0504 11:15:20.578018 26896 start.go:281] selected driver: docker I0504 11:15:20.578018 26896 start.go:798] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\anowak:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0504 11:15:20.578464 26896 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0504 11:15:20.595627 26896 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0504 11:15:21.722740 26896 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (1.1271135s) I0504 11:15:21.722740 26896 info.go:263] docker info: {ID:LUUE:PIW2:NNFH:RAHA:33FT:JK5N:DK7Z:LXQ7:BDTN:C34J:UTRV:WA3Q Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:33 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-04 15:15:21.1534018 +0000 UTC 
LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26687606784 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.4.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.17.0]] Warnings:}} I0504 11:15:21.748100 26896 cni.go:93] Creating CNI manager for "" I0504 11:15:21.748100 26896 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0504 11:15:21.748100 26896 start_flags.go:302] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\anowak:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0504 11:15:21.749100 26896 out.go:176] * Starting control plane node minikube in cluster minikube I0504 11:15:21.749746 26896 cache.go:120] Beginning downloading kic base image for docker with docker I0504 11:15:21.751100 26896 out.go:176] * Pulling base image ... I0504 11:15:21.751600 26896 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker I0504 11:15:21.751600 26896 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0504 11:15:21.751600 26896 preload.go:148] Found local preload: C:\Users\anowak\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 I0504 11:15:21.751600 26896 cache.go:57] Caching tarball of preloaded images I0504 11:15:21.752600 26896 preload.go:174] Found C:\Users\anowak\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0504 11:15:21.753118 26896 cache.go:60] Finished verifying existence of preloaded tar for v1.23.3 on docker I0504 11:15:21.753118 26896 profile.go:148] Saving config to C:\Users\anowak\.minikube\profiles\minikube\config.json ... 
I0504 11:15:22.353648 26896 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0504 11:15:22.353648 26896 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0504 11:15:22.353648 26896 cache.go:208] Successfully downloaded all kic artifacts I0504 11:15:22.354136 26896 start.go:313] acquiring machines lock for minikube: {Name:mk4b33aceced6958d066114370da0faab4c18e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0504 11:15:22.354136 26896 start.go:317] acquired machines lock for "minikube" in 0s I0504 11:15:22.354136 26896 start.go:93] Skipping create...Using existing machine configuration I0504 11:15:22.354136 26896 fix.go:55] fixHost starting: I0504 11:15:22.368602 26896 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} I0504 11:15:22.946180 26896 fix.go:108] recreateIfNeeded on minikube: state=Running err= W0504 11:15:22.946180 26896 fix.go:134] unexpected machine state, will restart: I0504 11:15:22.947879 26896 out.go:176] * Updating the running docker "minikube" container ... I0504 11:15:22.947879 26896 machine.go:88] provisioning docker machine ... I0504 11:15:22.947879 26896 ubuntu.go:169] provisioning hostname "minikube" I0504 11:15:22.954706 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:23.541283 26896 main.go:130] libmachine: Using SSH client type: native I0504 11:15:23.545074 26896 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xfcbfa0] 0xfcee60 [] 0s} 127.0.0.1 61886 } I0504 11:15:23.545074 26896 main.go:130] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0504 11:15:23.642633 26896 main.go:130] libmachine: SSH cmd err, output: : minikube I0504 11:15:23.650154 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:24.239209 26896 main.go:130] libmachine: Using SSH client type: native I0504 11:15:24.239209 26896 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xfcbfa0] 0xfcee60 [] 0s} 127.0.0.1 61886 } I0504 11:15:24.239209 26896 main.go:130] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0504 11:15:24.368585 26896 main.go:130] libmachine: SSH cmd err, output: : I0504 11:15:24.368585 26896 ubuntu.go:175] set auth options {CertDir:C:\Users\anowak\.minikube CaCertPath:C:\Users\anowak\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\anowak\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\anowak\.minikube\machines\server.pem ServerKeyPath:C:\Users\anowak\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\anowak\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\anowak\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\anowak\.minikube} I0504 11:15:24.368585 26896 ubuntu.go:177] setting up certificates I0504 11:15:24.368585 26896 provision.go:83] configureAuth start I0504 11:15:24.381667 26896 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0504 11:15:24.962848 26896 provision.go:138] copyHostCerts I0504 11:15:24.966199 26896 exec_runner.go:144] found C:\Users\anowak\.minikube/ca.pem, removing ... I0504 11:15:24.966199 26896 exec_runner.go:207] rm: C:\Users\anowak\.minikube\ca.pem I0504 11:15:24.966199 26896 exec_runner.go:151] cp: C:\Users\anowak\.minikube\certs\ca.pem --> C:\Users\anowak\.minikube/ca.pem (1078 bytes) I0504 11:15:24.970200 26896 exec_runner.go:144] found C:\Users\anowak\.minikube/cert.pem, removing ... I0504 11:15:24.970200 26896 exec_runner.go:207] rm: C:\Users\anowak\.minikube\cert.pem I0504 11:15:24.970669 26896 exec_runner.go:151] cp: C:\Users\anowak\.minikube\certs\cert.pem --> C:\Users\anowak\.minikube/cert.pem (1123 bytes) I0504 11:15:24.974670 26896 exec_runner.go:144] found C:\Users\anowak\.minikube/key.pem, removing ... 
I0504 11:15:24.974670 26896 exec_runner.go:207] rm: C:\Users\anowak\.minikube\key.pem I0504 11:15:24.974670 26896 exec_runner.go:151] cp: C:\Users\anowak\.minikube\certs\key.pem --> C:\Users\anowak\.minikube/key.pem (1679 bytes) I0504 11:15:24.974670 26896 provision.go:112] generating server cert: C:\Users\anowak\.minikube\machines\server.pem ca-key=C:\Users\anowak\.minikube\certs\ca.pem private-key=C:\Users\anowak\.minikube\certs\ca-key.pem org=anowak.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0504 11:15:25.106412 26896 provision.go:172] copyRemoteCerts I0504 11:15:25.116911 26896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0504 11:15:25.123412 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:25.703086 26896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61886 SSHKeyPath:C:\Users\anowak\.minikube\machines\minikube\id_rsa Username:docker} I0504 11:15:25.805330 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes) I0504 11:15:25.835334 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes) I0504 11:15:25.865877 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0504 11:15:25.895103 26896 provision.go:86] duration metric: configureAuth took 1.5265179s I0504 11:15:25.895103 26896 ubuntu.go:193] setting minikube options for container-runtime I0504 11:15:25.895604 26896 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3 I0504 11:15:25.908638 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:26.493919 26896 main.go:130] libmachine: Using SSH client type: native I0504 11:15:26.493919 26896 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xfcbfa0] 0xfcee60 [] 0s} 127.0.0.1 61886 } I0504 11:15:26.493919 26896 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0504 11:15:26.576274 26896 main.go:130] libmachine: SSH cmd err, output: : overlay I0504 11:15:26.576274 26896 ubuntu.go:71] root file system type: overlay I0504 11:15:26.576274 26896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0504 11:15:26.583759 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:27.162895 26896 main.go:130] libmachine: Using SSH client type: native I0504 11:15:27.162895 26896 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xfcbfa0] 0xfcee60 [] 0s} 127.0.0.1 61886 } I0504 11:15:27.163399 26896 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. 
The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0504 11:15:27.302714 26896 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0504 11:15:27.309738 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:27.921734 26896 main.go:130] libmachine: Using SSH client type: native I0504 11:15:27.922208 26896 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xfcbfa0] 0xfcee60 [] 0s} 127.0.0.1 61886 } I0504 11:15:27.922208 26896 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0504 11:15:28.049583 26896 main.go:130] libmachine: SSH cmd err, output: : I0504 11:15:28.049583 26896 machine.go:91] provisioned docker machine in 5.1017043s I0504 11:15:28.049583 26896 start.go:267] post-start starting for "minikube" (driver="docker") I0504 11:15:28.049583 26896 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0504 11:15:28.060582 26896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0504 11:15:28.067585 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:28.668837 26896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61886 SSHKeyPath:C:\Users\anowak\.minikube\machines\minikube\id_rsa Username:docker} I0504 11:15:28.715239 26896 ssh_runner.go:195] Run: cat /etc/os-release I0504 11:15:28.718238 26896 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0504 11:15:28.718238 26896 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0504 11:15:28.718238 26896 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0504 11:15:28.718238 26896 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0504 11:15:28.718238 26896 filesync.go:126] Scanning C:\Users\anowak\.minikube\addons for local assets ... I0504 11:15:28.718739 26896 filesync.go:126] Scanning C:\Users\anowak\.minikube\files for local assets ... 
I0504 11:15:28.718739 26896 start.go:270] post-start completed in 669.156ms I0504 11:15:28.719238 26896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0504 11:15:28.726238 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:29.348279 26896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61886 SSHKeyPath:C:\Users\anowak\.minikube\machines\minikube\id_rsa Username:docker} I0504 11:15:29.390737 26896 fix.go:57] fixHost completed within 7.0366013s I0504 11:15:29.390737 26896 start.go:80] releasing machines lock for "minikube", held for 7.0366013s I0504 11:15:29.398266 26896 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0504 11:15:30.016738 26896 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0504 11:15:30.024737 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:30.028739 26896 ssh_runner.go:195] Run: systemctl --version I0504 11:15:30.036834 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:30.646313 26896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61886 SSHKeyPath:C:\Users\anowak\.minikube\machines\minikube\id_rsa Username:docker} I0504 11:15:30.657765 26896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61886 SSHKeyPath:C:\Users\anowak\.minikube\machines\minikube\id_rsa Username:docker} I0504 11:15:30.996885 26896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0504 11:15:31.019680 26896 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0504 11:15:31.029679 26896 cruntime.go:272] skipping containerd shutdown because we are bound to it I0504 11:15:31.041179 26896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0504 11:15:31.050504 26896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0504 11:15:31.073530 26896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0504 11:15:31.193696 26896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0504 11:15:31.307697 26896 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0504 11:15:31.328697 26896 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0504 11:15:31.430097 26896 ssh_runner.go:195] Run: sudo systemctl start docker I0504 11:15:31.446549 26896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0504 11:15:31.486838 26896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0504 11:15:31.519646 26896 out.go:203] * Preparing Kubernetes v1.23.3 on Docker 20.10.12 ... 
I0504 11:15:31.526646 26896 cli_runner.go:133] Run: docker exec -t minikube dig +short host.docker.internal I0504 11:15:32.243137 26896 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2 I0504 11:15:32.243668 26896 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts I0504 11:15:32.254659 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube I0504 11:15:32.869235 26896 out.go:176] - kubelet.housekeeping-interval=5m I0504 11:15:32.870234 26896 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker I0504 11:15:32.877262 26896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0504 11:15:32.904737 26896 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.3 k8s.gcr.io/kube-proxy:v1.23.3 k8s.gcr.io/kube-controller-manager:v1.23.3 k8s.gcr.io/kube-scheduler:v1.23.3 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0504 11:15:32.904737 26896 docker.go:537] Images already preloaded, skipping extraction I0504 11:15:32.912236 26896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0504 11:15:32.938236 26896 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.3 k8s.gcr.io/kube-proxy:v1.23.3 k8s.gcr.io/kube-scheduler:v1.23.3 k8s.gcr.io/kube-controller-manager:v1.23.3 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0504 11:15:32.938236 26896 cache_images.go:84] Images are preloaded, skipping loading I0504 11:15:32.947768 26896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0504 11:15:33.015688 26896 cni.go:93] Creating CNI manager for "" I0504 11:15:33.015688 26896 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0504 11:15:33.015688 26896 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0504 11:15:33.015688 26896 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0504 11:15:33.015688 26896 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - 
system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.3 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0504 11:15:33.015688 26896 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0504 11:15:33.027188 26896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3 I0504 11:15:33.035189 26896 binaries.go:44] Found k8s binaries, skipping transfer I0504 11:15:33.046188 26896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0504 11:15:33.054658 26896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes) I0504 11:15:33.065659 26896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0504 11:15:33.077659 26896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes) I0504 11:15:33.090158 26896 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0504 11:15:33.093658 26896 certs.go:54] Setting up 
C:\Users\anowak\.minikube\profiles\minikube for IP: 192.168.49.2 I0504 11:15:33.097662 26896 certs.go:182] skipping minikubeCA CA generation: C:\Users\anowak\.minikube\ca.key I0504 11:15:33.103159 26896 certs.go:182] skipping proxyClientCA CA generation: C:\Users\anowak\.minikube\proxy-client-ca.key I0504 11:15:33.103659 26896 certs.go:298] skipping minikube-user signed cert generation: C:\Users\anowak\.minikube\profiles\minikube\client.key I0504 11:15:33.109659 26896 certs.go:298] skipping minikube signed cert generation: C:\Users\anowak\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 I0504 11:15:33.115465 26896 certs.go:298] skipping aggregator signed cert generation: C:\Users\anowak\.minikube\profiles\minikube\proxy-client.key I0504 11:15:33.117310 26896 certs.go:388] found cert: C:\Users\anowak\.minikube\certs\C:\Users\anowak\.minikube\certs\ca-key.pem (1679 bytes) I0504 11:15:33.117310 26896 certs.go:388] found cert: C:\Users\anowak\.minikube\certs\C:\Users\anowak\.minikube\certs\ca.pem (1078 bytes) I0504 11:15:33.117310 26896 certs.go:388] found cert: C:\Users\anowak\.minikube\certs\C:\Users\anowak\.minikube\certs\cert.pem (1123 bytes) I0504 11:15:33.117310 26896 certs.go:388] found cert: C:\Users\anowak\.minikube\certs\C:\Users\anowak\.minikube\certs\key.pem (1679 bytes) I0504 11:15:33.117808 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0504 11:15:33.135819 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0504 11:15:33.152820 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0504 11:15:33.169194 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0504 11:15:33.185636 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0504 11:15:33.202138 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0504 11:15:33.219639 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0504 11:15:33.237638 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0504 11:15:33.253138 26896 ssh_runner.go:362] scp C:\Users\anowak\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0504 11:15:33.269673 26896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0504 11:15:33.282089 26896 ssh_runner.go:195] Run: openssl version I0504 11:15:33.297614 26896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0504 11:15:33.306658 26896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0504 11:15:33.309664 26896 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 24 21:27 /usr/share/ca-certificates/minikubeCA.pem I0504 11:15:33.310158 26896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0504 11:15:33.325659 26896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs 
/etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0504 11:15:33.333826 26896 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\anowak:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0504 11:15:33.341270 26896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0504 11:15:33.376108 26896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0504 11:15:33.384080 26896 kubeadm.go:402] found existing configuration files, will attempt cluster restart I0504 11:15:33.384080 26896 kubeadm.go:601] restartCluster start I0504 11:15:33.395616 26896 ssh_runner.go:195] Run: sudo test -d /data/minikube I0504 11:15:33.403580 26896 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0504 11:15:33.411079 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube I0504 11:15:34.017974 26896 kubeconfig.go:92] found "minikube" server: "https://127.0.0.1:61890" I0504 11:15:34.030508 26896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0504 11:15:34.038972 26896 api_server.go:165] Checking apiserver status ... 
I0504 11:15:34.050008 26896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0504 11:15:34.076004 26896 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1718/cgroup I0504 11:15:34.084043 26896 api_server.go:181] apiserver freezer: "20:freezer:/docker/16773aa35a1803e5057b81c38bfd0dca46e48692f1b82fcd793523861c8db4ca/kubepods/burstable/podcd6e47233d36a9715b0ab9632f871843/f26dc45d31bc31f34590f4a283fcfb26e621b59e91ae442d77f4028692911494" I0504 11:15:34.096042 26896 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/16773aa35a1803e5057b81c38bfd0dca46e48692f1b82fcd793523861c8db4ca/kubepods/burstable/podcd6e47233d36a9715b0ab9632f871843/f26dc45d31bc31f34590f4a283fcfb26e621b59e91ae442d77f4028692911494/freezer.state I0504 11:15:34.103725 26896 api_server.go:203] freezer state: "THAWED" I0504 11:15:34.104198 26896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61890/healthz ... I0504 11:15:34.109697 26896 api_server.go:266] https://127.0.0.1:61890/healthz returned 200: ok I0504 11:15:34.119723 26896 system_pods.go:86] 7 kube-system pods found I0504 11:15:34.119723 26896 system_pods.go:89] "coredns-64897985d-chc5l" [b3ac62d6-970a-4dc7-9648-8f5f9173f28c] Running I0504 11:15:34.119723 26896 system_pods.go:89] "etcd-minikube" [469b0eb6-28d5-4d08-9292-60efdc68f6da] Running I0504 11:15:34.119723 26896 system_pods.go:89] "kube-apiserver-minikube" [27ba0317-05b5-4054-a118-55452ac70857] Running I0504 11:15:34.119723 26896 system_pods.go:89] "kube-controller-manager-minikube" [bdbad043-8612-432e-a041-675c156c5749] Running I0504 11:15:34.119723 26896 system_pods.go:89] "kube-proxy-wm9kq" [a34f43fe-ceae-4e08-b9b6-498ae628e00c] Running I0504 11:15:34.119723 26896 system_pods.go:89] "kube-scheduler-minikube" [ad82544f-1f8b-4132-8a93-315b0569f19e] Running I0504 11:15:34.119723 26896 system_pods.go:89] "storage-provisioner" [cf01efd3-bc4a-4955-bdf1-4a72560a99e4] Running I0504 11:15:34.121197 26896 api_server.go:140] control plane version: v1.23.3 I0504 11:15:34.121197 26896 kubeadm.go:595] The running cluster does not require reconfiguration: 127.0.0.1 I0504 11:15:34.121197 26896 kubeadm.go:649] Taking a shortcut, as the cluster seems to be properly configured I0504 11:15:34.121197 26896 kubeadm.go:605] restartCluster took 737.1178ms I0504 11:15:34.121197 26896 kubeadm.go:393] StartCluster complete in 787.3718ms I0504 11:15:34.121197 26896 settings.go:142] acquiring lock: {Name:mk07bba5fdc890d7d27c4fa1054f07df3d02fa53 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0504 11:15:34.121197 26896 settings.go:150] Updating kubeconfig: C:\Users\anowak\.kube\config I0504 11:15:34.122730 26896 lock.go:35] WriteFile acquiring C:\Users\anowak\.kube\config: {Name:mke188224f0d40ef95f9ea34cf2f8892377c168f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0504 11:15:34.130697 26896 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1 I0504 11:15:34.130697 26896 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0504 11:15:34.130697 26896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0504 11:15:34.132698 26896 out.go:176] * Verifying Kubernetes components... 
I0504 11:15:34.130697 26896 addons.go:415] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[] I0504 11:15:34.131196 26896 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3 I0504 11:15:34.132698 26896 addons.go:65] Setting storage-provisioner=true in profile "minikube" I0504 11:15:34.132698 26896 addons.go:153] Setting addon storage-provisioner=true in "minikube" W0504 11:15:34.132698 26896 addons.go:165] addon storage-provisioner should already be in state true I0504 11:15:34.132698 26896 addons.go:65] Setting default-storageclass=true in profile "minikube" I0504 11:15:34.132698 26896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0504 11:15:34.133198 26896 host.go:66] Checking if "minikube" exists ... I0504 11:15:34.146701 26896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0504 11:15:34.151199 26896 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} I0504 11:15:34.151199 26896 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} I0504 11:15:34.365196 26896 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping... I0504 11:15:34.372698 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube I0504 11:15:34.783224 26896 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0504 11:15:34.783224 26896 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0504 11:15:34.783224 26896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0504 11:15:34.788698 26896 addons.go:153] Setting addon default-storageclass=true in "minikube" W0504 11:15:34.788698 26896 addons.go:165] addon default-storageclass should already be in state true I0504 11:15:34.789197 26896 host.go:66] Checking if "minikube" exists ... I0504 11:15:34.790724 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:34.804699 26896 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} I0504 11:15:35.016732 26896 api_server.go:51] waiting for apiserver process to appear ... I0504 11:15:35.027241 26896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0504 11:15:35.043198 26896 api_server.go:71] duration metric: took 912.5008ms to wait for apiserver process to appear ... I0504 11:15:35.043198 26896 api_server.go:87] waiting for apiserver healthz status ... I0504 11:15:35.043198 26896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61890/healthz ... I0504 11:15:35.050197 26896 api_server.go:266] https://127.0.0.1:61890/healthz returned 200: ok I0504 11:15:35.051698 26896 api_server.go:140] control plane version: v1.23.3 I0504 11:15:35.051698 26896 api_server.go:130] duration metric: took 8.5002ms to wait for apiserver health ... I0504 11:15:35.051698 26896 system_pods.go:43] waiting for kube-system pods to appear ... 
I0504 11:15:35.055732 26896 system_pods.go:59] 7 kube-system pods found I0504 11:15:35.055732 26896 system_pods.go:61] "coredns-64897985d-chc5l" [b3ac62d6-970a-4dc7-9648-8f5f9173f28c] Running I0504 11:15:35.055732 26896 system_pods.go:61] "etcd-minikube" [469b0eb6-28d5-4d08-9292-60efdc68f6da] Running I0504 11:15:35.055732 26896 system_pods.go:61] "kube-apiserver-minikube" [27ba0317-05b5-4054-a118-55452ac70857] Running I0504 11:15:35.055732 26896 system_pods.go:61] "kube-controller-manager-minikube" [bdbad043-8612-432e-a041-675c156c5749] Running I0504 11:15:35.055732 26896 system_pods.go:61] "kube-proxy-wm9kq" [a34f43fe-ceae-4e08-b9b6-498ae628e00c] Running I0504 11:15:35.055732 26896 system_pods.go:61] "kube-scheduler-minikube" [ad82544f-1f8b-4132-8a93-315b0569f19e] Running I0504 11:15:35.055732 26896 system_pods.go:61] "storage-provisioner" [cf01efd3-bc4a-4955-bdf1-4a72560a99e4] Running I0504 11:15:35.055732 26896 system_pods.go:74] duration metric: took 4.0339ms to wait for pod list to return data ... I0504 11:15:35.055732 26896 kubeadm.go:548] duration metric: took 925.0349ms to wait for : map[apiserver:true system_pods:true] ... I0504 11:15:35.055732 26896 node_conditions.go:102] verifying NodePressure condition ... I0504 11:15:35.058198 26896 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki I0504 11:15:35.058198 26896 node_conditions.go:123] node cpu capacity is 16 I0504 11:15:35.058198 26896 node_conditions.go:105] duration metric: took 2.4659ms to run NodePressure ... I0504 11:15:35.058198 26896 start.go:213] waiting for startup goroutines ... I0504 11:15:35.420700 26896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61886 SSHKeyPath:C:\Users\anowak\.minikube\machines\minikube\id_rsa Username:docker} I0504 11:15:35.426698 26896 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0504 11:15:35.426698 26896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0504 11:15:35.434199 26896 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0504 11:15:35.482198 26896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0504 11:15:36.054239 26896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61886 SSHKeyPath:C:\Users\anowak\.minikube\machines\minikube\id_rsa Username:docker} I0504 11:15:36.115695 26896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0504 11:15:36.203195 26896 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0504 11:15:36.203195 26896 addons.go:417] enableAddons completed in 2.0724978s I0504 11:15:36.681236 26896 start.go:496] kubectl: 1.22.5, cluster: 1.23.3 (minor skew: 1) I0504 11:15:36.682196 26896 out.go:176] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default * * ==> Docker <== * -- Logs begin at Wed 2022-05-04 13:42:21 UTC, end at Wed 2022-05-04 17:15:13 UTC. -- May 04 13:42:21 minikube systemd[1]: Starting Docker Application Container Engine... 
May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.323349500Z" level=info msg="Starting up" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.324586500Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.324615000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.324633000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.324640600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.325719400Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.325746300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.325758500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.325765900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.335873600Z" level=info msg="[graphdriver] using prior storage driver: overlay2" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.362285500Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.362316400Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.362323700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.362327400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.362331300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.362334600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.362471800Z" level=info msg="Loading containers: start." May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.420070200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.452089300Z" level=info msg="Loading containers: done." May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.465660800Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.465766100Z" level=info msg="Daemon has completed initialization" May 04 13:42:21 minikube systemd[1]: Started Docker Application Container Engine. 
May 04 13:42:21 minikube dockerd[210]: time="2022-05-04T13:42:21.505650100Z" level=info msg="API listen on /run/docker.sock" May 04 13:42:30 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. May 04 13:42:31 minikube systemd[1]: Stopping Docker Application Container Engine... May 04 13:42:31 minikube dockerd[210]: time="2022-05-04T13:42:31.168470200Z" level=info msg="Processing signal 'terminated'" May 04 13:42:31 minikube dockerd[210]: time="2022-05-04T13:42:31.169186900Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby May 04 13:42:31 minikube dockerd[210]: time="2022-05-04T13:42:31.169558200Z" level=info msg="Daemon shutdown complete" May 04 13:42:31 minikube systemd[1]: docker.service: Succeeded. May 04 13:42:31 minikube systemd[1]: Stopped Docker Application Container Engine. May 04 13:42:31 minikube systemd[1]: Starting Docker Application Container Engine... May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.204453500Z" level=info msg="Starting up" May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.206509900Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.206534800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.206551800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.206558500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.207695400Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.207745100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.207802300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.207845500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.216999800Z" level=info msg="[graphdriver] using prior storage driver: overlay2" May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.224628900Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.224653200Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.224660300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.224673900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.224677500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.224680800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" May 04 13:42:31 minikube 
dockerd[476]: time="2022-05-04T13:42:31.224827800Z" level=info msg="Loading containers: start."
May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.283910700Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.309372700Z" level=info msg="Loading containers: done."
May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.323256900Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.323313400Z" level=info msg="Daemon has completed initialization"
May 04 13:42:31 minikube systemd[1]: Started Docker Application Container Engine.
May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.342793200Z" level=info msg="API listen on [::]:2376"
May 04 13:42:31 minikube dockerd[476]: time="2022-05-04T13:42:31.345362100Z" level=info msg="API listen on /var/run/docker.sock"
May 04 16:30:35 minikube dockerd[476]: time="2022-05-04T16:30:35.454540900Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
May 04 16:30:35 minikube dockerd[476]: time="2022-05-04T16:30:35.694628100Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
*
* ==> container status <==
*
CONTAINER       IMAGE                                                                                            CREATED          STATE    NAME                        ATTEMPT  POD ID
1e4c83f46ab2a   k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb    44 minutes ago   Running  echoserver                  0        4d8438eb3f601
5aa50d464642f   7801cfc6d5c07                                                                                    45 minutes ago   Running  dashboard-metrics-scraper   0        f4f4187697f1c
8ba775eddb72f   e1482a24335a6                                                                                    45 minutes ago   Running  kubernetes-dashboard        0        2418c498942f7
08462ed50b1f2   6e38f40d628db                                                                                    4 hours ago      Running  storage-provisioner         0        61727173876ce
a6c9a027fd48d   a4ca41631cc7a                                                                                    4 hours ago      Running  coredns                     0        94bfde48758ba
14c48a985242d   9b7cc99821098                                                                                    4 hours ago      Running  kube-proxy                  0        345d1b5b1e86d
e7c5921c99d7a   b07520cd7ab76                                                                                    4 hours ago      Running  kube-controller-manager     0        2c0d9982a5559
4891658d1211c   25f8c7f3da61c                                                                                    4 hours ago      Running  etcd                        0        1e94330145620
f26dc45d31bc3   f40be0088a83e                                                                                    4 hours ago      Running  kube-apiserver              0        4e65c71e5e396
38436ca4e1271   99a3486be4f28                                                                                    4 hours ago      Running  kube-scheduler              0        c37749ece5658
*
* ==> coredns [a6c9a027fd48] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_05_04T09_42_47_0700
                    minikube.k8s.io/version=v1.25.2
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 04 May 2022 13:42:44 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Wed, 04 May 2022 17:15:08 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 04 May 2022 17:11:47 +0000   Wed, 04 May 2022 13:42:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 04 May 2022 17:11:47 +0000   Wed, 04 May 2022 13:42:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 04 May 2022 17:11:47 +0000   Wed, 04 May 2022 13:42:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 04 May 2022 17:11:47 +0000   Wed, 04 May 2022 13:42:58 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             26062116Ki
  pods:               110
Allocatable:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             26062116Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                b6a262faae404a5db719705fd34b5c8b
  Boot ID:                    59e86b22-a23e-44cf-8f5c-2b6556ea197c
  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.3
  Kube-Proxy Version:         v1.23.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (10 in total)
  Namespace              Name                                        CPU Requests         CPU Limits        Memory Requests       Memory Limits         Age
  ---------              ----                                        ------------         ----------        ---------------       -------------         ---
  default                hello-minikube-7bc9d7884c-cmznw             0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      44m
  kube-system            coredns-64897985d-chc5l                     100m (0%!)(MISSING)  0 (0%!)(MISSING)  70Mi (0%!)(MISSING)   170Mi (0%!)(MISSING)  3h32m
  kube-system            etcd-minikube                               100m (0%!)(MISSING)  0 (0%!)(MISSING)  100Mi (0%!)(MISSING)  0 (0%!)(MISSING)      3h32m
  kube-system            kube-apiserver-minikube                     250m (1%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      3h32m
  kube-system            kube-controller-manager-minikube            200m (1%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      3h32m
  kube-system            kube-proxy-wm9kq                            0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      3h32m
  kube-system            kube-scheduler-minikube                     100m (0%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      3h32m
  kube-system            storage-provisioner                         0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      3h32m
  kubernetes-dashboard   dashboard-metrics-scraper-58549894f-6p6tz   0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      45m
  kubernetes-dashboard   kubernetes-dashboard-ccd587f44-9tcmb        0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      45m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (4%!)(MISSING)   0 (0%!)(MISSING)
  memory             170Mi (0%!)(MISSING)  170Mi (0%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type     Reason                                              Age   From        Message
  ----     ------                                              ----  ----        -------
  Warning  listen tcp4 :32473: bind: address already in use    44m   kube-proxy  can't open port "nodePort for default/hello-minikube" (:32473/tcp4), skipping it
*
* ==> dmesg <==
*
[May 4 16:14] WSL2: Performing memory compaction.
[May 4 16:15] WSL2: Performing memory compaction.
[May 4 16:16] WSL2: Performing memory compaction.
[May 4 16:17] WSL2: Performing memory compaction.
[May 4 16:18] WSL2: Performing memory compaction.
[May 4 16:19] WSL2: Performing memory compaction.
[May 4 16:20] WSL2: Performing memory compaction.
[May 4 16:21] WSL2: Performing memory compaction.
[May 4 16:22] WSL2: Performing memory compaction.
[May 4 16:23] WSL2: Performing memory compaction.
[May 4 16:24] WSL2: Performing memory compaction.
[May 4 16:25] WSL2: Performing memory compaction.
[May 4 16:26] WSL2: Performing memory compaction.
[May 4 16:27] WSL2: Performing memory compaction.
[May 4 16:28] WSL2: Performing memory compaction.
[May 4 16:29] WSL2: Performing memory compaction.
[May 4 16:30] WSL2: Performing memory compaction.
[May 4 16:31] WSL2: Performing memory compaction.
[May 4 16:32] WSL2: Performing memory compaction.
[May 4 16:33] WSL2: Performing memory compaction.
[May 4 16:34] WSL2: Performing memory compaction.
[May 4 16:35] WSL2: Performing memory compaction.
[May 4 16:36] WSL2: Performing memory compaction.
[May 4 16:37] WSL2: Performing memory compaction.
[May 4 16:38] WSL2: Performing memory compaction.
[May 4 16:39] WSL2: Performing memory compaction.
[May 4 16:41] WSL2: Performing memory compaction.
[May 4 16:42] WSL2: Performing memory compaction.
[May 4 16:43] WSL2: Performing memory compaction.
[May 4 16:44] WSL2: Performing memory compaction.
[May 4 16:45] WSL2: Performing memory compaction.
[May 4 16:46] WSL2: Performing memory compaction.
[May 4 16:47] WSL2: Performing memory compaction.
[May 4 16:48] WSL2: Performing memory compaction.
[May 4 16:49] WSL2: Performing memory compaction.
[May 4 16:50] WSL2: Performing memory compaction.
[May 4 16:51] WSL2: Performing memory compaction.
[May 4 16:52] WSL2: Performing memory compaction.
[May 4 16:53] WSL2: Performing memory compaction.
[May 4 16:54] WSL2: Performing memory compaction.
[May 4 16:55] WSL2: Performing memory compaction.
[May 4 16:56] WSL2: Performing memory compaction.
[May 4 16:57] WSL2: Performing memory compaction.
[May 4 16:58] WSL2: Performing memory compaction.
[May 4 16:59] WSL2: Performing memory compaction.
[May 4 17:00] WSL2: Performing memory compaction.
[May 4 17:01] WSL2: Performing memory compaction.
[May 4 17:02] WSL2: Performing memory compaction.
[May 4 17:03] WSL2: Performing memory compaction.
[May 4 17:04] WSL2: Performing memory compaction.
[May 4 17:05] WSL2: Performing memory compaction.
[May 4 17:06] WSL2: Performing memory compaction.
[May 4 17:07] WSL2: Performing memory compaction.
[May 4 17:08] WSL2: Performing memory compaction.
[May 4 17:09] WSL2: Performing memory compaction.
[May 4 17:10] WSL2: Performing memory compaction.
[May 4 17:11] WSL2: Performing memory compaction.
[May 4 17:12] WSL2: Performing memory compaction.
[May 4 17:13] WSL2: Performing memory compaction.
[May 4 17:14] WSL2: Performing memory compaction.
* * ==> etcd [4891658d1211] <== * {"level":"info","ts":"2022-05-04T15:12:43.255Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":4040,"took":"328.6µs"} {"level":"info","ts":"2022-05-04T15:17:43.260Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":4251} {"level":"info","ts":"2022-05-04T15:17:43.261Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":4251,"took":"389.4µs"} {"level":"info","ts":"2022-05-04T15:22:43.267Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":4463} {"level":"info","ts":"2022-05-04T15:22:43.267Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":4463,"took":"350.7µs"} {"level":"info","ts":"2022-05-04T15:27:43.273Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":4673} {"level":"info","ts":"2022-05-04T15:27:43.273Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":4673,"took":"454.1µs"} {"level":"info","ts":"2022-05-04T15:32:43.280Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":4884} {"level":"info","ts":"2022-05-04T15:32:43.280Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":4884,"took":"435.7µs"} {"level":"warn","ts":"2022-05-04T15:33:05.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"193.0714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-05-04T15:33:05.785Z","caller":"traceutil/trace.go:171","msg":"trace[2115604453] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:5110; }","duration":"193.1345ms","start":"2022-05-04T15:33:05.592Z","end":"2022-05-04T15:33:05.785Z","steps":["trace[2115604453] 'range keys from in-memory index tree' (duration: 193.0228ms)"],"step_count":1} {"level":"info","ts":"2022-05-04T15:37:43.287Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":5095} {"level":"info","ts":"2022-05-04T15:37:43.287Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":5095,"took":"266.8µs"} {"level":"info","ts":"2022-05-04T15:42:43.293Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":5305} {"level":"info","ts":"2022-05-04T15:42:43.293Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":5305,"took":"283.2µs"} {"level":"info","ts":"2022-05-04T15:47:43.301Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":5516} {"level":"info","ts":"2022-05-04T15:47:43.301Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":5516,"took":"440.1µs"} {"level":"info","ts":"2022-05-04T15:52:43.307Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":5727} {"level":"info","ts":"2022-05-04T15:52:43.307Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":5727,"took":"475.1µs"} {"level":"warn","ts":"2022-05-04T15:55:06.972Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.8408ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} 
{"level":"info","ts":"2022-05-04T15:55:06.972Z","caller":"traceutil/trace.go:171","msg":"trace[876457667] transaction","detail":"{read_only:false; response_revision:6039; number_of_response:1; }","duration":"316.181ms","start":"2022-05-04T15:55:06.656Z","end":"2022-05-04T15:55:06.972Z","steps":["trace[876457667] 'process raft request' (duration: 103.0086ms)","trace[876457667] 'compare' (duration: 212.7599ms)"],"step_count":2} {"level":"warn","ts":"2022-05-04T15:55:06.972Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-04T15:55:06.656Z","time spent":"316.2271ms","remote":"127.0.0.1:56464","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":119,"response count":0,"response size":40,"request content":"compare: success:> failure: >"} {"level":"info","ts":"2022-05-04T15:57:43.313Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":5938} {"level":"info","ts":"2022-05-04T15:57:43.314Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":5938,"took":"327.7µs"} {"level":"info","ts":"2022-05-04T16:02:43.320Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":6149} {"level":"info","ts":"2022-05-04T16:02:43.321Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":6149,"took":"468.2µs"} {"level":"info","ts":"2022-05-04T16:07:43.327Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":6360} {"level":"info","ts":"2022-05-04T16:07:43.327Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":6360,"took":"452.6µs"} {"level":"info","ts":"2022-05-04T16:12:43.333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":6570} {"level":"info","ts":"2022-05-04T16:12:43.333Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":6570,"took":"407.2µs"} {"level":"info","ts":"2022-05-04T16:15:46.913Z","caller":"traceutil/trace.go:171","msg":"trace[844151927] linearizableReadLoop","detail":"{readStateIndex:8783; appliedIndex:8783; }","duration":"183.2347ms","start":"2022-05-04T16:15:46.730Z","end":"2022-05-04T16:15:46.913Z","steps":["trace[844151927] 'read index received' (duration: 183.2303ms)","trace[844151927] 'applied index is now lower than readState.Index' (duration: 3.4µs)"],"step_count":2} {"level":"warn","ts":"2022-05-04T16:15:46.915Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.0241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"} {"level":"info","ts":"2022-05-04T16:15:46.915Z","caller":"traceutil/trace.go:171","msg":"trace[1147571126] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:6908; }","duration":"185.0726ms","start":"2022-05-04T16:15:46.730Z","end":"2022-05-04T16:15:46.915Z","steps":["trace[1147571126] 'agreement among raft nodes before linearized reading' (duration: 183.3092ms)"],"step_count":1} {"level":"info","ts":"2022-05-04T16:17:43.338Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":6780} {"level":"info","ts":"2022-05-04T16:17:43.338Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":6780,"took":"251.3µs"} {"level":"info","ts":"2022-05-04T16:22:43.345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":6992} 
{"level":"info","ts":"2022-05-04T16:22:43.346Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":6992,"took":"457.9µs"} {"level":"info","ts":"2022-05-04T16:27:43.351Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":7202} {"level":"info","ts":"2022-05-04T16:27:43.351Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":7202,"took":"308.9µs"} {"level":"info","ts":"2022-05-04T16:32:43.358Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":7413} {"level":"info","ts":"2022-05-04T16:32:43.358Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":7413,"took":"496.7µs"} {"level":"info","ts":"2022-05-04T16:36:11.049Z","caller":"etcdserver/server.go:1368","msg":"triggering snapshot","local-member-id":"aec36adc501070cc","local-member-applied-index":10001,"local-member-snapshot-index":0,"local-member-snapshot-count":10000} {"level":"info","ts":"2022-05-04T16:36:11.057Z","caller":"etcdserver/server.go:2363","msg":"saved snapshot","snapshot-index":10001} {"level":"info","ts":"2022-05-04T16:36:11.057Z","caller":"etcdserver/server.go:2393","msg":"compacted Raft logs","compact-index":5001} {"level":"info","ts":"2022-05-04T16:37:43.364Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":7729} {"level":"info","ts":"2022-05-04T16:37:43.365Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":7729,"took":"606µs"} {"level":"info","ts":"2022-05-04T16:42:43.370Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":7939} {"level":"info","ts":"2022-05-04T16:42:43.371Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":7939,"took":"473.8µs"} {"level":"info","ts":"2022-05-04T16:47:43.377Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":8151} {"level":"info","ts":"2022-05-04T16:47:43.378Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8151,"took":"470.6µs"} {"level":"info","ts":"2022-05-04T16:52:43.383Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":8361} {"level":"info","ts":"2022-05-04T16:52:43.384Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8361,"took":"441.3µs"} {"level":"info","ts":"2022-05-04T16:57:43.390Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":8572} {"level":"info","ts":"2022-05-04T16:57:43.391Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8572,"took":"441.3µs"} {"level":"info","ts":"2022-05-04T17:02:43.397Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":8782} {"level":"info","ts":"2022-05-04T17:02:43.397Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8782,"took":"444.1µs"} {"level":"info","ts":"2022-05-04T17:07:43.404Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":8994} {"level":"info","ts":"2022-05-04T17:07:43.405Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":8994,"took":"470.4µs"} {"level":"info","ts":"2022-05-04T17:12:43.411Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":9204} {"level":"info","ts":"2022-05-04T17:12:43.411Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled 
compaction","compact-revision":9204,"took":"446.8µs"} * * ==> kernel <== * 17:15:14 up 4:17, 0 users, load average: 0.23, 0.15, 0.10 Linux minikube 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [f26dc45d31bc] <== * I0504 13:42:44.665436 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0504 13:42:44.665449 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0504 13:42:44.665210 1 available_controller.go:491] Starting AvailableConditionController I0504 13:42:44.665457 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0504 13:42:44.665479 1 controller.go:85] Starting OpenAPI controller I0504 13:42:44.665512 1 naming_controller.go:291] Starting NamingConditionController I0504 13:42:44.665527 1 crd_finalizer.go:266] Starting CRDFinalizer I0504 13:42:44.665549 1 establishing_controller.go:76] Starting EstablishingController I0504 13:42:44.665564 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0504 13:42:44.665576 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0504 13:42:44.667735 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0504 13:42:44.671352 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0504 13:42:44.731278 1 controller.go:611] quota admission added evaluator for: namespaces I0504 13:42:44.734783 1 shared_informer.go:247] Caches are synced for node_authorizer I0504 13:42:44.775225 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0504 13:42:44.775411 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0504 13:42:44.775436 1 apf_controller.go:322] Running API Priority and Fairness config worker I0504 13:42:44.776165 1 cache.go:39] Caches are synced for AvailableConditionController controller I0504 13:42:44.776182 1 cache.go:39] Caches are synced for autoregister controller I0504 13:42:44.776325 1 shared_informer.go:247] Caches are synced for crd-autoregister I0504 13:42:45.665147 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0504 13:42:45.670264 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0504 13:42:45.672765 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0504 13:42:45.675505 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0504 13:42:45.675531 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0504 13:42:46.018901 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0504 13:42:46.045047 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0504 13:42:46.145492 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0504 13:42:46.151870 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0504 13:42:46.153039 1 controller.go:611] quota admission added evaluator for: endpoints I0504 13:42:46.157462 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0504 13:42:46.785622 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0504 13:42:47.696962 1 controller.go:611] quota admission added evaluator for: deployments.apps I0504 13:42:47.704685 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0504 13:42:47.713132 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0504 13:42:47.927870 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0504 13:43:01.363074 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0504 13:43:01.612910 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0504 13:43:02.054742 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io W0504 13:55:34.661428 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 14:09:59.612267 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 14:22:07.542908 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 14:30:53.454709 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 14:39:54.904475 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 14:54:28.477728 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 15:10:53.436965 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 15:23:46.616755 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 15:39:08.100886 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 15:52:29.355019 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 16:02:17.599498 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 16:09:42.214310 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 16:19:05.410272 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 16:28:21.900453 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted I0504 16:30:10.829549 1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.139.210] I0504 16:30:10.843576 1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.90.209] I0504 16:31:05.708735 1 alloc.go:329] "allocated clusterIPs" service="default/hello-minikube" clusterIPs=map[IPv4:10.108.210.90] W0504 16:41:57.934447 1 watcher.go:229] watch chan error: 
etcdserver: mvcc: required revision has been compacted W0504 16:57:32.848779 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 17:06:50.157621 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted W0504 17:14:37.350966 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted * * ==> kube-controller-manager [e7c5921c99d7] <== * W0504 13:43:00.660277 1 node_lifecycle_controller.go:1012] Missing timestamp for Node minikube. Assuming now as a timestamp. I0504 13:43:00.660337 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0504 13:43:00.660353 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I0504 13:43:00.660494 1 shared_informer.go:247] Caches are synced for endpoint_slice I0504 13:43:00.660494 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0504 13:43:00.660496 1 event.go:294] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0504 13:43:00.664250 1 shared_informer.go:247] Caches are synced for cronjob I0504 13:43:00.671961 1 shared_informer.go:247] Caches are synced for TTL after finished I0504 13:43:00.689008 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0504 13:43:00.698583 1 shared_informer.go:247] Caches are synced for TTL I0504 13:43:00.707955 1 shared_informer.go:247] Caches are synced for job I0504 13:43:00.709074 1 shared_informer.go:247] Caches are synced for ReplicationController I0504 13:43:00.709263 1 shared_informer.go:247] Caches are synced for PV protection I0504 13:43:00.709309 1 shared_informer.go:247] Caches are synced for GC I0504 13:43:00.709353 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0504 13:43:00.709373 1 shared_informer.go:247] Caches are synced for daemon sets I0504 13:43:00.709644 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0504 13:43:00.717856 1 shared_informer.go:247] Caches are synced for node I0504 13:43:00.717886 1 range_allocator.go:173] Starting range CIDR allocator I0504 13:43:00.717892 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0504 13:43:00.717897 1 shared_informer.go:247] Caches are synced for cidrallocator I0504 13:43:00.720384 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24] I0504 13:43:00.721478 1 shared_informer.go:247] Caches are synced for endpoint I0504 13:43:00.739498 1 shared_informer.go:247] Caches are synced for namespace I0504 13:43:00.759188 1 shared_informer.go:247] Caches are synced for crt configmap I0504 13:43:00.759331 1 shared_informer.go:247] Caches are synced for service account I0504 13:43:00.809450 1 shared_informer.go:247] Caches are synced for persistent volume I0504 13:43:00.809554 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0504 13:43:00.809791 1 shared_informer.go:247] Caches are synced for attach detach I0504 13:43:00.844847 1 shared_informer.go:247] Caches are synced for ReplicaSet I0504 13:43:00.858886 1 shared_informer.go:247] Caches are synced for deployment I0504 13:43:00.859134 1 shared_informer.go:247] Caches are synced for ephemeral I0504 13:43:00.859278 1 shared_informer.go:247] Caches are synced for expand I0504 13:43:00.860239 1 shared_informer.go:247] Caches are synced for PVC protection I0504 13:43:00.860306 1 
shared_informer.go:247] Caches are synced for stateful set I0504 13:43:00.908872 1 shared_informer.go:247] Caches are synced for disruption I0504 13:43:00.908916 1 disruption.go:371] Sending events to api server. I0504 13:43:00.913695 1 shared_informer.go:247] Caches are synced for resource quota I0504 13:43:00.944393 1 shared_informer.go:247] Caches are synced for resource quota I0504 13:43:01.326262 1 shared_informer.go:247] Caches are synced for garbage collector I0504 13:43:01.368649 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wm9kq" I0504 13:43:01.409150 1 shared_informer.go:247] Caches are synced for garbage collector I0504 13:43:01.409195 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0504 13:43:01.614870 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 1" I0504 13:43:01.714857 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-chc5l" I0504 15:42:57.310418 1 cleaner.go:172] Cleaning CSR "csr-7xb89" as it is more than 1h0m0s old and approved. I0504 16:30:10.559751 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-ccd587f44 to 1" I0504 16:30:10.559784 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-58549894f to 1" I0504 16:30:10.621046 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" I0504 16:30:10.621296 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" E0504 16:30:10.624795 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found E0504 16:30:10.624929 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found E0504 16:30:10.630491 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up 
service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found E0504 16:30:10.630560 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found I0504 16:30:10.630615 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" I0504 16:30:10.630629 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" I0504 16:30:10.636750 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-9tcmb" I0504 16:30:10.636787 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-58549894f-6p6tz" I0504 16:30:33.917528 1 event.go:294] "Event occurred" object="default/hello-minikube" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-minikube-7bc9d7884c to 1" I0504 16:30:33.925793 1 event.go:294] "Event occurred" object="default/hello-minikube-7bc9d7884c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-minikube-7bc9d7884c-cmznw" * * ==> kube-proxy [14c48a985242] <== * E0504 13:43:02.014996 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin" I0504 13:43:02.016246 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs" I0504 13:43:02.017294 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr" I0504 13:43:02.018246 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr" I0504 13:43:02.019249 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh" I0504 13:43:02.020217 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when 
kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack" I0504 13:43:02.026305 1 node.go:163] Successfully retrieved node IP: 192.168.49.2 I0504 13:43:02.026440 1 server_others.go:138] "Detected node IP" address="192.168.49.2" I0504 13:43:02.026589 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0504 13:43:02.052972 1 server_others.go:206] "Using iptables Proxier" I0504 13:43:02.053000 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0504 13:43:02.053007 1 server_others.go:214] "Creating dualStackProxier for iptables" I0504 13:43:02.053023 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0504 13:43:02.053241 1 server.go:656] "Version info" version="v1.23.3" I0504 13:43:02.053533 1 config.go:317] "Starting service config controller" I0504 13:43:02.053565 1 shared_informer.go:240] Waiting for caches to sync for service config I0504 13:43:02.053536 1 config.go:226] "Starting endpoint slice config controller" I0504 13:43:02.053584 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0504 13:43:02.154344 1 shared_informer.go:247] Caches are synced for endpoint slice config I0504 13:43:02.154413 1 shared_informer.go:247] Caches are synced for service config E0504 16:31:05.726182 1 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :32473: bind: address already in use" port={Description:nodePort for default/hello-minikube IP: IPFamily:4 Port:32473 Protocol:TCP} * * ==> kube-scheduler [38436ca4e127] <== * W0504 13:42:44.717066 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0504 13:42:44.717119 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0504 13:42:44.717137 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous. 
*
* ==> kube-scheduler [38436ca4e127] <==
*
W0504 13:42:44.717066 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0504 13:42:44.717119 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0504 13:42:44.717137 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0504 13:42:44.717145 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0504 13:42:44.819486 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.3"
I0504 13:42:44.821757 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0504 13:42:44.822050 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0504 13:42:44.822275 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0504 13:42:44.822353 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0504 13:42:44.825608 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0504 13:42:44.825719 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0504 13:42:44.826413 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0504 13:42:44.826477 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0504 13:42:44.826502 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0504 13:42:44.826535 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0504 13:42:44.826680 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0504 13:42:44.826726 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0504 13:42:44.826762 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0504 13:42:44.826797 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0504 13:42:44.826862 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0504 13:42:44.826808 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0504 13:42:44.827041 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0504 13:42:44.827074 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0504 13:42:44.826979 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0504 13:42:44.827132 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0504 13:42:44.826809 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0504 13:42:44.827212 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0504 13:42:44.827252 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0504 13:42:44.827327 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0504 13:42:44.827356 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0504 13:42:44.827465 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0504 13:42:44.827487 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0504 13:42:44.827509 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0504 13:42:44.828963 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0504 13:42:44.829036 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0504 13:42:44.829061 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0504 13:42:44.829064 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0504 13:42:44.829037 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0504 13:42:44.829087 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0504 13:42:45.682811 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0504 13:42:45.682828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0504 13:42:45.727538 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0504 13:42:45.727571 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0504 13:42:45.730316 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0504 13:42:45.730356 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0504 13:42:45.789281 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0504 13:42:45.789313 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0504 13:42:45.816885 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0504 13:42:45.816923 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0504 13:42:45.816948 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0504 13:42:45.816974 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0504 13:42:45.825830 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0504 13:42:45.825862 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0504 13:42:46.019328 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0504 13:42:46.019360 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0504 13:42:47.525209 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0504 13:42:48.249723 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0504 13:42:48.257304 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0504 13:42:48.257343 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0504 13:42:48.323934 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
"getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" E0504 13:42:48.257343 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" I0504 13:42:48.323934 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Wed 2022-05-04 13:42:21 UTC, end at Wed 2022-05-04 17:15:14 UTC. -- May 04 13:43:03 minikube kubelet[1968]: I0504 13:43:03.092794 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-chc5l through plugin: invalid network status for" May 04 13:47:48 minikube kubelet[1968]: W0504 13:47:48.150985 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 13:52:48 minikube kubelet[1968]: W0504 13:52:48.151273 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 13:57:48 minikube kubelet[1968]: W0504 13:57:48.151143 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:02:48 minikube kubelet[1968]: W0504 14:02:48.151482 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:07:48 minikube kubelet[1968]: W0504 14:07:48.151231 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:12:48 minikube kubelet[1968]: W0504 14:12:48.151497 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:17:48 minikube kubelet[1968]: W0504 14:17:48.150671 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:22:48 minikube kubelet[1968]: W0504 14:22:48.150200 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:27:48 minikube kubelet[1968]: W0504 14:27:48.150797 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:32:48 minikube kubelet[1968]: W0504 14:32:48.150777 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:37:48 minikube kubelet[1968]: W0504 14:37:48.151718 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:42:48 minikube kubelet[1968]: W0504 14:42:48.150732 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:47:48 minikube kubelet[1968]: W0504 14:47:48.150387 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:52:48 minikube kubelet[1968]: W0504 14:52:48.150675 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 14:57:48 minikube kubelet[1968]: W0504 14:57:48.151859 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:02:48 minikube kubelet[1968]: W0504 15:02:48.150628 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:07:48 minikube kubelet[1968]: W0504 15:07:48.151033 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:12:48 minikube kubelet[1968]: W0504 15:12:48.151133 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:17:48 minikube kubelet[1968]: W0504 15:17:48.151136 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:22:48 minikube kubelet[1968]: W0504 15:22:48.152244 1968 sysinfo.go:203] Nodes topology is not available, 
providing CPU topology May 04 15:27:48 minikube kubelet[1968]: W0504 15:27:48.151441 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:32:48 minikube kubelet[1968]: W0504 15:32:48.151202 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:37:48 minikube kubelet[1968]: W0504 15:37:48.151092 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:42:48 minikube kubelet[1968]: W0504 15:42:48.151640 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:47:48 minikube kubelet[1968]: W0504 15:47:48.151874 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:52:48 minikube kubelet[1968]: W0504 15:52:48.151357 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 15:57:48 minikube kubelet[1968]: W0504 15:57:48.151598 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:02:48 minikube kubelet[1968]: W0504 16:02:48.150985 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:07:48 minikube kubelet[1968]: W0504 16:07:48.150905 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:12:48 minikube kubelet[1968]: W0504 16:12:48.152798 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:17:48 minikube kubelet[1968]: W0504 16:17:48.151886 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:22:48 minikube kubelet[1968]: W0504 16:22:48.152119 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:27:48 minikube kubelet[1968]: W0504 16:27:48.150479 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:30:10 minikube kubelet[1968]: I0504 16:30:10.640016 1968 topology_manager.go:200] "Topology Admit Handler" May 04 16:30:10 minikube kubelet[1968]: I0504 16:30:10.640166 1968 topology_manager.go:200] "Topology Admit Handler" May 04 16:30:10 minikube kubelet[1968]: I0504 16:30:10.717134 1968 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/62bdf530-98f3-4cd6-ab45-be7c514122f4-tmp-volume\") pod \"kubernetes-dashboard-ccd587f44-9tcmb\" (UID: \"62bdf530-98f3-4cd6-ab45-be7c514122f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-9tcmb" May 04 16:30:10 minikube kubelet[1968]: I0504 16:30:10.717290 1968 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnswz\" (UniqueName: \"kubernetes.io/projected/e9f94ab8-5905-452e-a148-1e0b216aa806-kube-api-access-fnswz\") pod \"dashboard-metrics-scraper-58549894f-6p6tz\" (UID: \"e9f94ab8-5905-452e-a148-1e0b216aa806\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-6p6tz" May 04 16:30:10 minikube kubelet[1968]: I0504 16:30:10.717348 1968 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vflf4\" (UniqueName: \"kubernetes.io/projected/62bdf530-98f3-4cd6-ab45-be7c514122f4-kube-api-access-vflf4\") pod \"kubernetes-dashboard-ccd587f44-9tcmb\" (UID: \"62bdf530-98f3-4cd6-ab45-be7c514122f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-9tcmb" May 04 16:30:10 minikube kubelet[1968]: I0504 16:30:10.717392 1968 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" 
(UniqueName: \"kubernetes.io/empty-dir/e9f94ab8-5905-452e-a148-1e0b216aa806-tmp-volume\") pod \"dashboard-metrics-scraper-58549894f-6p6tz\" (UID: \"e9f94ab8-5905-452e-a148-1e0b216aa806\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-6p6tz" May 04 16:30:11 minikube kubelet[1968]: I0504 16:30:11.801190 1968 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2418c498942f7338575c23a5a4ae77078ae133348f17820c715df0f9101369fb" May 04 16:30:11 minikube kubelet[1968]: I0504 16:30:11.801680 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-9tcmb through plugin: invalid network status for" May 04 16:30:11 minikube kubelet[1968]: I0504 16:30:11.864302 1968 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f4f4187697f1ca9420f97ca62d8643fe0e14a4ce88a3b4f352f4e5997474c666" May 04 16:30:11 minikube kubelet[1968]: I0504 16:30:11.864628 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-6p6tz through plugin: invalid network status for" May 04 16:30:12 minikube kubelet[1968]: I0504 16:30:12.872516 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-6p6tz through plugin: invalid network status for" May 04 16:30:12 minikube kubelet[1968]: I0504 16:30:12.878700 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-9tcmb through plugin: invalid network status for" May 04 16:30:33 minikube kubelet[1968]: I0504 16:30:33.929143 1968 topology_manager.go:200] "Topology Admit Handler" May 04 16:30:34 minikube kubelet[1968]: I0504 16:30:34.056423 1968 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbs47\" (UniqueName: \"kubernetes.io/projected/84d69839-409e-45d3-a446-752e5d34dd67-kube-api-access-gbs47\") pod \"hello-minikube-7bc9d7884c-cmznw\" (UID: \"84d69839-409e-45d3-a446-752e5d34dd67\") " pod="default/hello-minikube-7bc9d7884c-cmznw" May 04 16:30:34 minikube kubelet[1968]: I0504 16:30:34.807798 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-minikube-7bc9d7884c-cmznw through plugin: invalid network status for" May 04 16:30:34 minikube kubelet[1968]: I0504 16:30:34.977420 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-minikube-7bc9d7884c-cmznw through plugin: invalid network status for" May 04 16:30:42 minikube kubelet[1968]: I0504 16:30:42.037174 1968 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-minikube-7bc9d7884c-cmznw through plugin: invalid network status for" May 04 16:32:48 minikube kubelet[1968]: W0504 16:32:48.151593 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:37:48 minikube kubelet[1968]: W0504 16:37:48.150660 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:42:48 minikube kubelet[1968]: W0504 16:42:48.151636 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:47:48 minikube kubelet[1968]: 
W0504 16:47:48.150823 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:52:48 minikube kubelet[1968]: W0504 16:52:48.151962 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 16:57:48 minikube kubelet[1968]: W0504 16:57:48.151449 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 17:02:48 minikube kubelet[1968]: W0504 17:02:48.151228 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 17:07:48 minikube kubelet[1968]: W0504 17:07:48.150578 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology May 04 17:12:48 minikube kubelet[1968]: W0504 17:12:48.151706 1968 sysinfo.go:203] Nodes topology is not available, providing CPU topology * * ==> kubernetes-dashboard [8ba775eddb72] <== * W0504 16:30:18.973548 1 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob 2022/05/04 16:30:13 Getting list of all replica sets in the cluster 2022/05/04 16:30:13 [2022-05-04T16:30:13Z] Incoming HTTP/1.1 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2022/05/04 16:30:13 Getting list of all replication controllers in the cluster 2022/05/04 16:30:13 Internal error occurred: No metric client provided. Skipping metrics. 2022/05/04 16:30:13 [2022-05-04T16:30:13Z] Outcoming response to 127.0.0.1 with 200 status code 2022/05/04 16:30:13 [2022-05-04T16:30:13Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2022/05/04 16:30:13 Getting list of all pet sets in the cluster 2022/05/04 16:30:13 Internal error occurred: No metric client provided. Skipping metrics. 2022/05/04 16:30:13 [2022-05-04T16:30:13Z] Outcoming response to 127.0.0.1 with 200 status code 2022/05/04 16:30:13 Internal error occurred: No metric client provided. Skipping metrics. 2022/05/04 16:30:13 [2022-05-04T16:30:13Z] Outcoming response to 127.0.0.1 with 200 status code 2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1: 2022/05/04 16:30:18 Getting list of namespaces 2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/cronjob/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2022/05/04 16:30:18 Getting list of all cron jobs in the cluster 2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code 2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics. 2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code 2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics. 2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2022/05/04 16:30:18 Getting list of all deployments in the cluster 2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code 2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics. 
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/job/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2022/05/04 16:30:18 Getting list of all jobs in the cluster
2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2022/05/04 16:30:18 Getting list of all pods in the cluster
2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:18 Getting pod metrics
2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/replicaset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2022/05/04 16:30:18 Getting list of all replica sets in the cluster
2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2022/05/04 16:30:18 Getting list of all replication controllers in the cluster
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2022/05/04 16:30:18 Getting list of all pet sets in the cluster
2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:18 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:18 [2022-05-04T16:30:18Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:19 [2022-05-04T16:30:19Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 127.0.0.1:
2022/05/04 16:30:19 [2022-05-04T16:30:19Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:19 [2022-05-04T16:30:19Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2022/05/04 16:30:19 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:19 [2022-05-04T16:30:19Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:20 [2022-05-04T16:30:20Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2022/05/04 16:30:20 [2022-05-04T16:30:20Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2022/05/04 16:30:20 Getting list of namespaces
2022/05/04 16:30:20 Internal error occurred: No metric client provided. Skipping metrics.
2022/05/04 16:30:20 [2022-05-04T16:30:20Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:20 [2022-05-04T16:30:20Z] Outcoming response to 127.0.0.1 with 200 status code
2022/05/04 16:30:42 Successful request to sidecar
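The recurring "No metric client provided. Skipping metrics." lines above indicate the dashboard has no metrics provider to query; this only affects the CPU/memory graphs. If those are wanted, the usual fix in minikube is the metrics-server addon (illustrative command, not part of the captured session):

    minikube addons enable metrics-server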
*
* ==> storage-provisioner [08462ed50b1f] <==
*
I0504 13:43:02.910711 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0504 13:43:02.917343 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0504 13:43:02.917385 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0504 13:43:02.922519 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0504 13:43:02.922618 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_55d2407e-af0e-4721-93fb-bd3c5e68c207!
I0504 13:43:02.922672 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70093741-56b1-44d8-ad44-b898491eae09", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_55d2407e-af0e-4721-93fb-bd3c5e68c207 became leader
I0504 13:43:03.022863 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_55d2407e-af0e-4721-93fb-bd3c5e68c207!