==> Audit <==

| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
| image | e2e-cri-o image load docker.io/library/extension-container | e2e-cri-o | jedmeier | v1.30.1 | 14 Jun 23 23:23 CEST | 14 Jun 23 23:23 CEST |
| ssh | e2e-cri-o ssh -- sudo pkill -9 stress-ng | e2e-cri-o | jedmeier | v1.30.1 | 14 Jun 23 23:23 CEST | 14 Jun 23 23:23 CEST |
| ssh | e2e-cri-o ssh -- sudo pkill -9 stress-ng | e2e-cri-o | jedmeier | v1.30.1 | 14 Jun 23 23:24 CEST | 14 Jun 23 23:24 CEST |
| delete | e2e-cri-o delete | e2e-cri-o | jedmeier | v1.30.1 | 14 Jun 23 23:24 CEST | 14 Jun 23 23:24 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:23 CEST | 15 Jun 23 08:23 CEST |
| start | e2e-docker start --keep-context --container-runtime=docker --driver=docker | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:23 CEST | 15 Jun 23 08:23 CEST |
| image | e2e-docker image load docker.io/library/extension-container | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:23 CEST | 15 Jun 23 08:23 CEST |
| ssh | e2e-docker ssh -- sudo pkill -9 stress-ng | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:24 CEST | 15 Jun 23 08:24 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:24 CEST | 15 Jun 23 08:24 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:24 CEST | 15 Jun 23 08:24 CEST |
| start | e2e-docker start --keep-context --container-runtime=docker --driver=docker | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:24 CEST | 15 Jun 23 08:24 CEST |
| image | e2e-docker image load docker.io/library/extension-container | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:24 CEST | 15 Jun 23 08:25 CEST |
| ssh | e2e-docker ssh -- sudo pkill -9 stress-ng | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:25 CEST | 15 Jun 23 08:25 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:25 CEST | 15 Jun 23 08:25 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:32 CEST | 15 Jun 23 08:32 CEST |
| start | e2e-docker start --keep-context --container-runtime=docker --driver=docker | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:32 CEST | |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:32 CEST | 15 Jun 23 08:32 CEST |
| start | e2e-docker start --keep-context --container-runtime=docker --driver=docker | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:32 CEST | 15 Jun 23 08:33 CEST |
| image | e2e-docker image load docker.io/library/extension-container | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:33 CEST | 15 Jun 23 08:33 CEST |
| ssh | e2e-docker ssh -- sudo pkill -9 stress-ng | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:33 CEST | 15 Jun 23 08:33 CEST |
| ssh | e2e-docker ssh -- sudo pkill -9 stress-ng | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:33 CEST | 15 Jun 23 08:33 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 08:33 CEST | 15 Jun 23 08:33 CEST |
| delete | e2e-containerd delete | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 08:33 CEST | 15 Jun 23 08:33 CEST |
| start | e2e-containerd start --keep-context --container-runtime=containerd --driver=docker | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 08:33 CEST | 15 Jun 23 08:34 CEST |
| image | e2e-containerd image load docker.io/library/extension-container | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 08:34 CEST | 15 Jun 23 08:34 CEST |
| ssh | e2e-containerd ssh -- sudo pkill -9 stress-ng | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 08:34 CEST | 15 Jun 23 08:34 CEST |
| ssh | e2e-containerd ssh -- sudo pkill -9 stress-ng | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 08:34 CEST | 15 Jun 23 08:34 CEST |
| delete | e2e-containerd delete | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 08:34 CEST | 15 Jun 23 08:34 CEST |
| delete | e2e-cri-o delete | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 08:34 CEST | 15 Jun 23 08:34 CEST |
| start | e2e-cri-o start --keep-context --container-runtime=cri-o --driver=docker --cni=bridge | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 08:34 CEST | 15 Jun 23 08:34 CEST |
| image | e2e-cri-o image load docker.io/library/extension-container | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 08:35 CEST | 15 Jun 23 08:35 CEST |
| ssh | e2e-cri-o ssh -- sudo pkill -9 stress-ng | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 08:35 CEST | 15 Jun 23 08:35 CEST |
| ssh | e2e-cri-o ssh -- sudo pkill -9 stress-ng | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 08:35 CEST | 15 Jun 23 08:35 CEST |
| delete | e2e-cri-o delete | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 08:35 CEST | 15 Jun 23 08:35 CEST |
| delete | e2e-containerd delete | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 10:21 CEST | 15 Jun 23 10:21 CEST |
| start | e2e-containerd start --keep-context --container-runtime=containerd --driver=docker | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 10:21 CEST | 15 Jun 23 10:21 CEST |
| image | e2e-containerd image load docker.io/library/extension-container | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 10:22 CEST | 15 Jun 23 10:22 CEST |
| ssh | e2e-containerd ssh -- sudo pkill -9 stress-ng | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 10:22 CEST | 15 Jun 23 10:22 CEST |
| delete | e2e-containerd delete | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 10:22 CEST | 15 Jun 23 10:22 CEST |
| completion | bash | minikube | jedmeier | v1.30.1 | 15 Jun 23 10:33 CEST | 15 Jun 23 10:33 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 11:13 CEST | 15 Jun 23 11:13 CEST |
| start | e2e-docker start --keep-context --container-runtime=docker --driver=docker | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 11:13 CEST | 15 Jun 23 11:13 CEST |
| image | e2e-docker image load docker.io/library/extension-container | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 11:14 CEST | 15 Jun 23 11:14 CEST |
| ssh | e2e-docker ssh -- sudo pkill -9 stress-ng | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 11:14 CEST | 15 Jun 23 11:14 CEST |
| ssh | e2e-docker ssh -- sudo pkill -9 stress-ng | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 11:14 CEST | 15 Jun 23 11:14 CEST |
| delete | e2e-docker delete | e2e-docker | jedmeier | v1.30.1 | 15 Jun 23 11:14 CEST | 15 Jun 23 11:14 CEST |
| delete | e2e-containerd delete | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 11:14 CEST | 15 Jun 23 11:14 CEST |
| start | e2e-containerd start --keep-context --container-runtime=containerd --driver=docker | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 11:14 CEST | 15 Jun 23 11:14 CEST |
| image | e2e-containerd image load docker.io/library/extension-container | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 11:15 CEST | 15 Jun 23 11:15 CEST |
| ssh | e2e-containerd ssh -- sudo pkill -9 stress-ng | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 11:15 CEST | 15 Jun 23 11:15 CEST |
| ssh | e2e-containerd ssh -- sudo pkill -9 stress-ng | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 11:15 CEST | 15 Jun 23 11:15 CEST |
| delete | e2e-containerd delete | e2e-containerd | jedmeier | v1.30.1 | 15 Jun 23 11:15 CEST | 15 Jun 23 11:15 CEST |
| delete | e2e-cri-o delete | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 11:15 CEST | 15 Jun 23 11:15 CEST |
| start | e2e-cri-o start --keep-context --container-runtime=cri-o --driver=docker --cni=bridge | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 11:15 CEST | 15 Jun 23 11:15 CEST |
| image | e2e-cri-o image load docker.io/library/extension-container | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 11:16 CEST | 15 Jun 23 11:16 CEST |
| ssh | e2e-cri-o ssh -- sudo pkill -9 stress-ng | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 11:16 CEST | 15 Jun 23 11:16 CEST |
| ssh | e2e-cri-o ssh -- sudo pkill -9 stress-ng | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 11:16 CEST | 15 Jun 23 11:16 CEST |
| delete | e2e-cri-o delete | e2e-cri-o | jedmeier | v1.30.1 | 15 Jun 23 11:16 CEST | 15 Jun 23 11:16 CEST |
| start | --container-runtime cri-o | minikube | jedmeier | v1.30.1 | 15 Jun 23 11:17 CEST | 15 Jun 23 11:17 CEST |
| completion | bash | minikube | jedmeier | v1.30.1 | 15 Jun 23 11:19 CEST | 15 Jun 23 11:19 CEST |

==> Last Start <==
Log file created at: 2023/06/15 11:17:33
Running on machine: joshiste-mbp
Binary: Built with gc go1.20.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0615 11:17:33.410412 16060 out.go:296] Setting OutFile to fd 1 ...
I0615 11:17:33.410563 16060 out.go:348] isatty.IsTerminal(1) = true
I0615 11:17:33.410564 16060 out.go:309] Setting ErrFile to fd 2...
I0615 11:17:33.410567 16060 out.go:348] isatty.IsTerminal(2) = true
I0615 11:17:33.410660 16060 root.go:336] Updating PATH: /Users/jedmeier/.minikube/bin
I0615 11:17:33.412268 16060 out.go:303] Setting JSON to false
I0615 11:17:33.439968 16060 start.go:125] hostinfo: {"hostname":"joshiste-mbp","uptime":1383024,"bootTime":1685437629,"procs":653,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"3692cb5c-d979-5956-8a1e-9ad6c41df7d4"}
W0615 11:17:33.440063 16060 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0615 11:17:33.446906 16060 out.go:177] 😄 minikube v1.30.1 on Darwin 13.4 (arm64)
I0615 11:17:33.452134 16060 notify.go:220] Checking for updates...
I0615 11:17:33.452261 16060 driver.go:375] Setting default libvirt URI to qemu:///system
I0615 11:17:33.452292 16060 global.go:111] Querying for installed drivers using PATH=/Users/jedmeier/.minikube/bin:/Users/jedmeier/.sdkman/candidates/visualvm/current/bin:/Users/jedmeier/.sdkman/candidates/maven/current/bin:/Users/jedmeier/.sdkman/candidates/java/current/bin:/Users/jedmeier/.nvm/versions/node/v18.12.1/bin:/Users/jedmeier/.local/bin:/Users/jedmeier/.cargo/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/opt/homebrew/bin:/usr/local/bin:/usr/local/sbin:/Users/jedmeier/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Library/Apple/usr/bin:/Applications/Wireshark.app/Contents/MacOS:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/usr/local/lib/ruby/gems/2.6.0/bin:/Users/jedmeier/go/bin:/Users/jedmeier/.krew/bin:/Applications/IntelliJ IDEA.app/Contents/MacOS
I0615 11:17:33.452301 16060 global.go:122] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/ Version:}
I0615 11:17:33.518505 16060 docker.go:121] docker version: linux-23.0.5:Docker Desktop 4.19.0 (106363)
I0615 11:17:33.518653 16060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0615 11:17:33.605480 16060 info.go:266] docker info: {ID:14d49106-8035-4d0c-bcda-1af0944ec9d5 Containers:17 ContainersRunning:1 ContainersPaused:0 ContainersStopped:16 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:1954 OomKillDisable:false NGoroutines:3866 SystemTime:2023-06-15 09:17:33.591713795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:12544057344 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:23.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2806fc1057397dbaeefbea0e4e17bddfbd388f38 Expected:2806fc1057397dbaeefbea0e4e17bddfbd388f38} RuncCommit:{ID:v1.1.5-0-gf19387a Expected:v1.1.5-0-gf19387a} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jedmeier/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.4] map[Name:compose Path:/Users/jedmeier/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.17.3] map[Name:dev Path:/Users/jedmeier/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jedmeier/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jedmeier/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jedmeier/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jedmeier/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jedmeier/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.10.0]] Warnings:}}
I0615 11:17:33.605559 16060 global.go:122] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0615 11:17:33.605694 16060 global.go:122] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/ Version:}
I0615 11:17:33.605703 16060 global.go:122] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0615 11:17:33.606753 16060 global.go:122] vmware default: false priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I0615 11:17:33.606849 16060 global.go:122] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/ Version:}
I0615 11:17:33.607074 16060 global.go:122] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0615 11:17:33.608758 16060 global.go:122] qemu2 default: true priority: 7, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0615 11:17:33.608882 16060 global.go:122] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I0615 11:17:33.608893 16060 driver.go:310] not recommending "ssh" due to default: false
I0615 11:17:33.608901 16060 driver.go:345] Picked: docker
I0615 11:17:33.608903 16060 driver.go:346] Alternatives: [qemu2 ssh]
I0615 11:17:33.608905 16060 driver.go:347] Rejects: [vmwarefusion parallels vmware hyperkit podman virtualbox]
I0615 11:17:33.613520 16060 out.go:177] ✨ Automatically selected the docker driver. Other choices: qemu2, ssh
I0615 11:17:33.621788 16060 start.go:295] selected driver: docker
I0615 11:17:33.621794 16060 start.go:870] validating driver "docker" against
I0615 11:17:33.621804 16060 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0615 11:17:33.621986 16060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0615 11:17:33.704994 16060 info.go:266] docker info: {ID:14d49106-8035-4d0c-bcda-1af0944ec9d5 Containers:17 ContainersRunning:1 ContainersPaused:0 ContainersStopped:16 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:1954 OomKillDisable:false NGoroutines:3866 SystemTime:2023-06-15 09:17:33.691736128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:12544057344 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:23.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2806fc1057397dbaeefbea0e4e17bddfbd388f38 Expected:2806fc1057397dbaeefbea0e4e17bddfbd388f38} RuncCommit:{ID:v1.1.5-0-gf19387a Expected:v1.1.5-0-gf19387a} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jedmeier/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.4] map[Name:compose Path:/Users/jedmeier/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.17.3] map[Name:dev Path:/Users/jedmeier/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jedmeier/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jedmeier/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jedmeier/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jedmeier/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jedmeier/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.10.0]] Warnings:}}
I0615 11:17:33.705096 16060 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0615 11:17:33.705297 16060 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
I0615 11:17:33.710006 16060 out.go:177] 📌 Using Docker Desktop driver with root privileges
I0615 11:17:33.714537 16060 cni.go:84] Creating CNI manager for ""
I0615 11:17:33.714544 16060 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
I0615 11:17:33.714548 16060 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
I0615 11:17:33.714555 16060 start_flags.go:319] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:10240 CPUs:8 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0615 11:17:33.723772 16060 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0615 11:17:33.727907 16060 cache.go:120] Beginning downloading kic base image for docker with crio
I0615 11:17:33.732393 16060 out.go:177] 🚜 Pulling base image ...
I0615 11:17:33.740834 16060 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 in local docker daemon I0615 11:17:33.740842 16060 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime crio I0615 11:17:33.740888 16060 preload.go:148] Found local preload: /Users/jedmeier/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-cri-o-overlay-arm64.tar.lz4 I0615 11:17:33.740897 16060 cache.go:57] Caching tarball of preloaded images I0615 11:17:33.741019 16060 preload.go:174] Found /Users/jedmeier/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download I0615 11:17:33.741025 16060 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on crio I0615 11:17:33.741915 16060 profile.go:148] Saving config to /Users/jedmeier/.minikube/profiles/minikube/config.json ... I0615 11:17:33.741944 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.minikube/profiles/minikube/config.json: {Name:mkb04de082159bc0806885eb456d092b2f91af18 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0615 11:17:33.801628 16060 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 in local docker daemon, skipping pull I0615 11:17:33.801658 16060 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 exists in daemon, skipping load I0615 11:17:33.801679 16060 cache.go:193] Successfully downloaded all kic artifacts I0615 11:17:33.801717 16060 start.go:364] acquiring machines lock for minikube: {Name:mk4ebc7dc97f1a0d5c02c8e9af5ad9f9b2f6c35d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0615 11:17:33.803716 16060 start.go:368] acquired machines lock for "minikube" in 1.989958ms I0615 11:17:33.803731 16060 start.go:93] Provisioning new machine with config: &{Name:minikube 
KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:10240 CPUs:8 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:crio 
ControlPlane:true Worker:true} I0615 11:17:33.803795 16060 start.go:125] createHost starting for "" (driver="docker") I0615 11:17:33.813919 16060 out.go:204] 🔥 Creating docker container (CPUs=8, Memory=10240MB) ... I0615 11:17:33.814364 16060 start.go:159] libmachine.API.Create for "minikube" (driver="docker") I0615 11:17:33.814383 16060 client.go:168] LocalClient.Create starting I0615 11:17:33.814482 16060 main.go:141] libmachine: Reading certificate data from /Users/jedmeier/.minikube/certs/ca.pem I0615 11:17:33.814509 16060 main.go:141] libmachine: Decoding PEM data... I0615 11:17:33.814523 16060 main.go:141] libmachine: Parsing certificate... I0615 11:17:33.814573 16060 main.go:141] libmachine: Reading certificate data from /Users/jedmeier/.minikube/certs/cert.pem I0615 11:17:33.814588 16060 main.go:141] libmachine: Decoding PEM data... I0615 11:17:33.814592 16060 main.go:141] libmachine: Parsing certificate... I0615 11:17:33.814998 16060 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0615 11:17:33.863891 16060 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0615 11:17:33.864013 16060 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs... 
I0615 11:17:33.864025 16060 cli_runner.go:164] Run: docker network inspect minikube
W0615 11:17:33.917025 16060 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0615 11:17:33.917044 16060 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error response from daemon: network minikube not found
I0615 11:17:33.917055 16060 network_create.go:286] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error response from daemon: network minikube not found

** /stderr **
I0615 11:17:33.917189 16060 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0615 11:17:33.970414 16060 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x140006d62c0}
I0615 11:17:33.970442 16060 network_create.go:123] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0615 11:17:33.970533 16060 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0615 11:17:34.062662 16060 network_create.go:107] docker network minikube 192.168.49.0/24 created
I0615 11:17:34.062686 16060 kic.go:117] calculated static IP "192.168.49.2" for the "minikube" container
I0615 11:17:34.062870 16060 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0615 11:17:34.108563 16060 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0615 11:17:34.152367 16060 oci.go:103] Successfully created a docker volume minikube
I0615 11:17:34.152514 16060 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -d /var/lib
I0615 11:17:34.565056 16060 oci.go:107] Successfully prepared a docker volume minikube
I0615 11:17:34.565110 16060 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime crio
I0615 11:17:34.565136 16060 kic.go:190] Starting extracting preloaded images to volume ...
I0615 11:17:34.565443 16060 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jedmeier/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -I lz4 -xf /preloaded.tar -C /extractDir
I0615 11:17:37.362042 16060 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jedmeier/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -I lz4 -xf /preloaded.tar -C /extractDir: (2.79653775s)
I0615 11:17:37.362090 16060 kic.go:199] duration metric: took 2.796972 seconds to extract preloaded images to volume
I0615 11:17:37.362284 16060 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0615 11:17:37.446089 16060 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=10240mb --memory-swap=10240mb --cpus=8 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106
I0615 11:17:37.781038 16060 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0615 11:17:37.827704 16060 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0615 11:17:37.875680 16060 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0615 11:17:37.956217 16060 oci.go:144] the created container "minikube" has a running status.
I0615 11:17:37.956256 16060 kic.go:221] Creating ssh key for kic: /Users/jedmeier/.minikube/machines/minikube/id_rsa...
I0615 11:17:37.991583 16060 kic_runner.go:191] docker (temp): /Users/jedmeier/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0615 11:17:38.078707 16060 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0615 11:17:38.126150 16060 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0615 11:17:38.126171 16060 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0615 11:17:38.197511 16060 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0615 11:17:38.243476 16060 machine.go:88] provisioning docker machine ...
I0615 11:17:38.243510 16060 ubuntu.go:169] provisioning hostname "minikube"
I0615 11:17:38.243650 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:38.285576 16060 main.go:141] libmachine: Using SSH client type: native
I0615 11:17:38.285912 16060 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1052b1a20] 0x1052b4400 [] 0s} 127.0.0.1 54479 }
I0615 11:17:38.285924 16060 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0615 11:17:38.427979 16060 main.go:141] libmachine: SSH cmd err, output: : minikube
I0615 11:17:38.428104 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:38.472225 16060 main.go:141] libmachine: Using SSH client type: native
I0615 11:17:38.472507 16060 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1052b1a20] 0x1052b4400 [] 0s} 127.0.0.1 54479 }
I0615 11:17:38.472513 16060 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0615 11:17:38.605464 16060 main.go:141] libmachine: SSH cmd err, output: :
I0615 11:17:38.605482 16060 ubuntu.go:175] set auth options {CertDir:/Users/jedmeier/.minikube CaCertPath:/Users/jedmeier/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jedmeier/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jedmeier/.minikube/machines/server.pem ServerKeyPath:/Users/jedmeier/.minikube/machines/server-key.pem ClientKeyPath:/Users/jedmeier/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jedmeier/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jedmeier/.minikube}
I0615 11:17:38.605505 16060 ubuntu.go:177] setting up certificates
I0615 11:17:38.605513 16060 provision.go:83] configureAuth start
I0615 11:17:38.605628 16060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0615 11:17:38.652423 16060 provision.go:138] copyHostCerts
I0615 11:17:38.652517 16060 exec_runner.go:144] found /Users/jedmeier/.minikube/ca.pem, removing ...
I0615 11:17:38.652520 16060 exec_runner.go:207] rm: /Users/jedmeier/.minikube/ca.pem
I0615 11:17:38.652647 16060 exec_runner.go:151] cp: /Users/jedmeier/.minikube/certs/ca.pem --> /Users/jedmeier/.minikube/ca.pem (1082 bytes)
I0615 11:17:38.652853 16060 exec_runner.go:144] found /Users/jedmeier/.minikube/cert.pem, removing ...
I0615 11:17:38.652855 16060 exec_runner.go:207] rm: /Users/jedmeier/.minikube/cert.pem
I0615 11:17:38.652906 16060 exec_runner.go:151] cp: /Users/jedmeier/.minikube/certs/cert.pem --> /Users/jedmeier/.minikube/cert.pem (1127 bytes)
I0615 11:17:38.653026 16060 exec_runner.go:144] found /Users/jedmeier/.minikube/key.pem, removing ...
I0615 11:17:38.653028 16060 exec_runner.go:207] rm: /Users/jedmeier/.minikube/key.pem
I0615 11:17:38.653103 16060 exec_runner.go:151] cp: /Users/jedmeier/.minikube/certs/key.pem --> /Users/jedmeier/.minikube/key.pem (1679 bytes)
I0615 11:17:38.653636 16060 provision.go:112] generating server cert: /Users/jedmeier/.minikube/machines/server.pem ca-key=/Users/jedmeier/.minikube/certs/ca.pem private-key=/Users/jedmeier/.minikube/certs/ca-key.pem org=jedmeier.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0615 11:17:38.716965 16060 provision.go:172] copyRemoteCerts
I0615 11:17:38.717073 16060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0615 11:17:38.717122 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:38.759343 16060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54479 SSHKeyPath:/Users/jedmeier/.minikube/machines/minikube/id_rsa Username:docker}
I0615 11:17:38.860790 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0615 11:17:38.883487 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0615 11:17:38.904214 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0615 11:17:38.922708 16060 provision.go:86] duration metric: configureAuth took 317.191708ms
I0615 11:17:38.922716 16060 ubuntu.go:193] setting minikube options for container-runtime
I0615 11:17:38.922879 16060 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.26.3
I0615 11:17:38.923016 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:38.966099 16060 main.go:141] libmachine: Using SSH client type: native
I0615 11:17:38.966393 16060 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1052b1a20] 0x1052b4400 [] 0s} 127.0.0.1 54479 }
I0615 11:17:38.966405 16060 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0615 11:17:39.192780 16060 main.go:141] libmachine: SSH cmd err, output: :
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I0615 11:17:39.192798 16060 machine.go:91] provisioned docker machine in 949.310916ms
I0615 11:17:39.192807 16060 client.go:171] LocalClient.Create took 5.378454542s
I0615 11:17:39.192844 16060 start.go:167] duration metric: libmachine.API.Create for "minikube" took 5.3785115s
I0615 11:17:39.192851 16060 start.go:300] post-start starting for "minikube" (driver="docker")
I0615 11:17:39.192857 16060 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0615 11:17:39.193085 16060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0615 11:17:39.193581 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:39.237952 16060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54479 SSHKeyPath:/Users/jedmeier/.minikube/machines/minikube/id_rsa Username:docker}
I0615 11:17:39.333836 16060 ssh_runner.go:195] Run: cat /etc/os-release
I0615 11:17:39.337734 16060 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0615 11:17:39.337742 16060 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0615 11:17:39.337746 16060 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0615 11:17:39.337748 16060 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0615 11:17:39.337753 16060 filesync.go:126] Scanning /Users/jedmeier/.minikube/addons for local assets ...
I0615 11:17:39.337842 16060 filesync.go:126] Scanning /Users/jedmeier/.minikube/files for local assets ...
I0615 11:17:39.337873 16060 start.go:303] post-start completed in 145.02ms
I0615 11:17:39.338356 16060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0615 11:17:39.383278 16060 profile.go:148] Saving config to /Users/jedmeier/.minikube/profiles/minikube/config.json ...
I0615 11:17:39.383744 16060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0615 11:17:39.383797 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:39.427931 16060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54479 SSHKeyPath:/Users/jedmeier/.minikube/machines/minikube/id_rsa Username:docker}
I0615 11:17:39.520181 16060 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0615 11:17:39.525643 16060 start.go:128] duration metric: createHost completed in 5.721863875s
I0615 11:17:39.525656 16060 start.go:83] releasing machines lock for "minikube", held for 5.721972958s
I0615 11:17:39.525774 16060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0615 11:17:39.579163 16060 ssh_runner.go:195] Run: cat /version.json
I0615 11:17:39.579253 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:39.580058 16060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0615 11:17:39.580238 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:39.623437 16060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54479 SSHKeyPath:/Users/jedmeier/.minikube/machines/minikube/id_rsa Username:docker}
I0615 11:17:39.623585 16060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54479 SSHKeyPath:/Users/jedmeier/.minikube/machines/minikube/id_rsa Username:docker}
I0615 11:17:39.777796 16060 ssh_runner.go:195] Run: systemctl --version
I0615 11:17:39.783010 16060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0615 11:17:39.931850 16060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0615 11:17:39.938066 16060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0615 11:17:39.954260 16060 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
I0615 11:17:39.954566 16060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0615 11:17:39.970424 16060 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0615 11:17:39.970437 16060 start.go:481] detecting cgroup driver to use...
I0615 11:17:39.970729 16060 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0615 11:17:39.971142 16060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0615 11:17:39.986330 16060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0615 11:17:39.996669 16060 docker.go:193] disabling cri-docker service (if available) ...
I0615 11:17:39.996925 16060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0615 11:17:40.007799 16060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0615 11:17:40.018793 16060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0615 11:17:40.081368 16060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0615 11:17:40.143396 16060 docker.go:209] disabling docker service ...
I0615 11:17:40.143637 16060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0615 11:17:40.161275 16060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0615 11:17:40.172037 16060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0615 11:17:40.233715 16060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0615 11:17:40.364437 16060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0615 11:17:40.377594 16060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0615 11:17:40.391796 16060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
I0615 11:17:40.392047 16060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
I0615 11:17:40.401172 16060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0615 11:17:40.401432 16060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0615 11:17:40.410775 16060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0615 11:17:40.420303 16060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0615 11:17:40.429262 16060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0615 11:17:40.437798 16060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0615 11:17:40.445224 16060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0615 11:17:40.453699 16060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0615 11:17:40.499620 16060 ssh_runner.go:195] Run: sudo systemctl restart crio
I0615 11:17:40.569456 16060 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
I0615 11:17:40.569651 16060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0615 11:17:40.574015 16060 start.go:549] Will wait 60s for crictl version
I0615 11:17:40.574256 16060 ssh_runner.go:195] Run: which crictl
I0615 11:17:40.578627 16060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0615 11:17:40.603478 16060 start.go:565] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.24.4
RuntimeApiVersion: v1alpha2
I0615 11:17:40.603757 16060 ssh_runner.go:195] Run: crio --version
I0615 11:17:40.637313 16060 ssh_runner.go:195] Run: crio --version
I0615 11:17:40.690808 16060 out.go:177] 🎁 Preparing Kubernetes v1.26.3 on CRI-O 1.24.4 ...
I0615 11:17:40.697731 16060 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I0615 11:17:40.789104 16060 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0615 11:17:40.789326 16060 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0615 11:17:40.793861 16060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0615 11:17:40.804066 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0615 11:17:40.847521 16060 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime crio
I0615 11:17:40.847629 16060 ssh_runner.go:195] Run: sudo crictl images --output json
I0615 11:17:40.876720 16060 crio.go:501] all images are preloaded for cri-o runtime.
I0615 11:17:40.876726 16060 crio.go:420] Images already preloaded, skipping extraction
I0615 11:17:40.876852 16060 ssh_runner.go:195] Run: sudo crictl images --output json
I0615 11:17:40.900846 16060 crio.go:501] all images are preloaded for cri-o runtime.
I0615 11:17:40.900867 16060 cache_images.go:84] Images are preloaded, skipping loading
I0615 11:17:40.901046 16060 ssh_runner.go:195] Run: crio config
I0615 11:17:40.938946 16060 cni.go:84] Creating CNI manager for ""
I0615 11:17:40.938951 16060 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
I0615 11:17:40.938960 16060 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0615 11:17:40.938968 16060 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0615 11:17:40.939050 16060 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0615 11:17:40.940050 16060 kubeadm.go:968] kubelet [Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0615 11:17:40.940167 16060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0615 11:17:40.947759 16060 binaries.go:44] Found k8s binaries, skipping transfer
I0615 11:17:40.947865 16060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0615 11:17:40.955982 16060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
I0615 11:17:40.969158 16060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0615 11:17:40.982965 16060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2082 bytes)
I0615 11:17:40.996067 16060 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0615 11:17:41.000256 16060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0615 11:17:41.010957 16060 certs.go:56] Setting up /Users/jedmeier/.minikube/profiles/minikube for IP: 192.168.49.2
I0615 11:17:41.010985 16060 certs.go:186] acquiring lock for shared ca certs: {Name:mk9ee36038ec1e3347adbb40fa510a79e79ccea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:41.011432 16060 certs.go:195] skipping minikubeCA CA generation: /Users/jedmeier/.minikube/ca.key
I0615 11:17:41.012862 16060 certs.go:195] skipping proxyClientCA CA generation: /Users/jedmeier/.minikube/proxy-client-ca.key
I0615 11:17:41.013131 16060 certs.go:315] generating minikube-user signed cert: /Users/jedmeier/.minikube/profiles/minikube/client.key
I0615 11:17:41.013143 16060 crypto.go:68] Generating cert /Users/jedmeier/.minikube/profiles/minikube/client.crt with IP's: []
I0615 11:17:41.075224 16060 crypto.go:156] Writing cert to /Users/jedmeier/.minikube/profiles/minikube/client.crt ...
I0615 11:17:41.075233 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.minikube/profiles/minikube/client.crt: {Name:mka47cadf73511c0006cea467b2c72ed8c7781d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:41.076835 16060 crypto.go:164] Writing key to /Users/jedmeier/.minikube/profiles/minikube/client.key ...
I0615 11:17:41.076839 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.minikube/profiles/minikube/client.key: {Name:mk6c5aee7b479606a3b1d4e9e0ed0cd7c2e9a739 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:41.077015 16060 certs.go:315] generating minikube signed cert: /Users/jedmeier/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0615 11:17:41.077022 16060 crypto.go:68] Generating cert /Users/jedmeier/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0615 11:17:41.168586 16060 crypto.go:156] Writing cert to /Users/jedmeier/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0615 11:17:41.168595 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkbdfb05ccd7053b6f325f5b159705b6b0f59095 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:41.170069 16060 crypto.go:164] Writing key to /Users/jedmeier/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0615 11:17:41.170077 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkf68d5cda116e9b6071ecdd31224fd8d85ea4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:41.170239 16060 certs.go:333] copying /Users/jedmeier/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/jedmeier/.minikube/profiles/minikube/apiserver.crt
I0615 11:17:41.170358 16060 certs.go:337] copying /Users/jedmeier/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/jedmeier/.minikube/profiles/minikube/apiserver.key
I0615 11:17:41.170466 16060 certs.go:315] generating aggregator signed cert: /Users/jedmeier/.minikube/profiles/minikube/proxy-client.key
I0615 11:17:41.170472 16060 crypto.go:68] Generating cert /Users/jedmeier/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0615 11:17:41.289394 16060 crypto.go:156] Writing cert to /Users/jedmeier/.minikube/profiles/minikube/proxy-client.crt ...
I0615 11:17:41.289403 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.minikube/profiles/minikube/proxy-client.crt: {Name:mk4a62297377033cb8cbe653a6806a9158260d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:41.290909 16060 crypto.go:164] Writing key to /Users/jedmeier/.minikube/profiles/minikube/proxy-client.key ...
I0615 11:17:41.290912 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.minikube/profiles/minikube/proxy-client.key: {Name:mk215024b5a1b1acc5c4ebd51babb27a12d980f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:41.291735 16060 certs.go:401] found cert: /Users/jedmeier/.minikube/certs/Users/jedmeier/.minikube/certs/ca-key.pem (1675 bytes)
I0615 11:17:41.291761 16060 certs.go:401] found cert: /Users/jedmeier/.minikube/certs/Users/jedmeier/.minikube/certs/ca.pem (1082 bytes)
I0615 11:17:41.291780 16060 certs.go:401] found cert: /Users/jedmeier/.minikube/certs/Users/jedmeier/.minikube/certs/cert.pem (1127 bytes)
I0615 11:17:41.291797 16060 certs.go:401] found cert: /Users/jedmeier/.minikube/certs/Users/jedmeier/.minikube/certs/key.pem (1679 bytes)
I0615 11:17:41.292115 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0615 11:17:41.334044 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0615 11:17:41.355498 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0615 11:17:41.377594 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0615 11:17:41.396473 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0615 11:17:41.414556 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0615 11:17:41.432058 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0615 11:17:41.449866 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0615 11:17:41.467391 16060 ssh_runner.go:362] scp /Users/jedmeier/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0615 11:17:41.485200 16060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (740 bytes)
I0615 11:17:41.499217 16060 ssh_runner.go:195] Run: openssl version
I0615 11:17:41.505318 16060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0615 11:17:41.515367 16060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0615 11:17:41.520030 16060 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jun 17 2021 /usr/share/ca-certificates/minikubeCA.pem
I0615 11:17:41.520238 16060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0615 11:17:41.528416 16060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0615 11:17:41.537418 16060 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:10240 CPUs:8 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0615 11:17:41.537518 16060 cri.go:52] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0615 11:17:41.537740 16060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0615 11:17:41.565201 16060 cri.go:87] found id: ""
I0615 11:17:41.565485 16060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0615 11:17:41.575560 16060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0615 11:17:41.583342 16060 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0615 11:17:41.583618 16060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0615
11:17:41.590985 16060 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0615 11:17:41.591028 16060 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0615 11:17:41.637960 16060 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3 I0615 11:17:41.638019 16060 kubeadm.go:322] [preflight] Running pre-flight checks I0615 11:17:41.743889 16060 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster I0615 11:17:41.743992 16060 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection I0615 11:17:41.744086 16060 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0615 11:17:41.855822 16060 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0615 11:17:41.866135 16060 out.go:204] ▪ Generating certificates and keys ... 
I0615 11:17:41.866317 16060 kubeadm.go:322] [certs] Using existing ca certificate authority
I0615 11:17:41.866401 16060 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0615 11:17:42.017687 16060 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0615 11:17:42.098797 16060 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0615 11:17:42.231950 16060 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0615 11:17:42.322945 16060 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0615 11:17:42.392150 16060 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0615 11:17:42.392323 16060 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0615 11:17:42.594755 16060 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0615 11:17:42.594905 16060 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0615 11:17:42.755859 16060 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0615 11:17:42.811926 16060 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0615 11:17:42.972634 16060 kubeadm.go:322] [certs] Generating "sa" key and public key
I0615 11:17:42.973482 16060 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0615 11:17:43.049007 16060 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0615 11:17:43.218865 16060 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0615 11:17:43.536892 16060 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0615 11:17:43.626280 16060 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0615 11:17:43.631753 16060 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0615 11:17:43.632218 16060 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0615 11:17:43.632267 16060 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0615 11:17:43.681097 16060 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0615 11:17:43.690374 16060 out.go:204] ▪ Booting up control plane ...
I0615 11:17:43.690515 16060 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0615 11:17:43.690568 16060 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0615 11:17:43.690603 16060 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0615 11:17:43.690650 16060 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0615 11:17:43.690746 16060 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0615 11:17:47.686418 16060 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002606 seconds
I0615 11:17:47.686595 16060 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0615 11:17:47.695303 16060 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0615 11:17:48.227437 16060 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0615 11:17:48.227674 16060 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0615 11:17:48.735040 16060 kubeadm.go:322] [bootstrap-token] Using token: iwdj0d.zu1sq1f22khcdg9m
I0615 11:17:48.748010 16060 out.go:204] ▪ Configuring RBAC rules ...
I0615 11:17:48.748129 16060 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0615 11:17:48.748228 16060 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0615 11:17:48.751203 16060 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0615 11:17:48.756023 16060 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0615 11:17:48.756114 16060 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0615 11:17:48.757682 16060 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0615 11:17:48.763810 16060 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0615 11:17:48.891751 16060 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0615 11:17:49.149560 16060 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0615 11:17:49.149572 16060 kubeadm.go:322]
I0615 11:17:49.149638 16060 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0615 11:17:49.149642 16060 kubeadm.go:322]
I0615 11:17:49.150116 16060 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0615 11:17:49.150121 16060 kubeadm.go:322]
I0615 11:17:49.150161 16060 kubeadm.go:322] mkdir -p $HOME/.kube
I0615 11:17:49.150263 16060 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0615 11:17:49.150334 16060 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0615 11:17:49.150339 16060 kubeadm.go:322]
I0615 11:17:49.150426 16060 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0615 11:17:49.150432 16060 kubeadm.go:322]
I0615 11:17:49.150497 16060 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0615 11:17:49.150502 16060 kubeadm.go:322]
I0615 11:17:49.150576 16060 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0615 11:17:49.150690 16060 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0615 11:17:49.150785 16060 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0615 11:17:49.150790 16060 kubeadm.go:322]
I0615 11:17:49.150920 16060 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0615 11:17:49.151069 16060 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0615 11:17:49.151077 16060 kubeadm.go:322]
I0615 11:17:49.151195 16060 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token iwdj0d.zu1sq1f22khcdg9m \
I0615 11:17:49.151306 16060 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:6f7644d8c563d26ce2a0e287b5b3d099cf7cfa1ad2e38b3ba3246b1f263d8130 \
I0615 11:17:49.151327 16060 kubeadm.go:322] --control-plane
I0615 11:17:49.151330 16060 kubeadm.go:322]
I0615 11:17:49.151434 16060 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0615 11:17:49.151441 16060 kubeadm.go:322]
I0615 11:17:49.151497 16060 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token iwdj0d.zu1sq1f22khcdg9m \
I0615 11:17:49.151560 16060 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:6f7644d8c563d26ce2a0e287b5b3d099cf7cfa1ad2e38b3ba3246b1f263d8130
I0615 11:17:49.154961 16060 kubeadm.go:322] W0615 09:17:41.632002 749 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
I0615 11:17:49.155101 16060 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0615 11:17:49.155235 16060 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0615 11:17:49.155442 16060 kubeadm.go:322] W0615 09:17:43.631619 749 kubelet.go:63] [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
I0615 11:17:49.155460 16060 cni.go:84] Creating CNI manager for ""
I0615 11:17:49.155470 16060 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
I0615 11:17:49.165599 16060 out.go:177] 🔗 Configuring CNI (Container Networking Interface) ...
I0615 11:17:49.170707 16060 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0615 11:17:49.175814 16060 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.3/kubectl ...
I0615 11:17:49.175826 16060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0615 11:17:49.190880 16060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0615 11:17:49.604165 16060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0615 11:17:49.607374 16060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0615 11:17:49.607733 16060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=08896fd1dc362c097c925146c4a0d0dac715ace0 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_06_15T11_17_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0615 11:17:49.663694 16060 ops.go:34] apiserver oom_adj: -16
I0615 11:17:49.663716 16060 kubeadm.go:1073] duration metric: took 56.629292ms to wait for elevateKubeSystemPrivileges.
I0615 11:17:49.667454 16060 kubeadm.go:403] StartCluster complete in 8.130093792s
I0615 11:17:49.667481 16060 settings.go:142] acquiring lock: {Name:mk8530f7c7875286fed27d4acaa34a28bbd9add5 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:49.667790 16060 settings.go:150] Updating kubeconfig: /Users/jedmeier/.kube/config
I0615 11:17:49.670432 16060 lock.go:35] WriteFile acquiring /Users/jedmeier/.kube/config: {Name:mk2952b8aabe51ece99b1ca9db588cbc27be5854 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 11:17:49.670683 16060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0615 11:17:49.670733 16060 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0615 11:17:49.670765 16060 addons.go:66] Setting storage-provisioner=true in profile "minikube"
I0615 11:17:49.670768 16060 addons.go:66] Setting default-storageclass=true in profile "minikube"
I0615 11:17:49.670778 16060 addons.go:228] Setting addon storage-provisioner=true in "minikube"
I0615 11:17:49.670779 16060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0615 11:17:49.670804 16060 host.go:66] Checking if "minikube" exists ...
I0615 11:17:49.670812 16060 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.26.3
I0615 11:17:49.671033 16060 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0615 11:17:49.671097 16060 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0615 11:17:49.725932 16060 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0615 11:17:49.722175 16060 addons.go:228] Setting addon default-storageclass=true in "minikube"
I0615 11:17:49.728327 16060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0615 11:17:49.731049 16060 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0615 11:17:49.731054 16060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0615 11:17:49.731079 16060 host.go:66] Checking if "minikube" exists ...
I0615 11:17:49.731181 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:49.732201 16060 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0615 11:17:49.781083 16060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54479 SSHKeyPath:/Users/jedmeier/.minikube/machines/minikube/id_rsa Username:docker}
I0615 11:17:49.781293 16060 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0615 11:17:49.781301 16060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0615 11:17:49.781417 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0615 11:17:49.827224 16060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54479 SSHKeyPath:/Users/jedmeier/.minikube/machines/minikube/id_rsa Username:docker}
I0615 11:17:49.866151 16060 start.go:916] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
I0615 11:17:49.880869 16060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0615 11:17:49.925130 16060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0615 11:17:50.052113 16060 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0615 11:17:50.058583 16060 addons.go:499] enable addons completed in 387.843083ms: enabled=[storage-provisioner default-storageclass]
I0615 11:17:50.188203 16060 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0615 11:17:50.188234 16060 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I0615 11:17:50.194243 16060 out.go:177] 🔎 Verifying Kubernetes components...
I0615 11:17:50.203573 16060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0615 11:17:50.218498 16060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0615 11:17:50.269411 16060 api_server.go:51] waiting for apiserver process to appear ...
I0615 11:17:50.269512 16060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0615 11:17:50.279068 16060 api_server.go:71] duration metric: took 90.811292ms to wait for apiserver process to appear ...
I0615 11:17:50.279076 16060 api_server.go:87] waiting for apiserver healthz status ...
I0615 11:17:50.279085 16060 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54483/healthz ...
I0615 11:17:50.283330 16060 api_server.go:278] https://127.0.0.1:54483/healthz returned 200: ok
I0615 11:17:50.284205 16060 api_server.go:140] control plane version: v1.26.3
I0615 11:17:50.284210 16060 api_server.go:130] duration metric: took 5.1325ms to wait for apiserver health ...
I0615 11:17:50.284213 16060 system_pods.go:43] waiting for kube-system pods to appear ...
I0615 11:17:50.287396 16060 system_pods.go:59] 5 kube-system pods found
I0615 11:17:50.287403 16060 system_pods.go:61] "etcd-minikube" [060e505e-f08b-496e-b65b-fe9110b09833] Pending
I0615 11:17:50.287405 16060 system_pods.go:61] "kube-apiserver-minikube" [239d92ae-01b6-4729-819d-222c7aed31c8] Pending
I0615 11:17:50.287408 16060 system_pods.go:61] "kube-controller-manager-minikube" [bf8879d8-7275-493a-aafe-d4f71e3ca029] Pending
I0615 11:17:50.287409 16060 system_pods.go:61] "kube-scheduler-minikube" [4e63bedc-bf33-4b2b-9d94-85289583b7a8] Pending
I0615 11:17:50.287411 16060 system_pods.go:61] "storage-provisioner" [03377a7c-6c12-4330-833a-e8f27dc4e21c] Pending
I0615 11:17:50.287413 16060 system_pods.go:74] duration metric: took 3.197625ms to wait for pod list to return data ...
I0615 11:17:50.287417 16060 kubeadm.go:578] duration metric: took 99.164459ms to wait for : map[apiserver:true system_pods:true] ...
I0615 11:17:50.287423 16060 node_conditions.go:102] verifying NodePressure condition ...
I0615 11:17:50.289396 16060 node_conditions.go:122] node storage ephemeral capacity is 123727180Ki
I0615 11:17:50.289403 16060 node_conditions.go:123] node cpu capacity is 8
I0615 11:17:50.289411 16060 node_conditions.go:105] duration metric: took 1.985709ms to run NodePressure ...
I0615 11:17:50.289415 16060 start.go:228] waiting for startup goroutines ...
I0615 11:17:50.289418 16060 start.go:233] waiting for cluster config update ...
I0615 11:17:50.289428 16060 start.go:242] writing updated cluster config ...
I0615 11:17:50.289763 16060 ssh_runner.go:195] Run: rm -f paused
I0615 11:17:50.401796 16060 start.go:568] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0615 11:17:50.407789 16060 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> CRI-O <==
* -- Logs begin at Thu 2023-06-15 09:17:37 UTC, end at Thu 2023-06-15 09:20:03 UTC. --
Jun 15 09:19:00 minikube crio[532]: time="2023-06-15 09:19:00.081929168Z" level=info msg="createCtr: deleting container ID c9702c692f4f6b6149ad7fa9c93835280ee9fb0a4844dff23778fff0a3f0098b from idIndex" id=8f25bae6-12a8-4d90-859b-752dcc7a4ee1 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:11 minikube crio[532]: time="2023-06-15 09:19:11.403878882Z" level=info msg="Removing container: b90ed7a3caf6dc782569dc62315b0d077fa6a108af58b43b0eb9bdb3e8ad1d5b" id=24228990-da33-4d2c-8e36-a856c14775ab name=/runtime.v1.RuntimeService/RemoveContainer
Jun 15 09:19:11 minikube crio[532]: time="2023-06-15 09:19:11.432145548Z" level=info msg="Removed container b90ed7a3caf6dc782569dc62315b0d077fa6a108af58b43b0eb9bdb3e8ad1d5b: kube-system/kindnet-g56ps/kindnet-cni" id=24228990-da33-4d2c-8e36-a856c14775ab name=/runtime.v1.RuntimeService/RemoveContainer
Jun 15 09:19:15 minikube crio[532]: time="2023-06-15 09:19:15.966391801Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=3c091438-645b-4a0a-b4a9-645dec9776a1 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:15 minikube crio[532]: time="2023-06-15 09:19:15.967299884Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3c091438-645b-4a0a-b4a9-645dec9776a1 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:15 minikube crio[532]: time="2023-06-15 09:19:15.970272467Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=c1a3ea3d-c6fa-458a-b722-72065ea89136 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:15 minikube crio[532]: time="2023-06-15 09:19:15.970577676Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c1a3ea3d-c6fa-458a-b722-72065ea89136 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:15 minikube crio[532]: time="2023-06-15 09:19:15.973280301Z" level=info msg="Creating container: kube-system/kube-proxy-5pg9r/kube-proxy" id=77965401-9ec1-4632-9e5a-30aac1d04550 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:15 minikube crio[532]: time="2023-06-15 09:19:15.975311509Z" level=warning msg="Allowed annotations are specified for workload []"
Jun 15 09:19:16 minikube crio[532]: time="2023-06-15 09:19:16.048500842Z" level=error msg="Container creation error: time=\"2023-06-15T09:19:16Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" id=77965401-9ec1-4632-9e5a-30aac1d04550 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:16 minikube crio[532]: time="2023-06-15 09:19:16.059476926Z" level=info msg="createCtr: deleting container ID cd76750c86d9dab384b2bd98031415535a3259a1adefbc114b8129964c02e613 from idIndex" id=77965401-9ec1-4632-9e5a-30aac1d04550 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:16 minikube crio[532]: time="2023-06-15 09:19:16.059514801Z" level=info msg="createCtr: deleting container ID cd76750c86d9dab384b2bd98031415535a3259a1adefbc114b8129964c02e613 from idIndex" id=77965401-9ec1-4632-9e5a-30aac1d04550 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:16 minikube crio[532]: time="2023-06-15 09:19:16.059531926Z" level=info msg="createCtr: deleting container ID cd76750c86d9dab384b2bd98031415535a3259a1adefbc114b8129964c02e613 from idIndex" id=77965401-9ec1-4632-9e5a-30aac1d04550 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:16 minikube crio[532]: time="2023-06-15 09:19:16.062708051Z" level=info msg="createCtr: deleting container ID cd76750c86d9dab384b2bd98031415535a3259a1adefbc114b8129964c02e613 from idIndex" id=77965401-9ec1-4632-9e5a-30aac1d04550 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.969014304Z" level=info msg="Checking image status: kindest/kindnetd:v20230330-48f316cd" id=c7d840b0-7d90-4875-83ca-629509c41cdb name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.969310138Z" level=info msg="Resolving \"kindest/kindnetd\" using unqualified-search registries (/etc/containers/registries.conf)"
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.969886679Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:43ef1c5209cd90af8def81fd342b60001709c906ed45d8fde3ba56fd824eef0b,RepoTags:[docker.io/kindest/kindnetd:v20230330-48f316cd],RepoDigests:[docker.io/kindest/kindnetd@sha256:5149f27d2a55574f79d4f1535ca03c4afaada8e9ba6c3c699788ab0362d9f7ae docker.io/kindest/kindnetd@sha256:c19d6362a6a928139820761475a38c24c0cf84d507b9ddf414a078cf627497af],Size_:60481377,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c7d840b0-7d90-4875-83ca-629509c41cdb name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.973326554Z" level=info msg="Checking image status: kindest/kindnetd:v20230330-48f316cd" id=f297cbcb-422e-4d88-8108-43623101571c name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.973693346Z" level=info msg="Resolving \"kindest/kindnetd\" using unqualified-search registries (/etc/containers/registries.conf)"
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.973916388Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:43ef1c5209cd90af8def81fd342b60001709c906ed45d8fde3ba56fd824eef0b,RepoTags:[docker.io/kindest/kindnetd:v20230330-48f316cd],RepoDigests:[docker.io/kindest/kindnetd@sha256:5149f27d2a55574f79d4f1535ca03c4afaada8e9ba6c3c699788ab0362d9f7ae docker.io/kindest/kindnetd@sha256:c19d6362a6a928139820761475a38c24c0cf84d507b9ddf414a078cf627497af],Size_:60481377,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f297cbcb-422e-4d88-8108-43623101571c name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.979084846Z" level=info msg="Creating container: kube-system/kindnet-g56ps/kindnet-cni" id=636ae13f-bd97-4910-860d-2fe00681cba2 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:23 minikube crio[532]: time="2023-06-15 09:19:23.980391013Z" level=warning msg="Allowed annotations are specified for workload []"
Jun 15 09:19:24 minikube crio[532]: time="2023-06-15 09:19:24.086082763Z" level=info msg="Created container eb0aa1b8e04143faf8f8ca170f215f465f760b015c1e9f50ba70b8d25cba0d70: kube-system/kindnet-g56ps/kindnet-cni" id=636ae13f-bd97-4910-860d-2fe00681cba2 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:24 minikube crio[532]: time="2023-06-15 09:19:24.086571971Z" level=info msg="Starting container: eb0aa1b8e04143faf8f8ca170f215f465f760b015c1e9f50ba70b8d25cba0d70" id=83468cce-a421-448a-be61-8ca53b76eeed name=/runtime.v1.RuntimeService/StartContainer
Jun 15 09:19:24 minikube crio[532]: time="2023-06-15 09:19:24.105661888Z" level=info msg="Started container" PID=1947 containerID=eb0aa1b8e04143faf8f8ca170f215f465f760b015c1e9f50ba70b8d25cba0d70 description=kube-system/kindnet-g56ps/kindnet-cni id=83468cce-a421-448a-be61-8ca53b76eeed name=/runtime.v1.RuntimeService/StartContainer sandboxID=1175864cfedcd67fd144871df8a0ea66dbf59544a79757f11ff2a16536e4e02c
Jun 15 09:19:29 minikube crio[532]: time="2023-06-15 09:19:29.969617168Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=c331ea07-a127-4b8e-a3d2-7bcafb885258 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:29 minikube crio[532]: time="2023-06-15 09:19:29.971974210Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c331ea07-a127-4b8e-a3d2-7bcafb885258 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:29 minikube crio[532]: time="2023-06-15 09:19:29.975215293Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=a73a910f-e461-4464-ab33-e12faeb01512 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:29 minikube crio[532]: time="2023-06-15 09:19:29.975692710Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a73a910f-e461-4464-ab33-e12faeb01512 name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:29 minikube crio[532]: time="2023-06-15 09:19:29.982160876Z" level=info msg="Creating container: kube-system/kube-proxy-5pg9r/kube-proxy" id=18244938-42c4-4026-8242-b75936b7a390 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:29 minikube crio[532]: time="2023-06-15 09:19:29.982317043Z" level=warning msg="Allowed annotations are specified for workload []"
Jun 15 09:19:30 minikube crio[532]: time="2023-06-15 09:19:30.072973001Z" level=error msg="Container creation error: time=\"2023-06-15T09:19:30Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" id=18244938-42c4-4026-8242-b75936b7a390 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:30 minikube crio[532]: time="2023-06-15 09:19:30.085621335Z" level=info msg="createCtr: deleting container ID 467d3ffd880c9ced6cb90956a89d78df3fa671447e7a07418cab4f2fa32313f2 from idIndex" id=18244938-42c4-4026-8242-b75936b7a390 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:30 minikube crio[532]: time="2023-06-15 09:19:30.085655543Z" level=info msg="createCtr: deleting container ID 467d3ffd880c9ced6cb90956a89d78df3fa671447e7a07418cab4f2fa32313f2 from idIndex" id=18244938-42c4-4026-8242-b75936b7a390 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:30 minikube crio[532]: time="2023-06-15 09:19:30.085668793Z" level=info msg="createCtr: deleting container ID 467d3ffd880c9ced6cb90956a89d78df3fa671447e7a07418cab4f2fa32313f2 from idIndex" id=18244938-42c4-4026-8242-b75936b7a390 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:30 minikube crio[532]: time="2023-06-15 09:19:30.088441668Z" level=info msg="createCtr: deleting container ID 467d3ffd880c9ced6cb90956a89d78df3fa671447e7a07418cab4f2fa32313f2 from idIndex" id=18244938-42c4-4026-8242-b75936b7a390 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:40 minikube crio[532]: time="2023-06-15 09:19:40.969120257Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=f7851f43-f120-458c-94c0-3abb88b12d4e name=/runtime.v1.ImageService/ImageStatus
Jun 15 09:19:40 minikube crio[532]: time="2023-06-15 09:19:40.969446215Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f7851f43-f120-458c-94c0-3abb88b12d4e name=/runtime.v1.ImageService/ImageStatus Jun 15 09:19:40 minikube crio[532]: time="2023-06-15 09:19:40.971936715Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=a38bd928-b8ea-47fa-95ea-a8053b633220 name=/runtime.v1.ImageService/ImageStatus Jun 15 09:19:40 minikube crio[532]: time="2023-06-15 09:19:40.972901632Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a38bd928-b8ea-47fa-95ea-a8053b633220 name=/runtime.v1.ImageService/ImageStatus Jun 15 09:19:40 minikube crio[532]: time="2023-06-15 09:19:40.975009132Z" level=info msg="Creating container: kube-system/kube-proxy-5pg9r/kube-proxy" id=01bb4f9e-1303-4a49-9432-03ca64c4c58d name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:40 minikube crio[532]: time="2023-06-15 09:19:40.975302173Z" level=warning msg="Allowed annotations are specified for workload []" Jun 15 09:19:41 minikube crio[532]: time="2023-06-15 09:19:41.064341048Z" level=error msg="Container creation error: time=\"2023-06-15T09:19:41Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not 
permitted\"\n" id=01bb4f9e-1303-4a49-9432-03ca64c4c58d name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:41 minikube crio[532]: time="2023-06-15 09:19:41.076550965Z" level=info msg="createCtr: deleting container ID 35b31ce8d1d6f36fefd2a36771ac8b57251d721c08772e9d3b40a0c927825c23 from idIndex" id=01bb4f9e-1303-4a49-9432-03ca64c4c58d name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:41 minikube crio[532]: time="2023-06-15 09:19:41.076589465Z" level=info msg="createCtr: deleting container ID 35b31ce8d1d6f36fefd2a36771ac8b57251d721c08772e9d3b40a0c927825c23 from idIndex" id=01bb4f9e-1303-4a49-9432-03ca64c4c58d name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:41 minikube crio[532]: time="2023-06-15 09:19:41.076599965Z" level=info msg="createCtr: deleting container ID 35b31ce8d1d6f36fefd2a36771ac8b57251d721c08772e9d3b40a0c927825c23 from idIndex" id=01bb4f9e-1303-4a49-9432-03ca64c4c58d name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:41 minikube crio[532]: time="2023-06-15 09:19:41.079704965Z" level=info msg="createCtr: deleting container ID 35b31ce8d1d6f36fefd2a36771ac8b57251d721c08772e9d3b40a0c927825c23 from idIndex" id=01bb4f9e-1303-4a49-9432-03ca64c4c58d name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:51 minikube crio[532]: time="2023-06-15 09:19:51.969332470Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=717b2d51-8795-4dca-853e-b953b2079b1c name=/runtime.v1.ImageService/ImageStatus Jun 15 09:19:51 minikube crio[532]: time="2023-06-15 09:19:51.973861845Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 
registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=717b2d51-8795-4dca-853e-b953b2079b1c name=/runtime.v1.ImageService/ImageStatus Jun 15 09:19:51 minikube crio[532]: time="2023-06-15 09:19:51.978953178Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.26.3" id=445c5ffe-9afe-49c1-8ef6-41dba5817fc9 name=/runtime.v1.ImageService/ImageStatus Jun 15 09:19:51 minikube crio[532]: time="2023-06-15 09:19:51.979332845Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c859f97be11acc1b39835c42508c82f74a8352edc1d93fc07e3f605bb1c74a24,RepoTags:[registry.k8s.io/kube-proxy:v1.26.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:0814fd02ea64e0f5ab6e0313fc28652a0e50ea2353456b44ab18c654fb508e51 registry.k8s.io/kube-proxy@sha256:d89b6c6a8ecc920753df713b268b0d226f795135c4a0ecc5ce61660e623dd6da],Size_:63446245,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=445c5ffe-9afe-49c1-8ef6-41dba5817fc9 name=/runtime.v1.ImageService/ImageStatus Jun 15 09:19:51 minikube crio[532]: time="2023-06-15 09:19:51.980588262Z" level=info msg="Creating container: kube-system/kube-proxy-5pg9r/kube-proxy" id=b4632151-d800-4065-9c02-5b6587a1de07 name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:51 minikube crio[532]: time="2023-06-15 09:19:51.980800053Z" level=warning msg="Allowed annotations are specified for workload []" Jun 15 09:19:52 minikube crio[532]: time="2023-06-15 09:19:52.065591304Z" level=error msg="Container creation error: time=\"2023-06-15T09:19:52Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" id=b4632151-d800-4065-9c02-5b6587a1de07 name=/runtime.v1.RuntimeService/CreateContainer Jun 15 09:19:52 minikube crio[532]: time="2023-06-15 09:19:52.076904429Z" level=info msg="createCtr: deleting container ID 
87d94d36a671c6261597f5d1f246358071b497df4e49ab620f46017ca7d2aef2 from idIndex" id=b4632151-d800-4065-9c02-5b6587a1de07 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:52 minikube crio[532]: time="2023-06-15 09:19:52.076941970Z" level=info msg="createCtr: deleting container ID 87d94d36a671c6261597f5d1f246358071b497df4e49ab620f46017ca7d2aef2 from idIndex" id=b4632151-d800-4065-9c02-5b6587a1de07 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:52 minikube crio[532]: time="2023-06-15 09:19:52.076953054Z" level=info msg="createCtr: deleting container ID 87d94d36a671c6261597f5d1f246358071b497df4e49ab620f46017ca7d2aef2 from idIndex" id=b4632151-d800-4065-9c02-5b6587a1de07 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:52 minikube crio[532]: time="2023-06-15 09:19:52.079999762Z" level=info msg="createCtr: deleting container ID 87d94d36a671c6261597f5d1f246358071b497df4e49ab620f46017ca7d2aef2 from idIndex" id=b4632151-d800-4065-9c02-5b6587a1de07 name=/runtime.v1.RuntimeService/CreateContainer
Jun 15 09:19:55 minikube crio[532]: time="2023-06-15 09:19:55.675799764Z" level=info msg="Removing container: 1ff014fb8f71dbb86fad35919b82b93d6efabc57b1cf6ad7dd6b0a0dea8c586e" id=a4671dd3-6f71-4840-866c-a6e8dcbd14eb name=/runtime.v1.RuntimeService/RemoveContainer
Jun 15 09:19:55 minikube crio[532]: time="2023-06-15 09:19:55.707390680Z" level=info msg="Removed container 1ff014fb8f71dbb86fad35919b82b93d6efabc57b1cf6ad7dd6b0a0dea8c586e: kube-system/kindnet-g56ps/kindnet-cni" id=a4671dd3-6f71-4840-866c-a6e8dcbd14eb name=/runtime.v1.RuntimeService/RemoveContainer

*
* ==> container status <==
*
CONTAINER      IMAGE                                                             CREATED         STATE    NAME                     ATTEMPT  POD ID
eb0aa1b8e0414  43ef1c5209cd90af8def81fd342b60001709c906ed45d8fde3ba56fd824eef0b  39 seconds ago  Exited   kindnet-cni              2        1175864cfedcd
bd35f08db87d3  fa167119f9a55e258bd7fae3b27525c5f0a6a41cbb1992dc8f300b4936cc8876  2 minutes ago   Running  kube-scheduler           0        2e2ba9a0c1818
ba16aa1b0d8ae  3b6ac91ff8d39cc54735bbc7a3beaf777c6213ac5edad185c281145102ce479b  2 minutes ago   Running  kube-controller-manager  0        eb63b498ea6eb
0fb8d0f481438  3f1ae10c5c85dc611809282b774bb6c8637bc02b40a202e1f110575b2a2df5a2  2 minutes ago   Running  kube-apiserver           0        4b171d0587db9
e8cd86e14521c  ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb  2 minutes ago   Running  etcd                     0        4a4165fd5ce19

*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=08896fd1dc362c097c925146c4a0d0dac715ace0
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2023_06_15T11_17_49_0700
                    minikube.k8s.io/version=v1.30.1
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 15 Jun 2023 09:17:46 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Thu, 15 Jun 2023 09:20:01 +0000
Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----            ------  -----------------                ------------------               ------                      -------
  MemoryPressure  False   Thu, 15 Jun 2023 09:18:19 +0000  Thu, 15 Jun 2023 09:17:44 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Thu, 15 Jun 2023 09:18:19 +0000  Thu, 15 Jun 2023 09:17:44 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Thu, 15 Jun 2023 09:18:19 +0000  Thu, 15 Jun 2023 09:17:44 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           False   Thu, 15 Jun 2023 09:18:19 +0000  Thu, 15 Jun 2023 09:17:44 +0000  KubeletNotReady             container runtime network not ready: NetworkReady=false
reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  123727180Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             12250056Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  123727180Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             12250056Ki
  pods:               110
System Info:
  Machine ID:                 61419f744ec9452499a59356fc030992
  System UUID:                61419f744ec9452499a59356fc030992
  Boot ID:                    fb36f2e7-e3fd-4ea7-8034-17a3d14428d1
  Kernel Version:             5.15.49-linuxkit
  OS Image:                   Ubuntu 20.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  cri-o://1.24.4
  Kubelet Version:            v1.26.3
  Kube-Proxy Version:         v1.26.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (6 in total)
  Namespace    Name                              CPU Requests         CPU Limits           Memory Requests       Memory Limits        Age
  ---------    ----                              ------------         ----------           ---------------       -------------        ---
  kube-system  etcd-minikube                     100m (1%!)(MISSING)  0 (0%!)(MISSING)     100Mi (0%!)(MISSING)  0 (0%!)(MISSING)     2m14s
  kube-system  kindnet-g56ps                     100m (1%!)(MISSING)  100m (1%!)(MISSING)  50Mi (0%!)(MISSING)   50Mi (0%!)(MISSING)  2m1s
  kube-system  kube-apiserver-minikube           250m (3%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)     2m16s
  kube-system  kube-controller-manager-minikube  200m (2%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)     2m16s
  kube-system  kube-proxy-5pg9r                  0 (0%!)(MISSING)     0 (0%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)     2m1s
  kube-system  kube-scheduler-minikube           100m (1%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)     2m14s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (9%!)(MISSING)   100m (1%!)(MISSING)
  memory             150Mi (1%!)(MISSING)  50Mi (0%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age    From             Message
  ----    ------                   ----   ----             -------
  Normal  Starting                 2m15s  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  2m14s  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m14s  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m14s  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  RegisteredNode           2m1s   node-controller  Node minikube event: Registered Node minikube in Controller

*
* ==> dmesg <==
*
[ +0.000002] handle_mm_fault+0x148/0x19c
[ +0.000001] do_page_fault+0x2dc/0x3fc
[ +0.000002] do_translation_fault+0x5c/0x80
[ +0.000001] do_mem_abort+0x5c/0xc4
[ +0.000001] el0_da+0x2c/0x58
[ +0.000001] el0t_64_sync_handler+0x150/0x1c4
[ +0.000001] el0t_64_sync+0x1a0/0x1a4
[ +0.000041] Memory cgroup out of memory: Killed process 3711719 (stress-ng) total-vm:72512kB, anon-rss:41656kB, file-rss:1100kB, shmem-rss:12kB, UID:0 pgtables:144kB oom_score_adj:1000
[ +0.042074] stress-ng invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=1000
[ +0.000007] CPU: 2 PID: 3711720 Comm: stress-ng Tainted: G O T 5.15.49-linuxkit #1
[ +0.000002] Call trace:
[ +0.000001] dump_backtrace+0x0/0x1b8
[ +0.000003] show_stack+0x34/0x44
[ +0.000002] dump_stack_lvl+0x68/0x84
[ +0.000002] dump_stack+0x18/0x34
[ +0.000001] dump_header+0x4c/0x1f8
[ +0.000001] oom_kill_process+0xac/0x170
[ +0.000003] out_of_memory+0x28c/0x2bc
[ +0.000001] mem_cgroup_out_of_memory+0x8c/0xd0
[ +0.000002] try_charge_memcg+0x434/0x4d4
[ +0.000002] try_charge+0x40/0x5c
[ +0.000001] charge_memcg+0x44/0x8c
[
+0.000002] __mem_cgroup_charge+0x48/0x6c
[ +0.000001] mem_cgroup_charge.constprop.0+0x34/0x4c
[ +0.000002] __handle_mm_fault+0x394/0x818
[ +0.000002] handle_mm_fault+0x148/0x19c
[ +0.000001] do_page_fault+0x2dc/0x3fc
[ +0.000002] do_translation_fault+0x5c/0x80
[ +0.000001] do_mem_abort+0x5c/0xc4
[ +0.000001] el0_da+0x2c/0x58
[ +0.000001] el0t_64_sync_handler+0x150/0x1c4
[ +0.000001] el0t_64_sync+0x1a0/0x1a4
[ +0.000044] Memory cgroup out of memory: Killed process 3711720 (stress-ng) total-vm:72512kB, anon-rss:41656kB, file-rss:1100kB, shmem-rss:12kB, UID:0 pgtables:144kB oom_score_adj:1000
[ +0.043430] stress-ng invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=1000
[ +0.000006] CPU: 4 PID: 3711721 Comm: stress-ng Tainted: G O T 5.15.49-linuxkit #1
[ +0.000003] Call trace:
[ +0.000001] dump_backtrace+0x0/0x1b8
[ +0.000003] show_stack+0x34/0x44
[ +0.000001] dump_stack_lvl+0x68/0x84
[ +0.000002] dump_stack+0x18/0x34
[ +0.000001] dump_header+0x4c/0x1f8
[ +0.000001] oom_kill_process+0xac/0x170
[ +0.000002] out_of_memory+0x28c/0x2bc
[ +0.000002] mem_cgroup_out_of_memory+0x8c/0xd0
[ +0.000002] try_charge_memcg+0x434/0x4d4
[ +0.000001] try_charge+0x40/0x5c
[ +0.000001] charge_memcg+0x44/0x8c
[ +0.000002] __mem_cgroup_charge+0x48/0x6c
[ +0.000001] mem_cgroup_charge.constprop.0+0x34/0x4c
[ +0.000002] __handle_mm_fault+0x394/0x818
[ +0.000001] handle_mm_fault+0x148/0x19c
[ +0.000002] do_page_fault+0x2dc/0x3fc
[ +0.000001] do_translation_fault+0x5c/0x80
[ +0.000001] do_mem_abort+0x5c/0xc4
[ +0.000001] el0_da+0x2c/0x58
[ +0.000001] el0t_64_sync_handler+0x150/0x1c4
[ +0.000001] el0t_64_sync+0x1a0/0x1a4
[ +0.000037] Memory cgroup out of memory: Killed process 3711721 (stress-ng) total-vm:72512kB, anon-rss:41656kB, file-rss:1100kB, shmem-rss:12kB, UID:0 pgtables:144kB oom_score_adj:1000
[ +0.043057] Memory cgroup out of memory: Killed process 3711722 (stress-ng) total-vm:72512kB, anon-rss:41656kB, file-rss:1100kB, shmem-rss:12kB, UID:0 pgtables:144kB oom_score_adj:1000
[ +0.042566] Memory cgroup out of memory: Killed process 3711725 (stress-ng) total-vm:72512kB, anon-rss:41656kB, file-rss:1100kB, shmem-rss:12kB, UID:0 pgtables:144kB oom_score_adj:1000

*
* ==> etcd [e8cd86e14521c099be743583f86d2d9cbc9d72506b2542d0806d1016402c6648] <==
*
{"level":"warn","ts":"2023-06-15T09:17:44.650Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_UNSUPPORTED_ARCH=arm64"}
{"level":"info","ts":"2023-06-15T09:17:44.650Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2023-06-15T09:17:44.650Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2023-06-15T09:17:44.650Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
","cipher-suites":[]} {"level":"info","ts":"2023-06-15T09:17:44.650Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]} {"level":"info","ts":"2023-06-15T09:17:44.651Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.17.13","go-os":"linux","go-arch":"arm64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-06-15T09:17:44.653Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.076458ms"} {"level":"info","ts":"2023-06-15T09:17:44.655Z","caller":"etcdserver/raft.go:494","msg":"starting local 
member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"} {"level":"info","ts":"2023-06-15T09:17:44.655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"} {"level":"info","ts":"2023-06-15T09:17:44.655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"} {"level":"info","ts":"2023-06-15T09:17:44.655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-06-15T09:17:44.655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"} {"level":"info","ts":"2023-06-15T09:17:44.655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"warn","ts":"2023-06-15T09:17:44.657Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-06-15T09:17:44.658Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-06-15T09:17:44.658Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-06-15T09:17:44.659Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.6","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-06-15T09:17:44.659Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2023-06-15T09:17:44.659Z","caller":"fileutil/purge.go:44","msg":"started to purge 
file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2023-06-15T09:17:44.659Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2023-06-15T09:17:44.659Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2023-06-15T09:17:44.660Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"info","ts":"2023-06-15T09:17:44.660Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2023-06-15T09:17:44.660Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2023-06-15T09:17:44.660Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2023-06-15T09:17:44.660Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"} {"level":"info","ts":"2023-06-15T09:17:44.661Z","caller":"embed/etcd.go:275","msg":"now serving 
peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2023-06-15T09:17:44.661Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2023-06-15T09:17:45.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"} {"level":"info","ts":"2023-06-15T09:17:45.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"} {"level":"info","ts":"2023-06-15T09:17:45.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"} {"level":"info","ts":"2023-06-15T09:17:45.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"} {"level":"info","ts":"2023-06-15T09:17:45.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"} {"level":"info","ts":"2023-06-15T09:17:45.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"} {"level":"info","ts":"2023-06-15T09:17:45.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"} {"level":"info","ts":"2023-06-15T09:17:45.460Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube 
ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2023-06-15T09:17:45.460Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-06-15T09:17:45.460Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-06-15T09:17:45.460Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-06-15T09:17:45.460Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-06-15T09:17:45.460Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-06-15T09:17:45.461Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2023-06-15T09:17:45.461Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-06-15T09:17:45.461Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-06-15T09:17:45.461Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2023-06-15T09:17:45.461Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}

*
* ==> kernel <==
*
09:20:03 up 9 days, 20:45, 0 users, load average: 0.35, 0.58, 0.33
Linux minikube 5.15.49-linuxkit #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"

*
* ==> kindnet [eb0aa1b8e04143faf8f8ca170f215f465f760b015c1e9f50ba70b8d25cba0d70] <==
*
I0615 09:19:24.181143       1 main.go:102] connected to apiserver: https://10.96.0.1:443
I0615 09:19:24.181175       1 main.go:107] hostIP = 192.168.49.2 podIP = 192.168.49.2
I0615 09:19:24.181334       1 main.go:116] setting mtu 65535 for CNI
I0615 09:19:24.181344       1 main.go:146] kindnetd IP family: "ipv4"
I0615 09:19:24.181353       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I0615 09:19:28.590075       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
I0615 09:19:32.603507       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
I0615 09:19:37.611497       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
I0615 09:19:43.624675       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
I0615 09:19:50.645707       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused

goroutine 1 [running]:
main.main()
	/go/src/cmd/kindnetd/main.go:195 +0xab4

*
* ==> kube-apiserver [0fb8d0f481438e445730ed257e3081cc8bd6e86fab302f43981d506e79ee0740] <==
*
W0615 09:17:45.837695       1 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0615 09:17:46.191096 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0615 09:17:46.191145 1 secure_serving.go:210] Serving securely on [::]:8443
I0615 09:17:46.191153 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0615 09:17:46.191176 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0615 09:17:46.191181 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0615 09:17:46.191197 1 apf_controller.go:361] Starting API Priority and Fairness config controller
I0615 09:17:46.191391 1 customresource_discovery_controller.go:288] Starting DiscoveryController
I0615 09:17:46.191745 1 controller.go:85] Starting OpenAPI controller
I0615 09:17:46.191772 1 controller.go:85] Starting OpenAPI V3 controller
I0615 09:17:46.191797 1 naming_controller.go:291] Starting NamingConditionController
I0615 09:17:46.191822 1 establishing_controller.go:76] Starting EstablishingController
I0615 09:17:46.191854 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0615 09:17:46.191884 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0615 09:17:46.191900 1 crd_finalizer.go:266] Starting CRDFinalizer
I0615 09:17:46.191911 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0615 09:17:46.191952 1 autoregister_controller.go:141] Starting autoregister controller
I0615 09:17:46.191969 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0615 09:17:46.192473 1 controller.go:83] Starting OpenAPI AggregationController
I0615 09:17:46.191466 1 available_controller.go:494] Starting AvailableConditionController
I0615 09:17:46.192569 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0615 09:17:46.198018 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0615 09:17:46.198069 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0615 09:17:46.191489 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0615 09:17:46.198247 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0615 09:17:46.191622 1 controller.go:121] Starting legacy_token_tracking_controller
I0615 09:17:46.198283 1 shared_informer.go:273] Waiting for caches to sync for configmaps
I0615 09:17:46.200336 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0615 09:17:46.200454 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0615 09:17:46.203564 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0615 09:17:46.203606 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0615 09:17:46.204925 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0615 09:17:46.204978 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0615 09:17:46.207434 1 controller.go:615] quota admission added evaluator for: namespaces
I0615 09:17:46.230435 1 shared_informer.go:280] Caches are synced for node_authorizer
I0615 09:17:46.258251 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0615 09:17:46.292475 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0615 09:17:46.292513 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0615 09:17:46.292476 1 cache.go:39] Caches are synced for autoregister controller
I0615 09:17:46.292641 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0615 09:17:46.298113 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0615 09:17:46.298285 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0615 09:17:46.298306 1 shared_informer.go:280] Caches are synced for configmaps
I0615 09:17:46.303643 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0615 09:17:47.071089 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0615 09:17:47.206212 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0615 09:17:47.208978 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0615 09:17:47.208991 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0615 09:17:47.399023 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0615 09:17:47.415685 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0615 09:17:47.515393 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0615 09:17:47.518480 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0615 09:17:47.518932 1 controller.go:615] quota admission added evaluator for: endpoints
I0615 09:17:47.520860 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0615 09:17:48.232719 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0615 09:17:48.885359 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0615 09:17:48.890859 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0615 09:17:48.895379 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0615 09:18:02.599286 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0615 09:18:02.687997 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
*
* ==> kube-controller-manager [ba16aa1b0d8aecf4ac4504d7907b91c94fb36bfd91159ac3a7bc986e0f0467bc] <==
*
I0615 09:18:02.395971 1 controllermanager.go:622] Started "nodeipam"
I0615 09:18:02.396136 1 node_ipam_controller.go:155] Starting ipam controller
I0615 09:18:02.396147 1 shared_informer.go:273] Waiting for caches to sync for node
I0615 09:18:02.397715 1 shared_informer.go:273] Waiting for caches to sync for resource quota
W0615 09:18:02.405759 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0615 09:18:02.410720 1 shared_informer.go:273] Waiting for caches to sync for garbage collector
I0615 09:18:02.430660 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
I0615 09:18:02.430898 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0615 09:18:02.430919 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-legacy-unknown
I0615 09:18:02.431343 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
I0615 09:18:02.444255 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0615 09:18:02.445520 1 shared_informer.go:280] Caches are synced for namespace
I0615 09:18:02.459646 1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
I0615 09:18:02.463737 1 shared_informer.go:280] Caches are synced for PV protection
I0615 09:18:02.466899 1 shared_informer.go:280] Caches are synced for crt configmap
I0615 09:18:02.474342 1 shared_informer.go:280] Caches are synced for service account
I0615 09:18:02.477627 1 shared_informer.go:280] Caches are synced for certificate-csrapproving
I0615 09:18:02.481704 1 shared_informer.go:280] Caches are synced for TTL
I0615 09:18:02.481746 1 shared_informer.go:280] Caches are synced for bootstrap_signer
I0615 09:18:02.482082 1 shared_informer.go:280] Caches are synced for expand
I0615 09:18:02.496266 1 shared_informer.go:280] Caches are synced for node
I0615 09:18:02.496346 1 range_allocator.go:167] Sending events to api server.
I0615 09:18:02.496375 1 range_allocator.go:171] Starting range CIDR allocator
I0615 09:18:02.496384 1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
I0615 09:18:02.496393 1 shared_informer.go:280] Caches are synced for cidrallocator
I0615 09:18:02.500837 1 range_allocator.go:372] Set node minikube PodCIDR to [10.244.0.0/24]
I0615 09:18:02.550077 1 shared_informer.go:280] Caches are synced for PVC protection
I0615 09:18:02.555623 1 shared_informer.go:280] Caches are synced for taint
I0615 09:18:02.555682 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0615 09:18:02.555711 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
I0615 09:18:02.555731 1 taint_manager.go:211] "Sending events to api server"
W0615 09:18:02.555741 1 node_lifecycle_controller.go:1053] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0615 09:18:02.555763 1 node_lifecycle_controller.go:1204] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0615 09:18:02.555789 1 event.go:294] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0615 09:18:02.560907 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-minikube" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0615 09:18:02.570355 1 shared_informer.go:280] Caches are synced for GC
I0615 09:18:02.581916 1 shared_informer.go:280] Caches are synced for cronjob
I0615 09:18:02.581987 1 shared_informer.go:280] Caches are synced for disruption
I0615 09:18:02.583471 1 shared_informer.go:280] Caches are synced for TTL after finished
I0615 09:18:02.584885 1 shared_informer.go:280] Caches are synced for ephemeral
I0615 09:18:02.590986 1 shared_informer.go:280] Caches are synced for deployment
I0615 09:18:02.593596 1 shared_informer.go:280] Caches are synced for persistent volume
I0615 09:18:02.602130 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 1"
I0615 09:18:02.631688 1 shared_informer.go:280] Caches are synced for endpoint
I0615 09:18:02.631721 1 shared_informer.go:280] Caches are synced for ReplicationController
I0615 09:18:02.631698 1 shared_informer.go:280] Caches are synced for HPA
I0615 09:18:02.633047 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0615 09:18:02.633259 1 shared_informer.go:280] Caches are synced for attach detach
I0615 09:18:02.637174 1 shared_informer.go:280] Caches are synced for job
I0615 09:18:02.643272 1 shared_informer.go:280] Caches are synced for resource quota
I0615 09:18:02.647125 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0615 09:18:02.654454 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-dxlnb"
I0615 09:18:02.681428 1 shared_informer.go:280] Caches are synced for daemon sets
I0615 09:18:02.688612 1 shared_informer.go:280] Caches are synced for stateful set
I0615 09:18:02.692871 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5pg9r"
I0615 09:18:02.695442 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g56ps"
I0615 09:18:02.698263 1 shared_informer.go:280] Caches are synced for resource quota
I0615 09:18:03.011812 1 shared_informer.go:280] Caches are synced for garbage collector
I0615 09:18:03.039189 1 shared_informer.go:280] Caches are synced for garbage collector
I0615 09:18:03.039217 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-scheduler [bd35f08db87d3ec6bde49b2d0025603c2595c08df8a3355cbb59d91765d0363b] <==
*
I0615 09:17:45.153171 1 serving.go:348] Generated self-signed cert in-memory
W0615 09:17:46.207107 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0615 09:17:46.207172 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0615 09:17:46.207190 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0615 09:17:46.207204 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0615 09:17:46.212787 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
I0615 09:17:46.212848 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0615 09:17:46.213553 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0615 09:17:46.213569 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0615 09:17:46.213610 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0615 09:17:46.213657 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0615 09:17:46.214229 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0615 09:17:46.214266 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0615 09:17:46.214954 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0615 09:17:46.215206 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0615 09:17:46.215236 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0615 09:17:46.215256 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0615 09:17:46.215253 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0615 09:17:46.215273 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0615 09:17:46.215177 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0615 09:17:46.215350 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0615 09:17:46.215358 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0615 09:17:46.215362 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0615 09:17:46.215750 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0615 09:17:46.215858 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0615 09:17:46.215868 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0615 09:17:46.215871 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0615 09:17:46.215895 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0615 09:17:46.215901 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0615 09:17:46.215922 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0615 09:17:46.215926 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0615 09:17:46.215943 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0615 09:17:46.215960 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0615 09:17:46.215951 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0615 09:17:46.215970 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0615 09:17:46.215974 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0615 09:17:46.215979 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0615 09:17:46.215999 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0615 09:17:46.216025 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0615 09:17:46.216008 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0615 09:17:46.216036 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0615 09:17:47.148354 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0615 09:17:47.148443 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0615 09:17:47.240380 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0615 09:17:47.240449 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0615 09:17:47.271177 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0615 09:17:47.271204 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0615 09:17:50.415248 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Thu 2023-06-15 09:17:37 UTC, end at Thu 2023-06-15 09:20:03 UTC. --
Jun 15 09:18:03 minikube kubelet[1172]: E0615 09:18:03.109149 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:18:03Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:18:03 minikube kubelet[1172]: E0615 09:18:03.109193 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:18:03Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2
Jun 15 09:18:04 minikube kubelet[1172]: E0615 09:18:04.125415 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=<
Jun 15 09:18:04 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:18:04Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:18:04 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134"
Jun 15 09:18:04 minikube kubelet[1172]: E0615 09:18:04.125512 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:18:04Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:18:04 minikube kubelet[1172]: E0615 09:18:04.125538 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:18:04Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2
Jun 15 09:18:10 minikube kubelet[1172]: I0615 09:18:10.064519 1172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-g56ps" podStartSLOduration=-9.223372028790321e+09 pod.CreationTimestamp="2023-06-15 09:18:02 +0000 UTC" firstStartedPulling="2023-06-15 09:18:03.033508586 +0000 UTC m=+14.156702090" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-15 09:18:10.064071798 +0000 UTC m=+21.187265301" watchObservedRunningTime="2023-06-15 09:18:10.064455006 +0000 UTC m=+21.187648551"
Jun 15 09:18:19 minikube kubelet[1172]: E0615 09:18:19.100028 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=<
Jun 15 09:18:19 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:18:19Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:18:19 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134"
Jun 15 09:18:19 minikube kubelet[1172]: E0615 09:18:19.100123 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:18:19Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:18:19 minikube kubelet[1172]: E0615 09:18:19.100148 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:18:19Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2
Jun 15 09:18:34 minikube kubelet[1172]: E0615 09:18:34.084494 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=<
Jun 15 09:18:34 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:18:34Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:18:34 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134"
Jun 15 09:18:34 minikube kubelet[1172]: E0615 09:18:34.084600 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabili
ties:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:18:34Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:18:34 minikube kubelet[1172]: E0615 09:18:34.084629 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:18:34Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2 Jun 15 09:18:40 minikube kubelet[1172]: I0615 09:18:40.233721 1172 scope.go:115] "RemoveContainer" containerID="b90ed7a3caf6dc782569dc62315b0d077fa6a108af58b43b0eb9bdb3e8ad1d5b" Jun 15 09:18:48 minikube kubelet[1172]: E0615 09:18:48.091783 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=< Jun 15 09:18:48 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:18:48Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:18:48 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134" Jun 15 09:18:48 minikube kubelet[1172]: E0615 09:18:48.091877 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy 
--config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:18:48Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:18:48 minikube kubelet[1172]: E0615 09:18:48.091948 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:18:48Z\\\" 
level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2 Jun 15 09:19:00 minikube kubelet[1172]: E0615 09:19:00.082204 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=< Jun 15 09:19:00 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:19:00Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:00 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134" Jun 15 09:19:00 minikube kubelet[1172]: E0615 09:19:00.082313 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPrese
nt,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:19:00Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:00 minikube kubelet[1172]: E0615 09:19:00.082339 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:19:00Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2 Jun 15 09:19:11 minikube kubelet[1172]: I0615 09:19:11.402355 1172 scope.go:115] "RemoveContainer" containerID="b90ed7a3caf6dc782569dc62315b0d077fa6a108af58b43b0eb9bdb3e8ad1d5b" Jun 15 09:19:11 minikube kubelet[1172]: I0615 09:19:11.402713 1172 scope.go:115] "RemoveContainer" containerID="1ff014fb8f71dbb86fad35919b82b93d6efabc57b1cf6ad7dd6b0a0dea8c586e" Jun 15 09:19:11 minikube kubelet[1172]: E0615 09:19:11.403038 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-g56ps_kube-system(44bfe21e-1f2d-42bd-badf-0a0bd6a2cb11)\"" pod="kube-system/kindnet-g56ps" podUID=44bfe21e-1f2d-42bd-badf-0a0bd6a2cb11 Jun 15 09:19:16 minikube kubelet[1172]: E0615 09:19:16.062949 1172 remote_runtime.go:302] "CreateContainer in sandbox 
from runtime service failed" err=< Jun 15 09:19:16 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:19:16Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:16 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134" Jun 15 09:19:16 minikube kubelet[1172]: E0615 09:19:16.063046 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMe
ssagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:19:16Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:16 minikube kubelet[1172]: E0615 09:19:16.063074 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:19:16Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2 Jun 15 09:19:23 minikube kubelet[1172]: I0615 09:19:23.966224 1172 scope.go:115] "RemoveContainer" containerID="1ff014fb8f71dbb86fad35919b82b93d6efabc57b1cf6ad7dd6b0a0dea8c586e" Jun 15 09:19:30 minikube kubelet[1172]: E0615 09:19:30.088682 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=< Jun 15 09:19:30 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:19:30Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:30 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134" Jun 15 09:19:30 minikube kubelet[1172]: E0615 09:19:30.088801 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf 
--hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:19:30Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:30 minikube kubelet[1172]: E0615 09:19:30.088827 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:19:30Z\\\" level=error msg=\\\"container_linux.go:380: 
starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2 Jun 15 09:19:41 minikube kubelet[1172]: E0615 09:19:41.079942 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=< Jun 15 09:19:41 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:19:41Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:41 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134" Jun 15 09:19:41 minikube kubelet[1172]: E0615 09:19:41.080038 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabili
ties:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:19:41Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted" Jun 15 09:19:41 minikube kubelet[1172]: E0615 09:19:41.080067 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:19:41Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2 Jun 15 09:19:49 minikube kubelet[1172]: E0615 09:19:49.018829 1172 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Jun 15 09:19:49 minikube kubelet[1172]: E0615 09:19:49.049001 1172 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" 
Jun 15 09:19:52 minikube kubelet[1172]: E0615 09:19:52.080253 1172 remote_runtime.go:302] "CreateContainer in sandbox from runtime service failed" err=<
Jun 15 09:19:52 minikube kubelet[1172]: rpc error: code = Unknown desc = container create failed: time="2023-06-15T09:19:52Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:19:52 minikube kubelet[1172]: > podSandboxID="191adc5c8084bd37fdea2c7753db676746d51aa231ea55f66d9ad0445d33b134"
Jun 15 09:19:52 minikube kubelet[1172]: E0615 09:19:52.080366 1172 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wcwxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-5pg9r_kube-system(10aa4649-83dd-4b5e-b390-902d937c33f2): CreateContainerError: container create failed: time="2023-06-15T09:19:52Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Jun 15 09:19:52 minikube kubelet[1172]: E0615 09:19:52.080395 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2023-06-15T09:19:52Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-5pg9r" podUID=10aa4649-83dd-4b5e-b390-902d937c33f2
Jun 15 09:19:54 minikube kubelet[1172]: E0615 09:19:54.053619 1172 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
Jun 15 09:19:55 minikube kubelet[1172]: I0615 09:19:55.664527 1172 scope.go:115] "RemoveContainer" containerID="1ff014fb8f71dbb86fad35919b82b93d6efabc57b1cf6ad7dd6b0a0dea8c586e"
Jun 15 09:19:55 minikube kubelet[1172]: I0615 09:19:55.665504 1172 scope.go:115] "RemoveContainer" containerID="eb0aa1b8e04143faf8f8ca170f215f465f760b015c1e9f50ba70b8d25cba0d70"
Jun 15 09:19:55 minikube kubelet[1172]: E0615 09:19:55.666612 1172 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-g56ps_kube-system(44bfe21e-1f2d-42bd-badf-0a0bd6a2cb11)\"" pod="kube-system/kindnet-g56ps" podUID=44bfe21e-1f2d-42bd-badf-0a0bd6a2cb11
Jun 15 09:19:59 minikube kubelet[1172]: E0615 09:19:59.056329 1172 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"