
CreateContainerError for kube-proxy when using runtime cri-o #16371

Closed
joshiste opened this issue Apr 24, 2023 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@joshiste commented Apr 24, 2023

What Happened?

When running minikube start --container-runtime cri-o, the kube-proxy pod doesn't start.
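To reproduce and inspect the failing pod (standard minikube and kubectl commands; the pod name suffix differs per run):

    # start minikube with the cri-o container runtime
    minikube start --container-runtime cri-o

    # the kube-proxy DaemonSet pod stays in CreateContainerError
    kubectl -n kube-system get pods -l k8s-app=kube-proxy
    kubectl -n kube-system describe pod kube-proxy-hpn6m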
Errors:

Name:                 kube-proxy-hpn6m
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 crio/192.168.49.2
Start Time:           Mon, 24 Apr 2023 15:02:16 +0200
Labels:               controller-revision-hash=5cbfdcddd5
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  
    Image:         registry.k8s.io/kube-proxy:v1.26.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Waiting
      Reason:       CreateContainerError
    Last State:     Terminated
      Reason:       ContainerStatusUnknown
      Message:      The container could not be located when the pod was deleted.  The container used to be Running
      Exit Code:    137
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 01 Jan 0001 00:00:00 +0000
    Ready:          False
    Restart Count:  1
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7ltxq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-api-access-7ltxq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  23m                  default-scheduler  Successfully assigned kube-system/kube-proxy-hpn6m to crio
  Normal   Pulled     23m                  kubelet            Container image "registry.k8s.io/kube-proxy:v1.26.3" already present on machine
  Normal   Created    23m                  kubelet            Created container kube-proxy
  Normal   Started    23m                  kubelet            Started container kube-proxy
  Warning  Failed     2m15s                kubelet            Error: container create failed: time="2023-04-24T13:23:16Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m14s                kubelet            Error: container create failed: time="2023-04-24T13:23:17Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m3s                 kubelet            Error: container create failed: time="2023-04-24T13:23:28Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     109s                 kubelet            Error: container create failed: time="2023-04-24T13:23:42Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     96s                  kubelet            Error: container create failed: time="2023-04-24T13:23:55Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     83s                  kubelet            Error: container create failed: time="2023-04-24T13:24:08Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     67s                  kubelet            Error: container create failed: time="2023-04-24T13:24:24Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     54s                  kubelet            Error: container create failed: time="2023-04-24T13:24:37Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     43s                  kubelet            Error: container create failed: time="2023-04-24T13:24:48Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Normal   Pulled     1s (x12 over 2m15s)  kubelet            Container image "registry.k8s.io/kube-proxy:v1.26.3" already present on machine
  Warning  Failed     1s (x3 over 30s)     kubelet            (combined from similar events): Error: container create failed: time="2023-04-24T13:25:30Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
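For deeper debugging, the runtime's side of the failure can be inspected from inside the minikube node (a sketch; crictl is available in the node image, and the cri-o service unit is assumed to be named crio):

    # open a shell on the minikube node
    minikube ssh

    # list all containers, including the failed create attempts
    sudo crictl ps -a

    # the cri-o service log carries the full runc "apply caps" error
    sudo journalctl -u crio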

Attach the log file

log.txt

Operating System

macOS (Default)

Driver

Docker

@mqasimsarfraz (Contributor) commented May 22, 2023

Can you please share the Docker version you are using? I was hitting the same issue, as mentioned here. Upgrading to Docker >v23.0.0 solved the issue for me.

docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)

Found it. Upgrading the Docker version should solve this issue!
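For reference, the engine version behind the docker driver can be checked with the standard Docker CLI:

    # prints the Docker Engine (server) version; anything below 23.0.0
    # matches the failing setup above
    docker version --format '{{.Server.Version}}'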

@joshiste (Author) commented Jun 15, 2023

Unfortunately, updating didn't help.
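A clean retest after the upgrade would look like this (a sketch, assuming the cached cluster has to be recreated to pick up the new engine):

    # recreate the cluster from scratch on the upgraded Docker engine
    minikube delete
    minikube start --container-runtime cri-o

    # the pod still fails with CreateContainerError
    kubectl -n kube-system describe pods -l k8s-app=kube-proxy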

Name:                 kube-proxy-5pg9r
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 minikube/192.168.49.2
Start Time:           Thu, 15 Jun 2023 11:18:02 +0200
Labels:               controller-revision-hash=5cbfdcddd5
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  
    Image:         registry.k8s.io/kube-proxy:v1.26.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wcwxq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-api-access-wcwxq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m43s                 default-scheduler  Successfully assigned kube-system/kube-proxy-5pg9r to minikube
  Warning  Failed     2m42s                 kubelet            Error: container create failed: time="2023-06-15T09:18:03Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m41s                 kubelet            Error: container create failed: time="2023-06-15T09:18:04Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m26s                 kubelet            Error: container create failed: time="2023-06-15T09:18:19Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m11s                 kubelet            Error: container create failed: time="2023-06-15T09:18:34Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     117s                  kubelet            Error: container create failed: time="2023-06-15T09:18:48Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     105s                  kubelet            Error: container create failed: time="2023-06-15T09:19:00Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     89s                   kubelet            Error: container create failed: time="2023-06-15T09:19:16Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     75s                   kubelet            Error: container create failed: time="2023-06-15T09:19:30Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     64s                   kubelet            Error: container create failed: time="2023-06-15T09:19:41Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     24s (x3 over 53s)     kubelet            (combined from similar events): Error: container create failed: time="2023-06-15T09:20:21Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Normal   Pulled     13s (x13 over 2m42s)  kubelet            Container image "registry.k8s.io/kube-proxy:v1.26.3" already present on machine

Attach the log file

log.txt

@mqasimsarfraz (Contributor)
It seems we need more work to get this fixed for macOS compared to Linux. I have opened an issue in docker-for-mac related to this. Also, this seems to be a duplicate of #13742, so perhaps we should close this one and track it via #13742?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 23, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this: [the k8s-triage-robot comment above]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Mar 23, 2024