Kubearmor restarts all pods in a Kubernetes cluster after installation. #1935

Open · thungrac opened this issue Jan 7, 2025 · 3 comments · May be fixed by #1952
Assignees: Aryan-sharma11
Labels: bug (Something isn't working)

thungrac commented Jan 7, 2025

Bug Report


To Reproduce

  1. Install KubeArmor using the default Helm chart (a minimal sketch of the install commands follows this list).
  2. All pods without the container.apparmor.security.beta.kubernetes.io annotation are restarted.
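
For reference, a default install along these lines should reproduce the setup. The chart repo URL, release name, and namespace are taken from the KubeArmor Helm docs rather than from this report, so treat them as assumptions:

```
# Assumed default installation per the KubeArmor Helm docs; repo URL,
# release name, and namespace are assumptions, not taken from this report.
helm repo add kubearmor https://kubearmor.github.io/charts
helm repo update
helm install kubearmor kubearmor/kubearmor -n kubearmor --create-namespace
```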

Expected behavior

All pods continue running as usual.

Additional info

The kubearmor-controller log is:

2025-01-07T10:22:16Z INFO setup Starting node watcher
2025-01-07T10:22:16Z INFO setup Starting pod watcher
2025-01-07T10:22:16Z INFO setup Adding mutation webhook
2025-01-07T10:22:16Z INFO informer.NodeWatcher Starting node watcher
2025-01-07T10:22:16Z INFO informer.PodWatcher Starting pod watcher
2025-01-07T10:22:16Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-pods"}
2025-01-07T10:22:16Z INFO setup Adding pod refresher controller
2025-01-07T10:22:16Z INFO setup starting manager
2025-01-07T10:22:16Z INFO controller-runtime.metrics Starting metrics server
2025-01-07T10:22:16Z INFO setup disabling http/2
2025-01-07T10:22:16Z INFO starting server {"name": "health probe", "addr": "[::]:8081"}
W0107 10:22:16.460350 1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.233.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
W0107 10:22:16.460452 1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: failed to list *v1.Pod: Get "https://10.233.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
2025-01-07T10:22:16Z INFO controller-runtime.webhook Starting webhook server
2025-01-07T10:22:16Z INFO setup disabling http/2
E0107 10:22:16.460474 1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.233.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.233.0.1:443: connect: connection refused" logger="UnhandledError"
E0107 10:22:16.460511 1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://10.233.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.233.0.1:443: connect: connection refused" logger="UnhandledError"
I0107 10:22:16.460545 1 leaderelection.go:254] attempting to acquire leader lease is-chart/191ee55f.kubearmor.com...
2025-01-07T10:22:16Z INFO controller-runtime.certwatcher Updated current TLS certificate
2025-01-07T10:22:16Z INFO controller-runtime.certwatcher Starting certificate watcher
2025-01-07T10:22:16Z INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443}
E0107 10:22:16.461057 1 leaderelection.go:436] error retrieving resource lock is-chart/191ee55f.kubearmor.com: Get "https://10.233.0.1:443/apis/coordination.k8s.io/v1/namespaces/is-chart/leases/191ee55f.kubearmor.com?timeout=5s": dial tcp 10.233.0.1:443: connect: connection refused
2025-01-07T10:22:16Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:8080", "secure": true}
W0107 10:22:17.534887 1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.233.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
E0107 10:22:17.535046 1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.233.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.233.0.1:443: connect: connection refused" logger="UnhandledError"
W0107 10:22:17.578138 1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: failed to list *v1.Pod: Get "https://10.233.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
E0107 10:22:17.578213 1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.0/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://10.233.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.233.0.1:443: connect: connection refused" logger="UnhandledError"
2025-01-07T10:22:19Z INFO informer.NodeWatcher New node was added, name=k8-node-1 enforcer=apparmor
2025-01-07T10:22:19Z INFO informer.NodeWatcher Cluster in a homogeneus state with apparmor enforcer
2025-01-07T10:22:19Z INFO informer.NodeWatcher New node was added, name=k8-node-2 enforcer=apparmor
2025-01-07T10:22:19Z INFO informer.NodeWatcher Cluster in a homogeneus state with apparmor enforcer
2025-01-07T10:22:19Z INFO informer.NodeWatcher New node was added, name=k8-node-3 enforcer=apparmor
2025-01-07T10:22:19Z INFO informer.NodeWatcher Cluster in a homogeneus state with apparmor enforcer
2025-01-07T10:22:19Z INFO informer.NodeWatcher New node was added, name=k8-node-4 enforcer=apparmor
2025-01-07T10:22:19Z INFO informer.NodeWatcher Cluster in a homogeneus state with apparmor enforcer
2025-01-07T10:22:31Z INFO informer.NodeWatcher Cluster in a homogeneus state with apparmor enforcer
I0107 10:22:36.754767 1 leaderelection.go:268] successfully acquired lease kubearmor-chart/191ee55f.kubearmor.com
2025-01-07T10:22:36Z INFO Starting EventSource {"controller": "kubearmorpolicy", "controllerGroup": "security.kubearmor.com", "controllerKind": "KubeArmorPolicy", "source": "kind source: *v1.KubeArmorPolicy"}
2025-01-07T10:22:36Z INFO Starting Controller {"controller": "kubearmorpolicy", "controllerGroup": "security.kubearmor.com", "controllerKind": "KubeArmorPolicy"}
2025-01-07T10:22:36Z DEBUG events kubearmor-controller-78b5859c9f-szljl_13b0bab1-7323-435d-a982-2b8394e1618f became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"kubearmor-chart","name":"191ee55f.kubearmor.com","uid":"3553c87f-c707-46c5-a4a1-01a5d716720a","apiVersion":"coordination.k8s.io/v1","resourceVersion":"435331051"}, "reason": "LeaderElection"}
2025-01-07T10:22:36Z INFO Starting EventSource {"controller": "pod", "controllerGroup": "", "controllerKind": "Pod", "source": "kind source: *v1.Pod"}
2025-01-07T10:22:36Z INFO Starting EventSource {"controller": "kubearmorhostpolicy", "controllerGroup": "security.kubearmor.com", "controllerKind": "KubeArmorHostPolicy", "source": "kind source: *v1.KubeArmorHostPolicy"}
2025-01-07T10:22:36Z INFO Starting Controller {"controller": "kubearmorhostpolicy", "controllerGroup": "security.kubearmor.com", "controllerKind": "KubeArmorHostPolicy"}
2025-01-07T10:22:36Z INFO Starting Controller {"controller": "pod", "controllerGroup": "", "controllerKind": "Pod"}
2025-01-07T10:22:36Z INFO Starting workers {"controller": "kubearmorhostpolicy", "controllerGroup": "security.kubearmor.com", "controllerKind": "KubeArmorHostPolicy", "worker count": 1}
2025-01-07T10:22:36Z INFO Starting workers {"controller": "kubearmorpolicy", "controllerGroup": "security.kubearmor.com", "controllerKind": "KubeArmorPolicy", "worker count": 1}
2025-01-07T10:22:36Z INFO Starting workers {"controller": "pod", "controllerGroup": "", "controllerKind": "Pod", "worker count": 1}
thungrac added the bug label on Jan 7, 2025

thungrac commented Jan 8, 2025

I found that the issue is as follows:

  1. KubeArmor attempts to add the annotation container.apparmor.security.beta.kubernetes.io to running pods.
  2. Kubernetes does not permit adding this annotation to a running pod, resulting in the error message:
    metadata.annotations[container.apparmor.security.beta.kubernetes.io/nginx]: Forbidden: may not add AppArmor annotations.
  3. KubeArmor therefore deletes the pod and recreates it with the annotation already applied (the rejection in step 2 is reproducible by hand, as sketched below).
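
For illustration, the Forbidden error can be reproduced against any running pod. The pod name nginx and the runtime/default profile here are placeholders:

```
# Placeholder example: attempt to add an AppArmor annotation to a pod that
# is already running. The API server rejects the update with the Forbidden
# error quoted above, since AppArmor annotations cannot be added after
# the pod has been created.
kubectl annotate pod nginx \
  container.apparmor.security.beta.kubernetes.io/nginx=runtime/default
```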

Aryan-sharma11 (Member) commented:

(Quoting thungrac's diagnosis above.)

@thungrac Correct, AppArmor annotations are security annotations, and Kubernetes treats them as immutable. Once a pod is created, these annotations cannot be modified or patched. To apply or modify an AppArmor profile, the pod must be restarted or recreated with the required annotation. This is why restarting the pod is necessary for the AppArmor enforcer to function properly in KubeArmor.
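
Put differently, the annotation only takes effect when it is present at creation time. A minimal sketch, with pod name, image, and profile as placeholders:

```
# Minimal sketch: the AppArmor annotation must be set when the pod is
# created; it cannot be patched onto a running pod. Names are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    # the key suffix must match the container name
    container.apparmor.security.beta.kubernetes.io/nginx: runtime/default
spec:
  containers:
  - name: nginx
    image: nginx
EOF
```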


thungrac commented Jan 9, 2025

(Quoting the exchange above.)

The side effects are as follows:

  1. All pods in the Kubernetes cluster are recreated at once, causing a load spike that may destabilize the entire cluster.

  2. Because pods are deleted and recreated, the associated services experience downtime and cannot function properly.

Suggested improvements:

  1. Implement a rollout-restart sequence for workloads (e.g., Deployments), with a delay between restarts to reduce the impact.

  2. Allow manual rollout restarts for DaemonSets and StatefulSets (the manual equivalent is sketched below).
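
For reference, the manual equivalent of suggestion 2 would look something like this; workload names and namespace are placeholders:

```
# Placeholder example of a staged, manual restart: kubectl rollout restart
# triggers a rolling update that respects each workload's update strategy,
# so replacement pods pick up the annotation without mass pod deletion.
kubectl rollout restart deployment/my-app -n my-namespace
kubectl rollout status deployment/my-app -n my-namespace   # wait before the next one
kubectl rollout restart daemonset/my-agent -n my-namespace
kubectl rollout restart statefulset/my-db -n my-namespace
```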

Aryan-sharma11 self-assigned this on Jan 22, 2025