
cert-manager pods fail to start due to violating PodSecurity on a "hardened" cluster #9349

Closed
iAlex97 opened this issue Sep 29, 2022 · 1 comment · Fixed by #9404
Labels: kind/bug


iAlex97 commented Sep 29, 2022

Environment:

  • Cloud provider or hardware configuration:
    Hetzner cloud, deployed with "Kubernetes Cloud Controller Manager"
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=cpx31
                    beta.kubernetes.io/os=linux
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Linux 5.15.0-46-generic x86_64
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
  • Version of Ansible (ansible --version):
ansible [core 2.12.5]
  config file = /root/kubespray/ansible.cfg
  configured module search path = ['/root/kubespray/library']
  ansible python module location = /root/kubespray-venv/lib/python3.10/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /root/kubespray-venv/bin/ansible
  python version = 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0]
  jinja version = 2.11.3
  libyaml = True
  • Version of Python (python --version):
Python 3.10.6

Kubespray version (commit) (git rev-parse --short HEAD):

18efdc2c

Network plugin used:

Calico with eBPF backend

Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):
gist

Command used to invoke ansible:

ansible-playbook -i inventory/sdf-ws-220/hosts.yaml -e "@inventory/sdf-ws-220/hardening.yaml" cluster.yml

Output of ansible run:
Lost it, but everything seemed to be deployed normally. I can redeploy the cluster to save the output if needed.

Anything else we need to know:
The issue seems to be related to the securityContext of the cert-manager pods:

Error creating: pods "cert-manager-8d45cdf46-scd7h" is forbidden: violates PodSecurity "restricted:latest": unrestricted capabilities (container "cert-manager" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "cert-manager" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
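
For anyone chasing the same symptom: the error above is emitted as a FailedCreate event on the Deployment's ReplicaSet rather than in the pod logs, so one way to surface it is something like the following (the cert-manager namespace is assumed from the install used here):

kubectl -n cert-manager get events --field-selector reason=FailedCreate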

Updating the deployments' security context as follows allows the pods to start normally:

          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
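
A minimal sketch of applying that change in place, assuming the stock deployment and container names shown in the pod list below (repeat for the cainjector and webhook deployments); this is a workaround sketch, not the kubespray fix itself:

kubectl -n cert-manager patch deployment cert-manager --type strategic -p '
spec:
  template:
    spec:
      containers:
      - name: cert-manager   # strategic merge patch merges list entries by container name
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
'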

Result:

kubectl -n cert-manager get po
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7d457bb758-srw2k              1/1     Running   0          86m
cert-manager-cainjector-79bfdbf497-2l6f9   1/1     Running   0          85m
cert-manager-webhook-5fb958587d-x9xc2      1/1     Running   0          84m
iAlex97 added the kind/bug label on Sep 29, 2022
oomichi (Contributor) commented Oct 5, 2022

This issue seems to come from the combination of

cert_manager_enabled: true
kubelet_seccomp_default: true

Let me try reproducing the issue with the above PR.
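
For context, a minimal hardening.yaml hitting this combination might look like the sketch below. The two PodSecurity variables are taken from kubespray's hardening guide and are an assumption about this cluster's exact settings:

# hardening.yaml (sketch, not this cluster's actual file)
cert_manager_enabled: true
kubelet_seccomp_default: true
kube_pod_security_use_default: true             # assumed: enables PodSecurity admission defaults
kube_pod_security_default_enforce: restricted   # assumed: matches the "restricted:latest" rejection above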
