Fix seccomp error on helmchart deployment #6686
Conversation
Signed-off-by: Jan Lauber <jan.lauber@protonmail.ch>
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @janlauber!
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: janlauber. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
This is not a correct way to fix this. Most probably this is an issue with your Kubernetes cluster. You are using Rancher and K8s v1.22 (which is not yet officially supported by the Dashboard, FYI). I think that seccomp is simply enabled inside your cluster and that's why you are unable to deploy the application with custom
/close
@floreks: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hey @floreks, so could there be a Helm value to disable the securityContexts in the deployment manifest?
Helm already provides a way to provide a custom
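Presumably the suggestion here is the standard Helm override mechanism. As a hedged sketch (assuming the chart reads a top-level `securityContext` value, which the rendered manifest later in this thread suggests), a custom values file could look like:

```yaml
# custom-values.yaml (sketch; assumes the chart templates .Values.securityContext)
# Setting the value to null should cause the pod-level securityContext block,
# including the seccompProfile, to be omitted from the rendered Deployment.
securityContext: null
```

It could then be applied with something like `helm upgrade --install kubernetes-dashboard-local <chart> -f custom-values.yaml` (release and chart names here are illustrative, not taken from the thread).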
@floreks I tried:

securityContext: {}

or

securityContext:
  seccompProfile:
    type: null

Now I've implemented a fix. Thanks and greez
Simply remove the securityContext section from your custom values.
@floreks

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: kubernetes-dashboard-local
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-01-12T16:09:34Z"
  generation: 1
  labels:
    app.kubernetes.io/component: kubernetes-dashboard
    app.kubernetes.io/instance: kubernetes-dashboard-local
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.4.0
    helm.sh/chart: kubernetes-dashboard-5.1.0
  name: kubernetes-dashboard-local
  namespace: default
  resourceVersion: "1332552"
  uid: df7773bc-98a2-49ed-89ce-4dc2de9e4350
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: kubernetes-dashboard
      app.kubernetes.io/instance: kubernetes-dashboard-local
      app.kubernetes.io/name: kubernetes-dashboard
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: kubernetes-dashboard
        app.kubernetes.io/instance: kubernetes-dashboard-local
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kubernetes-dashboard
        app.kubernetes.io/version: 2.4.0
        helm.sh/chart: kubernetes-dashboard-5.1.0
    spec:
      containers:
      - args:
        - --namespace=default
        - --auto-generate-certificates
        - --metrics-provider=none
        image: kubernetesui/dashboard:v2.4.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: kubernetes-dashboard
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        resources:
          limits:
            cpu: "2"
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2001
          runAsUser: 1001
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /certs
          name: kubernetes-dashboard-certs
        - mountPath: /tmp
          name: tmp-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: # this line shouldn't be rendered when no securityContext value is set in the custom values.yaml
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: kubernetes-dashboard-local
      serviceAccountName: kubernetes-dashboard-local
      terminationGracePeriodSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          defaultMode: 420
          secretName: kubernetes-dashboard-local-certs
      - emptyDir: {}
        name: tmp-volume
status:
  conditions:
  - lastTransitionTime: "2022-01-12T16:09:34Z"
    lastUpdateTime: "2022-01-12T16:09:34Z"
    message: Created new replica set "kubernetes-dashboard-local-699c8b95fc"
    reason: NewReplicaSetCreated
    status: "True"
    type: Progressing
  - lastTransitionTime: "2022-01-12T16:09:34Z"
    lastUpdateTime: "2022-01-12T16:09:34Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-01-12T16:09:34Z"
    lastUpdateTime: "2022-01-12T16:09:34Z"
    message: 'pods "kubernetes-dashboard-local-699c8b95fc-" is forbidden: PodSecurityPolicy:
      unable to admit pod: [pod.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]:
      Forbidden: seccomp may not be set pod.metadata.annotations[container.seccomp.security.alpha.kubernetes.io/kubernetes-dashboard]:
      Forbidden: seccomp may not be set]'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 1
  unavailableReplicas: 1

greez
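The ReplicaFailure condition above is a PodSecurityPolicy admission rejection: the pod-level seccompProfile is surfaced through the legacy seccomp annotations, and the active PSP does not whitelist any seccomp profile names. As an alternative to removing the securityContext, a cluster admin could permit the profile in the PSP. A minimal sketch (the policy name and spec below are hypothetical, not the cluster's actual policy):

```yaml
# Hypothetical PodSecurityPolicy (policy/v1beta1, removed in Kubernetes v1.25)
# that admits pods requesting the runtime/default seccomp profile.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: dashboard-psp  # illustrative name
  annotations:
    # Seccomp profiles a pod may request; '*' would allow any profile.
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: runtime/default
    # Profile applied when a pod requests none.
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```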
Looks like it might be automatically set by the seccomp profile enabled inside your cluster. I don't remember all the settings we are using, but I am 90% sure now that it does not come from our config.
@floreks Setting

securityContext: null

works, so it is indeed your config that renders this variable. Also, the following line must contain the whole values path when rendering:
The changes are addressed in a new PR. greez
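The templating fix being discussed can be sketched as a guarded block in the chart's deployment.yaml template. This is a sketch, not the PR's actual diff; it assumes the chart exposes a top-level `securityContext` value, and uses `with` so the block is skipped entirely when the value is null or absent:

```yaml
# templates/deployment.yaml (sketch): render the pod-level securityContext
# only when .Values.securityContext is set to a non-null value.
      {{- with .Values.securityContext }}
      securityContext:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

With this guard, `securityContext: null` (or omitting the key) in a custom values file yields a Deployment without the seccompProfile block, which is what the PodSecurityPolicy error above requires.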
This PR addresses the following issue:
It removes the securityContext templating implementation in the deployment.yaml at a place where it doesn't belong. Please check out the issue and let me know if this PR works for you. I tested it locally; it works now and the replica set can be created.
Signed-off-by: Jan Lauber jan.lauber@protonmail.ch