fsGroup securityContext does not apply to nfs mount #260
Comments
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten |
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Why did this get closed with no resolution? I have this same issue. If there is a better solution than an init container please someone fill me in. |
Yeah... I'm having the same issue with NFS too. |
I'm having the same problem. |
Same issue: able to write but not able to read from the NFS-mounted volume. Kubernetes reports the mount as successful, but no luck. |
/reopen |
@varun-da: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/reopen |
@kmarokas: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
thanks @kmarokas! |
/remove-lifecycle rotten |
Would love for this to be addressed! In the meantime, here's how we're dealing with it. In this example there are two pods that mount an AWS EFS volume via NFS. To enable a non-root user, we make the mount point accessible via an initContainer.
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-1
  labels:
    name: alpine
spec:
  volumes:
    - name: nfs-test
      nfs:
        server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
        path: /
  securityContext:
    fsGroup: 100
    runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - sh
        - -c
        - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
    - name: alpine
      image: alpine
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - tail
        - -f
        - /dev/null
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-2
  labels:
    name: alpine
spec:
  volumes:
    - name: nfs-test
      nfs:
        server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
        path: /
  securityContext:
    supplementalGroups:
      - 100
    fsGroup: 100
    # runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - sh
        - -c
        - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
    - name: alpine
      image: alpine
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - tail
        - -f
        - /dev/null |
The same seems to be true for cifs mounts created through a custom volume driver: juliohm1978/kubernetes-cifs-volumedriver#8
Edit: Looks like there is very little magic that Kubernetes does when mounting the volumes; the individual volume drivers have to respect the fsGroup setting themselves. Is https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client the place where this could be fixed? |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules.
You can mark this issue as fresh with /remove-lifecycle rotten, close it with /close, or offer to help out with issue triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
/remove-lifecycle rotten |
I ran into this exact issue with a static PV using the default nfs mount. |
@yingding have you found any workaround? |
@radirobi97 If you can use the initContainers approach from #260 (comment), it will work. |
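For the static PV / PVC case mentioned above, the same init-container workaround can be adapted. A minimal sketch, assuming an existing NFS export; the server address, export path, claim name, and UID/GID values below are illustrative and not taken from the original reports:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com      # illustrative NFS server
    path: /exports/data          # illustrative export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the static PV above
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-nonroot
spec:
  securityContext:
    runAsUser: 405               # illustrative non-root user
    fsGroup: 100                 # note: has no effect on the NFS mount itself
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-pvc
  initContainers:
    - name: fix-perms
      image: alpine
      securityContext:
        runAsUser: 0             # root only to fix ownership of the export root
      command: ["sh", "-c", "chgrp 100 /data && chmod 0775 /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: alpine
      command: ["tail", "-f", "/dev/null"]
      volumeMounts:
        - name: data
          mountPath: /data
As in the EFS example above, the init container runs as root only to change group ownership and permissions on the export root; the application container then runs as the non-root user.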
Found kubernetes/examples#260 (comment), from which it seems it has been known at least since 2018 that fsGroup doesn't affect PVCs on NFS, and not on CIFS either.
This reverts commit 5bf9d4e. needs init container first due to kubernetes/examples/issues/260
`fsGroupChangePolicy: "OnRootMismatch"` does not work for NFS mounts (also see kubernetes/examples/issues/260)
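For context, fsGroupChangePolicy is a field in the pod-level securityContext; a minimal sketch of where it goes (names and values illustrative). Per the reports in this thread it makes no difference for NFS volumes, since the fsGroup ownership change is never applied to them in the first place:
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-policy-demo      # illustrative name
spec:
  securityContext:
    fsGroup: 100
    fsGroupChangePolicy: "OnRootMismatch"   # only honored by volume types that support fsGroup
  volumes:
    - name: data
      nfs:
        server: nfs.example.com  # illustrative
        path: /exports/data
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data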
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules.
You can mark this issue as fresh with /remove-lifecycle stale, close it with /close, or offer to help out with issue triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules.
You can mark this issue as fresh with /remove-lifecycle rotten, close it with /close, or offer to help out with issue triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to its lifecycle rules.
You can reopen this issue with /reopen, mark it as fresh with /remove-lifecycle rotten, or offer to help out with issue triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
/remove-lifecycle rotten |
/reopen |
@rmunn: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
The example https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs works fine if the container using the NFS mount runs as the root user. If I use a securityContext to run as a non-root user, then I have no write access to the mounted volume.
How to reproduce:
Here is the nfs-busybox-rc.yaml with a securityContext added:
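(The manifest itself was not captured in this copy of the issue. The following is only a sketch of what the modified nfs-busybox-rc.yaml from the referenced example plausibly looks like; the UID/GID value of 10000 is assumed from the expected result below.)
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      securityContext:           # added on top of the upstream example
        runAsUser: 10000         # assumed non-root UID
        fsGroup: 10000           # assumed group, per the expected result below
      containers:
        - name: busybox
          image: busybox
          command:
            - sh
            - -c
            - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 10; done'
          volumeMounts:
            - name: nfs
              mountPath: /mnt
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs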
Actual result:
the container runs as the non-root user, but the mounted volume keeps its original ownership and the user has no write access to /mnt
Expected result:
the group ownership of the /mnt folder should be 10000 (matching the pod's securityContext), so the non-root user can write to it
Note: mount options other than rw are not allowed in the nfs PV.
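One way to observe the actual vs. expected result (the pod name below is a placeholder for a real nfs-busybox pod):
# Effective UID and groups inside the container
kubectl exec nfs-busybox-<pod-id> -- id
# Ownership of the mount point: with a working fsGroup this would show group 10000,
# but on the NFS mount it keeps whatever ownership the server exports
kubectl exec nfs-busybox-<pod-id> -- ls -ld /mnt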