nfs rwx folder has 000 as permission #124
Comments
Hi @mbu147, I followed the same steps as mentioned in the description and observed that the permissions look fine:
root@nfs-pvc-e155220f-63b7-4882-9104-98575910d9c9-69c97df57d-wrxq2:/ # ls -la
total 88
drwxr-xr-x 1 root root 4096 Nov 2 06:42 .
drwxr-xr-x 1 root root 4096 Nov 2 06:42 ..
drwxr-xr-x 3 root root 4096 Nov 2 06:42 nfsshare
...
Steps followed to provision the NFS volume:
helm install openebs openebs/openebs -n openebs --create-namespace --set legacy.enabled=false --set jiva.enabled=true --set ndm.enabled=false --set ndmOperator.enabled=false --set localProvisioner.enabled=true --set nfs-provisioner.enabled=true --set nfs-provisioner.nfsStorageClass.backendStorageClass=openebs-jiva-csi-default
StorageClass output (kubectl get sc):
NAME                       PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device             openebs.io/local      Delete          WaitForFirstConsumer   false                  140m
openebs-hostpath           openebs.io/local      Delete          WaitForFirstConsumer   false                  140m
openebs-jiva-csi-default   jiva.csi.openebs.io   Delete          Immediate              true                   140m
openebs-kernel-nfs         openebs.io/nfsrwx     Delete          Immediate              false                  140m
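For reference, the backend wiring can also be double-checked on the NFS StorageClass itself. A quick sketch, assuming the class carries its settings in the `cas.openebs.io/config` annotation the way the dynamic-nfs-provisioner chart renders it (adjust if yours is defined differently):

```sh
# Inspect the NFS StorageClass and confirm which backend StorageClass it points at.
kubectl get sc openebs-kernel-nfs -o yaml

# Expect an annotation along these lines (values taken from this thread):
#   cas.openebs.io/config: |
#     - name: NFSServerType
#       value: "kernel"
#     - name: BackendStorageClass
#       value: "openebs-jiva-csi-default"
```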
Did I miss anything? I'm not sure how you are getting 000 permissions. One more observation:
Can you help with the following outputs (maybe they will help us understand further):
Hi @mittachaitu, it looks like I'm doing the same, except not using the "global" Helm chart. I will switch to the same chart as you and have a look. I noticed that it apparently only occurs when a node is under high IO load, so that it needs to "reconnect" the mount points.
In a fresh new PVC the folder has the correct permissions. Thanks!
Hmm..., the system might be going into an RO state; if the Jiva volume is turning into RO, then
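For anyone debugging the same thing, here is a quick way to check whether the export inside the NFS server pod has gone read-only. This is only a sketch: the deployment name is the one quoted later in this thread and differs per PVC, and /nfsshare is the export path shown in the listing above.

```sh
# Check directory mode, mount flags, and writability of the export inside the NFS server pod.
# Substitute your own nfs-pvc-<uid> deployment name.
kubectl exec -n openebs deploy/nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -- \
  sh -c 'ls -ld /nfsshare; mount | grep nfsshare; touch /nfsshare/.rw-test && echo writable'
```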
Yeah, currently the nfs-provisioner only allows setting the fsGID, but there is an issue to support configuring the UID (which is being worked on), so that when the volume is provisioned the user doesn't need to fix ownership manually.
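For completeness, a minimal sketch of what setting the group on the share could look like via the StorageClass, assuming an FSGID entry in `cas.openebs.io/config` (the key name and gid are illustrative; check the provisioner docs for your version):

```sh
# Sketch: NFS StorageClass asking the provisioner to apply a filesystem group to the share.
# FSGID support and its exact key name depend on the nfs-provisioner version; gid 120 is only an example.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-kernel-nfs-gid
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-jiva-csi-default"
      - name: FSGID
        value: "120"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
EOF
```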
Okay, I understand. Has no one had this issue besides me? You asked for more information; I forgot to add this to my last post:
kubectl get deploy nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -n openebs -o yaml
Describe the bug:
I created many RWX NFS shares with the nfs-provisioner. As the backend StorageClass I use openebs-jiva.
Sometimes every NFS share mount gets permission 000. When that happens, the mount folder inside the nfs-pvc pod shows 000 on the directory.
Because of that, the nginx container where the nfs-pvc is mounted also sees 000 on the folder and cannot read the files within.
The files in the mount folder have the correct permissions.
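A quick way to confirm what the application container actually sees (a sketch; <nginx-pod> and /data are placeholders for your pod name and the NFS mountPath):

```sh
# Compare the directory mode with the file modes as seen from inside the application pod.
kubectl exec -n <application_namespace> <nginx-pod> -- \
  sh -c 'stat -c "%a %U:%G %n" /data; ls -la /data'
```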
Expected behaviour:
A default mount permission of 755, or something similar.
Steps to reproduce the bug:
Just create a new NFS RWX PVC and wait. After some time the running nginx container cannot read the folder anymore.
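For reference, a minimal RWX claim against the openebs-kernel-nfs class shown above is enough to reproduce; the name and size below are arbitrary:

```sh
# Minimal RWX PVC; the provisioner creates the backing Jiva volume plus an NFS server deployment for it.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-rwx-demo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: openebs-kernel-nfs
  resources:
    requests:
      storage: 5Gi
EOF
```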
The output of the following commands will help us better understand what's going on:
- kubectl get pods -n <openebs_namespace> --show-labels: https://pastebin.com/wxBrUzGy
- kubectl get pvc -n <openebs_namespace>: https://pastebin.com/H4MuJs2j
- kubectl get pvc -n <application_namespace>: https://pastebin.com/fnZinyN3
Anything else we need to know?:
Jiva and the NFS provisioner were installed via their Helm charts:
https://github.com/openebs/jiva-operator/tree/develop/deploy/helm/charts
https://github.com/openebs/dynamic-nfs-provisioner/tree/develop/deploy/helm/charts
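For reference, a sketch of installing the two components from their own chart repos instead of the umbrella chart; the repo URLs and chart names are my assumption based on the projects linked above, and the value flag mirrors the one used with the umbrella chart earlier in this thread:

```sh
# Assumed Helm repos / chart names for the per-component charts linked above.
helm repo add openebs-jiva https://openebs.github.io/jiva-operator
helm repo add openebs-nfs https://openebs.github.io/dynamic-nfs-provisioner
helm repo update

helm install openebs-jiva openebs-jiva/jiva -n openebs --create-namespace

helm install openebs-nfs openebs-nfs/nfs-provisioner -n openebs \
  --set nfsStorageClass.backendStorageClass=openebs-jiva-csi-default
```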
helm config:
storage class:
Environment details:
- OpenEBS version (kubectl get po -n openebs --show-labels):
- Kubernetes version (kubectl version): v1.21.5+k3s
- Cloud provider or hardware configuration: Contabo VPS hardware
- OS (cat /etc/os-release): AlmaLinux 8.4 (Electric Cheetah)
- Kernel (uname -a): 4.18.0-305.19.1.el8_4.x86_64
Do I have a misconfigured setup, or is this a bug?
Thanks for the help!