Unable to mount to EFS file system #139
It seems the issue is not specific to mounting the EFS volume, since you can mount it fine by SSHing to the node, and the token volume 'efs-provisioner-token-ioyyf' could not mount either. Can you provide kubelet logs? You can email me if you don't want to post them publicly: mawong@redhat.com
How do I get kubelet logs from OpenShift? journalctl doesn't return anything.
It should be something like 'journalctl -u atomic-openshift-node'.
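To pull those logs, something like the following should work (a sketch only; the systemd unit name depends on the install type and OpenShift version, so atomic-openshift-node here is an assumption):

```sh
# Follow the node/kubelet logs (unit may be atomic-openshift-node or origin-node
# depending on the install; adjust as needed)
journalctl -u atomic-openshift-node --since "1 hour ago" --no-pager

# Narrow the output to mount attempts for the failing volumes
journalctl -u atomic-openshift-node --no-pager | grep -iE 'pv-volume|efs-provisioner-token'
```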
I have sent you an email, as there is quite a lot of logs.
UPDATE: After waiting for some time (while preparing the logs, writing this issue, etc.) I tried deleting the deployment and running it again with all the same settings, and it worked; the container spun up and is accessible. Is this behavior expected?
No, it's not expected, but at this point I'd consider it an OpenShift 3.4 bug. I couldn't find in the logs even an attempt by the kubelet to mount either pv-volume or efs-provisioner-token-ioyyf. I would open a bug with OpenShift if you encounter this again, including the node logs.
Hi wongma7, I have managed to get logs from the master-controller container.
I can then see this output in the logs, and efs-provisioner still can't start:
The errors before patching the serviceaccount are expected. However, I don't know about the periodic "error syncing deployment" / "object modified" errors. In my experience they can be normal too; Kubernetes should retry and eventually succeed. They basically mean that in the time between the controller starting its attempt to update the deployment and actually submitting its update request, something else updated it (e.g. a user, or perhaps the serviceaccount patching). My best guess is that the "error syncing deployment" messages aren't the source of the problem; they are probably just transient errors, because in the end the deployment's pod still got created. For some other reason, though, the pod's volumes aren't getting mounted.
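To confirm those sync errors are only transient, a quick check like this should show whether the pod was in fact created and what its mount events say (project name taken from the logs above; the deployment name efs-provisioner is an assumption):

```sh
# Did the deployment eventually produce a pod despite the transient sync errors?
oc get deployment efs-provisioner -n kristapstesting
oc get pods -n kristapstesting

# Recent events, sorted by time; repeated mount failures will show up here
oc get events -n kristapstesting --sort-by='.lastTimestamp'

# The pod's Events section lists the concrete mount failures, if any
oc describe pod efs-provisioner-2637432370-rc4mm -n kristapstesting
```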
I'll close this because I think it's an OpenShift bug; if you see this again, please open it on their end with the same logs and such. It's very strange that neither volume is mounted.
I'm experiencing this issue now as well: CentOS 7, Docker 1.18, and Kubernetes v1.10, with devicemapper as the storage driver for Docker. The pod logs read:
Pod doesn't seem to be coming up properly. |
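The usual checks for a pod stuck on volume mounts look roughly like this (names in angle brackets are placeholders; the kubelet unit name assumes a systemd-managed kubelet):

```sh
# Mount timeouts and NFS errors usually show up in the pod's Events section
kubectl describe pod <efs-provisioner-pod> -n <namespace>

# Kubelet logs on the node the pod was scheduled to
journalctl -u kubelet --no-pager | grep -i mount
```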
@nachimehta we need #422 since you have mounted via this IP 10.227.205.73:/:/persistentvolumes |
Hello.
Currently I am trying to create persistent volumes on OpenShift Container Platform 3.4 using the efs-provisioner pod.
The thing is, the very first time I was able to successfully deploy the pod, create the service account and clusterrole, and add policies to the service account. After running the oc patch command, the pod spun up immediately.
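Roughly, that sequence of steps was of this form (the file names, role name, and SCC below are approximations rather than my exact commands; the actual manifests come from the efs-provisioner example):

```sh
# Deploy the provisioner and create the service account and cluster role
# (file and role names are approximations)
oc create -f deployment.yaml
oc create serviceaccount efs-provisioner
oc create -f clusterrole.yaml

# Grant the policies to the service account
oc adm policy add-cluster-role-to-user efs-provisioner-runner \
    system:serviceaccount:kristapstesting:efs-provisioner
oc adm policy add-scc-to-user hostmount-anyuid \
    system:serviceaccount:kristapstesting:efs-provisioner

# Point the deployment at the service account; the pod then spins up
oc patch deployment efs-provisioner -p \
    '{"spec":{"template":{"spec":{"serviceAccountName":"efs-provisioner"}}}}'
```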
When trying to replicate all the steps with a different EFS filesystem and a new project on OCP, I get this error when the pod tries to spin up after the oc patch command:
Failed mount Unable to mount volumes for pod "efs-provisioner-2637432370-rc4mm_kristapstesting(f26eac51-4154-11e7-839c-026eb0b9aab0)": timeout expired waiting for volumes to attach/mount for pod "efs-provisioner-2637432370-rc4mm"/"kristapstesting". list of unattached/unmounted volumes=[pv-volume efs-provisioner-token-ioyyf] 3 times in the last 5 minutes
Failed sync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "efs-provisioner-2637432370-rc4mm"/"kristapstesting". list of unattached/unmounted volumes=[pv-volume efs-provisioner-token-ioyyf] 3 times in the last
I have tried several times with no luck. I am sure I am using the right directory from EFS in deployment.yaml, as well as the right EFS DNS name. SSHing into the application node and manually mounting the EFS filesystem works just fine.
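For completeness, the manual mount is essentially the standard EFS NFS mount (the filesystem DNS name and mount point below are placeholders for mine):

```sh
# Manually mount the EFS filesystem from the application node
sudo mkdir -p /mnt/efs-test
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-xxxxxxxx.efs.eu-west-1.amazonaws.com:/ /mnt/efs-test
```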
I have run out of ideas at this moment.
Can you please help?
Regards,
Kristaps