This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

Unable to mount to EFS file system #139

Closed
kristapsm opened this issue May 25, 2017 · 11 comments

Comments

kristapsm commented May 25, 2017

Hello.
Currently I am trying to create persistent volumes on OpenShift Container Platform 3.4 using the efs-provisioner pod.
The first time, I was able to deploy the pod, create the service account and clusterrole, and add the policies to the service account without any problems. After running the oc patch command, the pod spun up immediately.
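For context, the setup sequence was along these lines. This is only a rough sketch: the manifest file names and the hostmount-anyuid SCC grant are assumptions based on the efs-provisioner docs, not the exact commands from this report; only the oc patch line is quoted from it.

  # Sketch of the efs-provisioner setup on OpenShift; file names and the SCC
  # choice are assumptions, the namespace is the one from the error below.
  oc create -f serviceaccount.yaml
  oc create -f clusterrole.yaml
  oc create -f clusterrolebinding.yaml
  oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:kristapstesting:efs-provisioner
  oc create -f deployment.yaml
  oc patch deployment efs-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"efs-provisioner"}}}}'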

When trying to replicate all the steps with a different EFS file system and a new project on OCP, I get this error when the pod tries to spin up after the oc patch command:

Failed mount Unable to mount volumes for pod "efs-provisioner-2637432370-rc4mm_kristapstesting(f26eac51-4154-11e7-839c-026eb0b9aab0)": timeout expired waiting for volumes to attach/mount for pod "efs-provisioner-2637432370-rc4mm"/"kristapstesting". list of unattached/unmounted volumes=[pv-volume efs-provisioner-token-ioyyf] 3 times in the last 5 minutes

Failed sync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "efs-provisioner-2637432370-rc4mm"/"kristapstesting". list of unattached/unmounted volumes=[pv-volume efs-provisioner-token-ioyyf] 3 times in the last

I have tried several times with no luck. I am sure I am using the right directory from EFS in deployment.yaml, as well as the right EFS DNS name. SSHing into an application node and mounting EFS manually works just fine.
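For reference, the manual mount test from the node was along these lines (the file-system ID, region, and mount point are placeholders, not the real values):

  # Hypothetical manual mount over NFSv4.1, the protocol EFS uses.
  sudo mkdir -p /mnt/efs
  sudo mount -t nfs4 -o nfsvers=4.1 fs-xxxxxxxx.efs.<region>.amazonaws.com:/ /mnt/efs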
I have run out of ideas at this moment.
Can you please help?

Regards,
Kristaps

wongma7 commented May 25, 2017

It seems the issue is not specific to mounting the EFS volume, since you can mount it fine by SSHing into the node, and the token volume 'efs-provisioner-token-ioyyf' could not mount either. Can you provide kubelet logs? You can email them to me at mawong@redhat.com if you don't want to post them publicly.

@kristapsm (Author)

How do I get kubelet logs from OpenShift? journalctl doesn't return anything.

wongma7 commented May 25, 2017

It should be something like

'journalctl -u atomic-openshift-node'
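For example (the unit name differs between OpenShift Origin and OCP installs; the time window and output file below are only illustrative):

  # Dump recent node logs to a file to share.
  journalctl -u atomic-openshift-node --since "1 hour ago" > node.log
  # Or follow the log live while reproducing the failed mount.
  journalctl -u atomic-openshift-node -f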

@kristapsm (Author)

I have sent you an email, as there are quite a lot of logs.
Thanks again for helping out! I really appreciate it!

@kristapsm (Author)

UPDATE

After waiting for some time (while preparing the logs, writing this issue, etc.), I tried deleting the deployment and running it again with all the same settings, and it worked: the container spun up and is accessible. Is this behavior expected?
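The delete-and-recreate step was roughly the following (deployment.yaml stands for the same manifest used originally):

  oc delete deployment efs-provisioner
  oc create -f deployment.yaml
  oc patch deployment efs-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"efs-provisioner"}}}}'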

wongma7 commented May 25, 2017

No, it's not expected, but at this point I'd consider it an OpenShift 3.4 bug. I couldn't find in the logs even an attempt by the kubelet to mount either pv-volume or efs-provisioner-token-ioyyf. I would open a bug with OpenShift if you encounter this again, with the node logs attached.

@kristapsm (Author)

Hi wongma7,

I have managed to get logs from the master-controller container:

I0529 11:00:17.477237 1 replica_set.go:482] Too few "kristaps2"/"efs-provisioner-2771124237" replicas, need 1, creating 1
I0529 11:00:17.478329 1 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kristaps2", Name:"efs-provisioner", UID:"ffebfa83-445d-11e7-8c11-06e41192c1d2", APIVersion:"extensions", ResourceVersion:"860896", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set efs-provisioner-2771124237 to 1
I0529 11:00:17.492861 1 replica_set.go:503] Failed creation, decrementing expectations for replica set "kristaps2"/"efs-provisioner-2771124237"
E0529 11:00:17.492885 1 replica_set.go:505] unable to create pods: pods "efs-provisioner-2771124237-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]
I0529 11:00:17.493304 1 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kristaps2", Name:"efs-provisioner-2771124237", UID:"fff12756-445d-11e7-839c-026eb0b9aab0", APIVersion:"extensions", ResourceVersion:"860897", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "efs-provisioner-2771124237-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]
I0529 11:00:17.511787 1 replica_set.go:482] Too few "kristaps2"/"efs-provisioner-2771124237" replicas, need 1, creating 1
I0529 11:00:17.513401 1 deployment_controller.go:465] Error syncing deployment kristaps2/efs-provisioner: Operation cannot be fulfilled on deployments.extensions "efs-provisioner": the object has been modified; please apply your changes to the latest version and try again
I0529 11:00:17.522471 1 replica_set.go:503] Failed creation, decrementing expectations for replica set "kristaps2"/"efs-provisioner-2771124237"
E0529 11:00:17.522493 1 replica_set.go:505] unable to create pods: pods "efs-provisioner-2771124237-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]
I0529 11:00:17.522881 1 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kristaps2", Name:"efs-provisioner-2771124237", UID:"fff12756-445d-11e7-839c-026eb0b9aab0", APIVersion:"extensions", ResourceVersion:"860901", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "efs-provisioner-2771124237-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used spec.containers[0].securityContext.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]
This is before patching the deployment with the service account. Once I run:
oc patch deployment efs-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"efs-provisioner"}}}}'

I then see this output in the logs, and the efs-provisioner still can't start:

I0529 11:00:39.926626 1 replica_set.go:482] Too few "kristaps2"/"efs-provisioner-4191553080" replicas, need 1, creating 1
I0529 11:00:39.926881 1 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kristaps2", Name:"efs-provisioner", UID:"ffebfa83-445d-11e7-8c11-06e41192c1d2", APIVersion:"extensions", ResourceVersion:"860929", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set efs-provisioner-4191553080 to 1
I0529 11:00:39.964192 1 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kristaps2", Name:"efs-provisioner-4191553080", UID:"0d52a024-445e-11e7-839c-026eb0b9aab0", APIVersion:"extensions", ResourceVersion:"860930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: efs-provisioner-4191553080-te92s
I0529 11:00:39.978200 1 factory.go:211] Replication controller "kristaps2/" has been deleted
I0529 11:00:39.984998 1 deployment_controller.go:465] Error syncing deployment kristaps2/efs-provisioner: Operation cannot be fulfilled on replicasets "efs-provisioner-4191553080": the object has been modified; please apply your changes to the latest version and try again
I0529 11:00:40.019259 1 factory.go:211] Replication controller "kristaps2/" has been deleted
I0529 11:00:40.061933 1 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kristaps2", Name:"efs-provisioner", UID:"ffebfa83-445d-11e7-8c11-06e41192c1d2", APIVersion:"extensions", ResourceVersion:"860932", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set efs-provisioner-2771124237 to 0
I0529 11:00:40.084073 1 deployment_controller.go:465] Error syncing deployment kristaps2/efs-provisioner: Operation cannot be fulfilled on deployments.extensions "efs-provisioner": the object has been modified; please apply your changes to the latest version and try again
I0529 11:00:40.128793 1 deployment_controller.go:465] Error syncing deployment kristaps2/efs-provisioner: Operation cannot be fulfilled on deployments.extensions "efs-provisioner": the object has been modified; please apply your changes to the latest version and try again
Any idea why that is? Why does it say something about the object being modified?

wongma7 commented May 29, 2017

The errors before patching the serviceaccount are expected. However, I don't know about the periodic "error syncing deployment" / "object has been modified" errors. In my experience they can be normal too; Kubernetes should retry and eventually succeed. They basically mean that, between the controller starting its attempt to update the deployment and actually submitting the update request, something else updated the object (e.g. a user, such as the serviceaccount patching). My best guess is that the "error syncing deployment" messages aren't the source of the problem; they are probably just transient errors, because in the end the deployment's pod still got created. For some other reason, the pod's volumes aren't getting mounted.
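To make the first point concrete: before the patch, the pod runs under the namespace's default service account, which is only admitted by the restricted SCC, and restricted does not allow nfs volumes, hence the "forbidden" errors above. A sketch of how to inspect this and grant an SCC that does allow nfs follows; the hostmount-anyuid grant is an assumption based on the efs-provisioner docs, not a command from this thread.

  # List the volume plugins each SCC admits; "restricted" does not include nfs.
  oc get scc restricted -o yaml
  oc get scc hostmount-anyuid -o yaml
  # Let the efs-provisioner service account in namespace kristaps2 use an SCC
  # that permits nfs volumes (the SCC name here is assumed).
  oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:kristaps2:efs-provisioner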

wongma7 commented Jun 5, 2017

I'll close this because I think it's an OpenShift bug; if you see this again, please open an issue on their end with the same logs. It's very strange that neither volume is mounted.

wongma7 closed this as completed Jun 5, 2017
@nachimehta

I'm experiencing this issue now as well, on CentOS 7, Docker 1.18, and Kubernetes v1.10. We're using devicemapper as the storage driver for Docker.

Pod logs read:

F0410 21:00:22.901315 1 efs-provisioner.go:71] no mount entry found for fs-cc1ad184.efs.us-east-1.amazonaws.com among entries /dev/mapper/docker-202:1-92359617-ae0e3e779155bdd4fd3bdca26c0296dc5c8f8e7649fdaeaee6cda73f5acbae1e:/, proc:/proc, tmpfs:/dev, devpts:/dev/pts, sysfs:/sys, tmpfs:/sys/fs/cgroup, cgroup:/sys/fs/cgroup/systemd, cgroup:/sys/fs/cgroup/freezer, cgroup:/sys/fs/cgroup/cpu,cpuacct, cgroup:/sys/fs/cgroup/net_cls,net_prio, cgroup:/sys/fs/cgroup/cpuset, cgroup:/sys/fs/cgroup/perf_event, cgroup:/sys/fs/cgroup/blkio, cgroup:/sys/fs/cgroup/hugetlb, cgroup:/sys/fs/cgroup/devices, cgroup:/sys/fs/cgroup/memory, cgroup:/sys/fs/cgroup/pids, mqueue:/dev/mqueue, 10.227.205.73:/:/persistentvolumes, /dev/xvda1:/dev/termination-log, /dev/xvda1:/etc/resolv.conf, /dev/xvda1:/etc/hostname, /dev/xvda1:/etc/hosts, shm:/dev/shm, tmpfs:/var/run/secrets/kubernetes.io/serviceaccount, proc:/proc/bus, proc:/proc/fs, proc:/proc/irq, proc:/proc/sys, proc:/proc/sysrq-trigger, tmpfs:/proc/kcore, tmpfs:/proc/keys, tmpfs:/proc/timer_list, tmpfs:/proc/timer_stats, tmpfs:/proc/sched_debug, tmpfs:/proc/scsi, tmpfs:/sys/firmware,

Pod doesn't seem to be coming up properly.

wongma7 commented Apr 11, 2018

@nachimehta we need #422, since you have mounted via this IP: 10.227.205.73:/:/persistentvolumes
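In other words, the provisioner looks for its configured EFS DNS name in the container's mount table; here the volume was mounted by IP, so no entry matches, which is the case #422 is meant to handle. A quick way to check which server string was actually used (the pod name below is a placeholder):

  # Show the NFS mount backing /persistentvolumes inside the provisioner pod.
  kubectl exec efs-provisioner-xxxxxxxxxx-xxxxx -- grep persistentvolumes /proc/mounts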

yangruiray pushed a commit to yangruiray/external-storage that referenced this issue Jul 19, 2019