Hello, I have recently deployed and used the EBS driver on EKS for dynamic provisioning of MongoDB pods deployed with Bitnami's Helm charts (MongoDB and MongoDB sharded).
When persistence is enabled, my deployment originally creates a storage class with the mountOptions parameter set to ["tls"].
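For context, a StorageClass along these lines reproduces the setup described here. This is only an illustrative sketch: the name and volumeBindingMode are placeholder assumptions, while the provisioner and mountOptions match the report.

```yaml
# Illustrative reconstruction of the StorageClass described above.
# Only the provisioner and mountOptions are taken from the report;
# the name and binding mode are hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-ebs              # hypothetical name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - tls                          # passed through to mount; only meaningful to the EFS mount helper
```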
This used to work fine for EFS provisioning with the EFS driver, but when the storage class is created with provisioner ebs.csi.aws.com instead of efs.csi.aws.com, my MongoDB pods are stuck in Init:0/2 status and I keep getting the following events when describing the MongoDB pods (deployed with the "unsharded" Helm chart) with kubectl:
Normal SuccessfulAttachVolume 2m47s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-2f1ea585-14a6-4100-9d80-6837d88edfd0"
Warning FailedMount 46s kubelet Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[scripts empty-dir certs-volume certs common-scripts datadir custom-init-scripts]: timed out waiting for the condition
Warning FailedMount 37s (x9 over 2m46s) kubelet MountVolume.MountDevice failed for volume "pvc-2f1ea585-14a6-4100-9d80-6837d88edfd0" : rpc error: code = Internal desc = could not format "/dev/nvme1n1" and mount it at "/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.aws.com/48817a63b6a7edd1dc92c483a9331ace841a3466883c4b0e30dacdd9e6062c7e/globalmount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o tls,defaults /dev/nvme1n1 /var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.aws.com/48817a63b6a7edd1dc92c483a9331ace841a3466883c4b0e30dacdd9e6062c7e/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.aws.com/48817a63b6a7edd1dc92c483a9331ace841a3466883c4b0e30dacdd9e6062c7e/globalmount: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
I filed this ticket using the "Blank issue" template because I don't know whether this is really a bug, intentional behaviour, or a technical limitation.
Thanks for your answer!
The tls mount option is exclusive to the EFS CSI driver (via efs-utils), which is why it does not work with the EBS CSI driver. You will need to remove the tls mountOption for the volume to mount successfully.
For posterity, could you also send the manifests of the PVC, PV, and SC for the volume that is failing to mount?
This tls mount option appears to be unique to EFS. From the EFS documentation:
"Enabling encryption of data in transit for your Amazon EFS file system is done by enabling Transport Layer Security (TLS) when you mount your file system using the Amazon EFS mount helper."
While I'm no expert in that MongoDB Helm chart, to encrypt your data in transit for EBS volumes you can add the encrypted: "true" parameter to your EBS CSI driver StorageClass. See the Amazon EBS encryption documentation for more details.
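A minimal sketch of what that suggestion could look like, assuming an otherwise default StorageClass (the name, binding mode, and commented-out KMS key are placeholders, not taken from the actual deployment):

```yaml
# Sketch of the suggested fix: drop the tls mountOption entirely and request
# EBS encryption via the StorageClass parameters instead. EBS encryption covers
# data at rest on the volume and data moving between the instance and the volume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-ebs-encrypted    # hypothetical name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  encrypted: "true"              # StorageClass parameter values must be strings
  # kmsKeyId: <key ARN>          # optional: use a customer-managed KMS key
# Note: no mountOptions block; tls is only understood by the EFS mount helper.
```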