Driver crashes unexpectedly with "Failed to read /host/proc/mounts", requiring pod restart #284

Open
dienhartd opened this issue Nov 4, 2024 · 14 comments
Labels: bug (Something isn't working)

Comments

@dienhartd

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?
Periodically, and without warning, one of my S3 Mountpoint CSI driver pods starts failing with gRPC errors until I delete it. This usually causes a dependent pod to fail to start. The replacement pod created immediately after deletion works fine, but this requires manual intervention once I notice dependent pods crashing due to the missing PV.

What you expected to happen?
Error not to occur.

How to reproduce it (as minimally and precisely as possible)?
Unclear.

Anything else we need to know?:
Logs

I1104 11:59:40.249998       1 credential.go:95] NodePublishVolume: Using driver identity
I1104 11:59:40.250015       1 node.go:146] NodePublishVolume: mounting d-cluster at /var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount with options [--allow-delete --allow-other --gid=100 --uid=1000]
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.550806       1 driver.go:136] GRPC error: rpc error: code = Internal desc = Could not mount "d-cluster" at "/var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount": Could not check if "/var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount" is a mount point: stat /var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount: no such file or directory, Failed to read /host/proc/mounts after 3 tries: open /host/proc/mounts: invalid argument
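
For context, the log lines above suggest a code path that retries reading /host/proc/mounts a few times and then uses it to decide whether the target path is already a mount point. The following is only a rough sketch of that logic as I read it from the logs; the function names, retry count, and delay are my assumptions, not the actual mount.go source:

```go
// Simplified sketch of the behaviour the log lines suggest: read the host's
// /proc/mounts with a few retries, then use it to decide whether the target
// path is already a mount point. Names and retry constants are assumptions.
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// hostPath mount inside the CSI driver container (assumption based on the logs)
const hostProcMounts = "/host/proc/mounts"

// readHostProcMounts retries the read, mirroring the
// "Failed to read /host/proc/mounts on try N" log lines above.
func readHostProcMounts(retries int, delay time.Duration) ([]byte, error) {
	var lastErr error
	for i := 1; i <= retries; i++ {
		data, err := os.ReadFile(hostProcMounts)
		if err == nil {
			return data, nil
		}
		lastErr = err
		fmt.Fprintf(os.Stderr, "Failed to read %s on try %d: %v\n", hostProcMounts, i, err)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("failed to read %s after %d tries: %w", hostProcMounts, retries, lastErr)
}

// isMountPoint checks whether target appears as a mount target in /proc/mounts.
func isMountPoint(target string) (bool, error) {
	if _, err := os.Stat(target); err != nil {
		// corresponds to the "no such file or directory" part of the gRPC error
		return false, err
	}
	data, err := readHostProcMounts(3, 100*time.Millisecond)
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[1] == target {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// illustrative target path only; <pod-uid> is a placeholder
	mounted, err := isMountPoint("/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/s3-pv/mount")
	fmt.Println(mounted, err)
}
```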

Environment

  • Kubernetes version (use kubectl version):
    Client Version: v1.31.1
    Server Version: v1.30.5-eks-ce1d5eb

  • Driver version: v1.9.0
    The S3 Mountpoint CSI driver was installed through eksctl, i.e. eksctl create addon aws-mountpoint-s3-csi-driver

Was directed by @muddyfish to file this issue here: #174 (comment)

@dannycjones
Contributor

dannycjones commented Nov 6, 2024

Thanks for opening the bug report, @dienhartd. We'll investigate further.

Would you be able to review dmesg on the host and see if there are any error messages at the time of the issue, and share them if so? In particular, any error messages related to opening of /host/proc/mounts would be of interest.

@dannycjones added the bug label on Nov 6, 2024
@dannycjones
Contributor

Please can you let us know what operating system you're running on the cluster nodes too!

@John-Funcity

> Please can you let us know what operating system you're running on the cluster nodes too!

I have the same problem; I was running on Amazon Linux 2.

@dannycjones
Contributor

> Please can you let us know what operating system you're running on the cluster nodes too!
>
> I have the same problem; I was running on Amazon Linux 2.

Thanks for sharing, @John-Funcity. Please can you open a new issue so we can get logs relevant to your problem, and also include information such as the dmesg logs as mentioned in #284 (comment).

@John-Funcity

> Please can you let us know what operating system you're running on the cluster nodes too!
>
> I have the same problem; I was running on Amazon Linux 2.
>
> Thanks for sharing, @John-Funcity. Please can you open a new issue so we can get logs relevant to your problem, and also include information such as the dmesg logs as mentioned in #284 (comment).

[    2.280531] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[    2.291650] systemd[1]: Detected virtualization amazon.
[    2.295150] systemd[1]: Detected architecture x86-64.
[    2.298554] systemd[1]: Running in initial RAM disk.
[    2.302928] systemd[1]: No hostname configured.
[    2.306128] systemd[1]: Set hostname to <localhost>.
[    2.309546] systemd[1]: Initializing machine ID from VM UUID.
[    2.336041] systemd[1]: Reached target Local File Systems.
[    2.340338] systemd[1]: Reached target Swap.
[    2.344257] systemd[1]: Created slice Root Slice.
[    2.497890] XFS (nvme0n1p1): Mounting V5 Filesystem
[    2.666828] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    3.033970] XFS (nvme0n1p1): Ending clean mount
[    3.253141] systemd-journald[863]: Received SIGTERM from PID 1 (systemd).
[    3.309998] printk: systemd: 18 output lines suppressed due to ratelimiting
[    3.537461] SELinux:  Runtime disable is deprecated, use selinux=0 on the kernel cmdline.
[    3.543529] SELinux:  Disabled at runtime.
[    3.610275] audit: type=1404 audit(1732528464.939:2): enforcing=0 old_enforcing=0 auid=4294967

@John-Funcity

(screenshot attached)

@muddyfish
Contributor

Thanks @John-Funcity for the information, but could you please open a new issue so we're able to root-cause the issues separately from this one. Please include the dmesg logs and other logs following the logging guide: https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/docs/LOGGING.md

@John-Funcity

Maybe this is the problem?
https://karpenter.sh/v1.0/troubleshooting/
(screenshot attached)

@John-Funcity

MountVolume.SetUp failed for volume "s3-models-pv" : rpc error: code = Internal desc = Could not mount "xxxx-models-test" at "/var/lib/kubelet/pods/xxxxxxxxx/volumes/kubernetes.io~csi/s3-models-pv/mount": Could not check if "/var/lib/kubelet/pods/xxxxxxxx/volumes/kubernetes.io~csi/s3-models-pv/mount" is a mount point: stat /var/lib/kubelet/pods/xxxxxxxxx/volumes/kubernetes.io~csi/s3-models-pv/mount: no such file or directory, Failed to read /host/proc/mounts after 3 tries: open /host/proc/mounts: invalid argument

@fatihmete

Same issue and logs. When I delete the CSI pod running on the node where I get the error, the problem is fixed.

@unexge
Contributor

unexge commented Dec 4, 2024

Thanks for the reports @John-Funcity @fatihmete. Would you be able to share any log that might be relevant from dmesg? Also, could you try accessing /proc/mounts in the host to see if that works for you?
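
If it helps with debugging, here is a minimal sketch of a small Go program that attempts the same open the driver performs and prints the underlying syscall error. The /host/proc/mounts path (when run inside the CSI driver container) is an assumption based on the log lines in this issue; on the host itself the default /proc/mounts applies:

```go
// Minimal sketch for capturing the exact syscall error behind
// "open /host/proc/mounts: invalid argument". Pass the path as an argument,
// e.g. /proc/mounts on the host or /host/proc/mounts inside the driver container.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"
	"syscall"
)

func main() {
	path := "/proc/mounts"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	f, err := os.Open(path)
	if err != nil {
		var errno syscall.Errno
		if errors.As(err, &errno) {
			// EINVAL here would match the "invalid argument" in the driver logs.
			fmt.Printf("open %s failed: %v (errno %d)\n", path, err, int(errno))
		} else {
			fmt.Printf("open %s failed: %v\n", path, err)
		}
		os.Exit(1)
	}
	defer f.Close()

	data, err := io.ReadAll(f)
	if err != nil {
		fmt.Printf("read %s failed: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Printf("read %d bytes from %s\n", len(data), path)
}
```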

@fatihmete

> Thanks for the reports @John-Funcity @fatihmete. Would you be able to share any log that might be relevant from dmesg? Also, could you try accessing /proc/mounts in the host to see if that works for you?

The error does not follow a specific pattern, and I cannot predict when it will happen. I am getting the following error as well.

MountVolume.SetUp failed for volume "s3-models-pv" : rpc error: code = Internal desc = Could not mount "xxxx-models-test" at "/var/lib/kubelet/pods/xxxxxxxxx/volumes/kubernetes.io~csi/s3-models-pv/mount": Could not check if "/var/lib/kubelet/pods/xxxxxxxx/volumes/kubernetes.io~csi/s3-models-pv/mount" is a mount point: stat /var/lib/kubelet/pods/xxxxxxxxx/volumes/kubernetes.io~csi/s3-models-pv/mount: no such file or directory, Failed to read /host/proc/mounts after 3 tries: open /host/proc/mounts: invalid argument

CSI pods appear to be working without errors. I will add the logs when the problem occurs again.

@geniass

geniass commented Dec 5, 2024

@dannycjones would you prefer I opened a new issue as well? Seems I'm getting the exact same issue.

Running k3s v1.30.6+k3s1 on Ubuntu 22.04 (also on Ubuntu 24.04) and s3-mountpoint 1.10.0.
I am also using Longhorn for storage in the cluster; could it somehow interfere?

I am able to access /proc/mounts on the host, but I don't see anything in there related to S3 or CSI. What do we expect to find in there relating to the S3 CSI driver?
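
For reference, here is a quick sketch I used to filter /proc/mounts for anything that might relate to the driver; the match strings ("fuse", "mountpoint-s3", "kubernetes.io~csi") are guesses on my part about how an active Mountpoint volume would appear, not confirmed by the maintainers:

```go
// Sketch: list /proc/mounts entries that look related to the S3 CSI driver.
// The match strings below are assumptions about how Mountpoint mounts
// typically appear, not guaranteed identifiers.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, "mountpoint-s3") ||
			strings.Contains(line, "kubernetes.io~csi") ||
			strings.Contains(line, "fuse") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```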

Not much in dmesg (not sure if this is relevant):

Dec 05 03:58:35 raisin kernel: e1000e 0000:00:1f.6 eno1: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 05 08:34:50 raisin kernel: cni0: port 8(vetha3c0b873) entered disabled state
Dec 05 08:34:50 raisin kernel: device vetha3c0b873 left promiscuous mode
Dec 05 08:34:50 raisin kernel: cni0: port 8(vetha3c0b873) entered disabled state
Dec 05 08:35:00 raisin kernel: scsi host2: iSCSI Initiator over TCP/IP
Dec 05 08:35:00 raisin kernel: scsi 2:0:0:0: RAID              IET      Controller       0001 PQ: 0 ANSI: 5
Dec 05 08:35:00 raisin kernel: scsi 2:0:0:0: Attached scsi generic sg2 type 12
Dec 05 08:35:00 raisin kernel: scsi 2:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
Dec 05 08:35:00 raisin kernel: sd 2:0:0:1: Attached scsi generic sg3 type 0
Dec 05 08:35:00 raisin kernel: sd 2:0:0:1: Power-on or device reset occurred
Dec 05 08:35:00 raisin kernel: sd 2:0:0:1: [sdb] 209715200 512-byte logical blocks: (107 GB/100 GiB)
Dec 05 08:35:00 raisin kernel: sd 2:0:0:1: [sdb] Write Protect is off
Dec 05 08:35:00 raisin kernel: sd 2:0:0:1: [sdb] Mode Sense: 69 00 10 08
Dec 05 08:35:00 raisin kernel: sd 2:0:0:1: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 05 08:35:00 raisin kernel: sd 2:0:0:1: [sdb] Attached SCSI disk
Dec 05 08:35:11 raisin kernel: EXT4-fs (sdb): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 05 08:44:47 raisin kernel: scsi host2: iSCSI Initiator over TCP/IP
Dec 05 08:44:47 raisin kernel: scsi 2:0:0:0: RAID              IET      Controller       0001 PQ: 0 ANSI: 5
Dec 05 08:44:47 raisin kernel: scsi 2:0:0:0: Attached scsi generic sg2 type 12
Dec 05 08:44:47 raisin kernel: scsi 2:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
Dec 05 08:44:47 raisin kernel: sd 2:0:0:1: Attached scsi generic sg3 type 0
Dec 05 08:44:47 raisin kernel: sd 2:0:0:1: Power-on or device reset occurred
Dec 05 08:44:47 raisin kernel: sd 2:0:0:1: [sdb] 209715200 512-byte logical blocks: (107 GB/100 GiB)
Dec 05 08:44:47 raisin kernel: sd 2:0:0:1: [sdb] Write Protect is off
Dec 05 08:44:47 raisin kernel: sd 2:0:0:1: [sdb] Mode Sense: 69 00 10 08
Dec 05 08:44:47 raisin kernel: sd 2:0:0:1: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 05 08:44:47 raisin kernel: sd 2:0:0:1: [sdb] Attached SCSI disk
Dec 05 08:44:55 raisin kernel: EXT4-fs (sdb): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 05 08:52:47 raisin kernel: scsi host2: iSCSI Initiator over TCP/IP
Dec 05 08:52:47 raisin kernel: scsi 2:0:0:0: RAID              IET      Controller       0001 PQ: 0 ANSI: 5
Dec 05 08:52:47 raisin kernel: scsi 2:0:0:0: Attached scsi generic sg2 type 12
Dec 05 08:52:47 raisin kernel: scsi 2:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
Dec 05 08:52:47 raisin kernel: sd 2:0:0:1: Attached scsi generic sg3 type 0
Dec 05 08:52:47 raisin kernel: sd 2:0:0:1: Power-on or device reset occurred
Dec 05 08:52:47 raisin kernel: sd 2:0:0:1: [sdb] 209715200 512-byte logical blocks: (107 GB/100 GiB)
Dec 05 08:52:47 raisin kernel: sd 2:0:0:1: [sdb] Write Protect is off
Dec 05 08:52:47 raisin kernel: sd 2:0:0:1: [sdb] Mode Sense: 69 00 10 08
Dec 05 08:52:47 raisin kernel: sd 2:0:0:1: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 05 08:52:47 raisin kernel: sd 2:0:0:1: [sdb] Attached SCSI disk
Dec 05 08:52:56 raisin kernel: EXT4-fs (sdb): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.

@dienhartd
Author

dienhartd commented Dec 5, 2024

My team has now moved this data from S3 to EFS. That said, when we were using S3 and the S3 Mountpoint driver, I'm not positive I had access to dmesg logs, since I was using managed rather than unmanaged nodes in EKS.

This was usually Amazon Linux 2, though I believe this also happened with the Ubuntu AMI.
