
Systemd warnings due to excessively long mount path names generated by RBD CSI driver #5183

Open
quantumcnsai opened this issue Feb 27, 2025 · 1 comment


@quantumcnsai

When using the RBD CSI driver, Kubernetes/CSI generates mount paths with extremely long UUID-based directory names (e.g., /var/lib/kubelet/plugins/kubernetes.io/csi/rbd.csi.ceph.com/[hash]/globalmount/[hash]). These paths exceed systemd’s 256-character unit name limit, causing frequent warnings in system logs:

systemd[1]: Mount point path '...' too long to fit into unit name, ignoring mount point.
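For a rough sense of scale, the sketch below (plain shell, using a placeholder PV hash and volume_id rather than values from a real cluster) shows how long such a globalmount path already is before systemd escapes it into a unit name:

```shell
# Build a representative RBD CSI globalmount path. The 64-char PV hash and
# the volume_id below are placeholders, not values from a real cluster.
pv_hash=$(printf 'a%.0s' $(seq 1 64))
vol_id="0001-0009-rook-ceph-0000000000000006-b37fc701-0b00-448d-ad31-8831f15e5721"
path="/var/lib/kubelet/plugins/kubernetes.io/csi/rbd.csi.ceph.com/${pv_hash}/globalmount/${vol_id}"
echo "${#path}"  # 210 -- and the escaped unit name is longer still, since
                 # '/' maps to '-' and each literal '-' expands to \x2d
```

With a longer cluster ID or volume handle, the escaped unit name easily crosses the 256-character limit that triggers the warning.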

Environment

  • Image/version of Ceph CSI driver : quay.io/cephcsi/cephcsi:v3.13.0
  • Helm chart version : 3.13.0
  • Kernel version : 5.15.0-125-generic
  • Mounter used for mounting PVC (for CephFS it's fuse or kernel; for RBD it's
    krbd or rbd-nbd) : krbd
  • Kubernetes cluster version : v1.30.4
  • Ceph cluster version : 18.2.4

Steps to Reproduce

  1. Deploy the Ceph CSI RBD driver (default configuration).
  2. Create a PVC using the RBD StorageClass.
  3. Observe systemd/journald logs on the node where the PVC is mounted:

tail -f /var/log/syslog | grep "too long"

Expected Behavior
No systemd warnings related to mount path length.

Actual Behavior
Persistent systemd warnings due to CSI-generated mount paths exceeding the 256-character unit name limit.

Workarounds Attempted

  1. Shortening CSI driver’s plugin-dir path (e.g., /csi instead of /var/lib/kubelet/plugins/...).
  2. Using symbolic links to shorten the path.
  3. Adding x-systemd.ignore to mountOptions in StorageClass.
  4. Systemd journal filtering (suppresses logs but does not fix the root cause).

None of these fully resolve the warnings.
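For completeness, workaround 4 can be implemented with an rsyslog filter like the one below (file name hypothetical; this only hides the warning from syslog and does not address the root cause):

```
# /etc/rsyslog.d/30-ignore-mount-unit-warnings.conf (hypothetical name)
# Drop systemd's "too long to fit into unit name" messages before they
# reach /var/log/syslog. This suppresses the symptom only.
:msg, contains, "too long to fit into unit name" stop
```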

Proposed Solutions

  1. Add an option to truncate or shorten UUIDs in CSI-generated paths.
  2. Allow configuring a custom "base mount directory" (e.g., /csi) to reduce path length.
  3. Use shorter deterministic hashes (e.g., 12-character instead of 64-character).
  4. Document an official workaround for systemd-based distributions.

Additional Context

  • Full systemd warning example:
    Feb 28 02:26:40 node1 systemd[1]: Mount point path '/var/lib/kubelet/plugins/kubernetes.io/csi/rbd.csi.ceph.com/e10320c50ed3...d46f9b9e1-0000000000000006-b37fc701-0b00-448d-ad31-8831f15e5721' too long.
    
  • Affects all environments where RBD PVCs are used with long UUIDs.

Question

Are there plans to address this issue in future releases? If not, could the Ceph CSI team provide guidance on a sustainable fix?

@nixpanic (Member)

The issue seems to have been reported quite a while back at systemd/systemd#26371, but there does not appear to have been any real discussion there, or a suggestion to try out.

I wonder whether a systemd mount unit is needed at all for something that Kubernetes is in charge of. Maybe there is a mount option that can instruct systemd to ignore a particular mount point/action and not create a unit for it?

Changing the hashing in the path name is not trivial. The PV has a hash that is generated by Kubernetes; it is part of the path where Ceph-CSI is expected to mount the volume. The volume_id (currently 64 bytes, though the CSI specification allows 128) identifies the RBD image, and encodes the Ceph cluster, the RADOS pool, and the UUID of the image. Any change we make there needs to be backwards compatible.

Possible solutions I can think of:

  1. A systemd/mount option or configuration value to ignore actions related to RBD devices and/or CephFS filesystems.
  2. Reduce the size of the volume_id (internal/util/volid.go) for newly created volumes.
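As a rough illustration of option 2, just storing the image UUID portion without its dashes would shave 4 characters off every new volume_id (a sketch only, using a placeholder UUID; any real change in internal/util/volid.go would have to remain decodable alongside the old format):

```shell
# Strip the dashes from the image UUID portion of a volume_id.
# Placeholder UUID; a real change must keep old IDs parseable.
uuid="b37fc701-0b00-448d-ad31-8831f15e5721"
short=$(printf '%s' "$uuid" | tr -d '-')
echo "$short"   # b37fc7010b00448dad318831f15e5721 (32 chars instead of 36)
```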
