
PVC pending status: Waiting for a volume to be created either by the external provisioner #682

Closed
mansoncui opened this issue Jun 6, 2024 · 5 comments

Comments

@mansoncui

A KubeVirt DataVolume (DV) image import leaves the PVC stuck in Pending. kubectl describe pvc pvc-name reports:

Assuming an external populator will provision the volume
 Normal  ExternalProvisioning         9s (x8 over 64s)   persistentvolume-controller                                                     
Waiting for a volume to be created either by the external provisioner 'nfs.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

The KubeVirt DataVolume HTTP image import YAML:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: img-cirros
spec:
  source:
    http:
      url: http://10.2.xx.xx/images/cirros-0.5.1-x86_64-disk.img 
  pvc:
    storageClassName: "nfs-client"
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi

Environment:
  • CSI Driver version: v4.7.0
  • Kubernetes version (kubectl version): 1.28.2
  • OS (from /etc/os-release): CentOS 7.3
  • Kernel (uname -a): 5.4.153-1.el7.elrepo.x86_64
@cccsss01
Contributor

cccsss01 commented Jun 8, 2024

I'm having similar issues.

@andyzhangx
Member

This error means the NFS CSI driver is not installed or registered correctly on your cluster. Please provide controller logs by following https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/csi-debug.md#case1-volume-createdelete-failed
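A quick way to confirm registration is to compare what the cluster actually knows about the driver against the StorageClass. These are illustrative commands; the label selector and namespace below assume a default Helm install (in the logs later in this issue the release was renamed, so the label there is app=csi-nfs2-controller):

```shell
# 1. Is the CSIDriver object present, and under which name?
kubectl get csidriver

# 2. Is the controller pod running and ready?
kubectl get pods -n kube-system -l app=csi-nfs-controller -o wide

# 3. Has the node plugin registered on each node?
kubectl get csinode -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.drivers[*].name}{"\n"}{end}'
```

The driver name printed by the first and third commands must match the provisioner field of the StorageClass exactly, otherwise the external provisioner never picks up the PVC and it stays Pending with exactly this event.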

@luckury

luckury commented Jun 18, 2024

How many PVs are in your cluster overall? We had a similar issue where we "exhausted" the number of PVs the cluster could hold with the NFS CSI driver; it appeared to be 20 per (worker) node. Could be the same issue as in #649.

I'll leave this as a reference for the devs, since this NFS driver does not seem to support this option:
https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo
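To check whether a per-node volume limit could be in play, the CSINode object exposes any limit a driver advertised via NodeGetInfo (field names per the storage.k8s.io/v1 API; a driver that reports no limit simply shows an empty allocatable column):

```shell
# Print each node followed by the advertised per-node volume limit, if any
kubectl get csinode -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.drivers[*].allocatable.count}{"\n"}{end}'
```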

@Ryan-ZL-Lin

Ryan-ZL-Lin commented Jun 26, 2024

I'm getting a similar issue with dynamic volume provisioning. Could anyone provide some tips on what's going wrong here?
Here are my logs for reference:

StorageClass

ubuntu@master:~$ kubectl describe StorageClass nfs-csi
Name:            nfs-csi
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"nfs-csi"},"mountOptions":["nfsvers=4.1"],"parameters":{"readOnly":"false","server":"30.60.90.8","share":"/redis"},"provisioner":"nfs.csi.k8s.io","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           nfs.csi.k8s.io
Parameters:            readOnly=false,server=30.60.90.8,share=/redis
AllowVolumeExpansion:  <unset>
MountOptions:
  nfsvers=4.1
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>
ubuntu@master:~$

PVC Error

ubuntu@master:~$ kubectl describe pvc test-nfs-pvc -n redis
Name:          test-nfs-pvc
Namespace:     redis
StorageClass:  nfs-csi
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
               volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  ExternalProvisioning  11s (x5 over 59s)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'nfs.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

csi-nfs2-controller

Log

ubuntu@master:~$ kubectl get pod -o wide -n kube-system | grep csi-nfs2-controller
csi-nfs2-controller-7db75c4bb4-mrpbh           4/4     Running   0               17h     30.60.90.30       worker1   <none>           <none>
ubuntu@master:~$ kubectl logs csi-nfs2-controller-7db75c4bb4-mrpbh -n kube-system
Defaulted container "csi-provisioner" out of: csi-provisioner, csi-snapshotter, liveness-probe, nfs
I0625 10:24:42.616299       1 feature_gate.go:249] feature gates: &{map[]}
I0625 10:24:42.616370       1 csi-provisioner.go:154] Version: v4.0.0
I0625 10:24:42.616380       1 csi-provisioner.go:177] Building kube configs for running in cluster...
I0625 10:24:43.619489       1 common.go:138] Probing CSI driver for readiness
I0625 10:24:43.625949       1 csi-provisioner.go:230] Detected CSI driver nfs2.csi.k8s.io
I0625 10:24:43.627697       1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I0625 10:24:43.628149       1 controller.go:732] Using saving PVs to API server in background
I0625 10:24:43.628949       1 leaderelection.go:250] attempting to acquire leader lease kube-system/nfs2-csi-k8s-io...
I0625 10:24:43.638329       1 leaderelection.go:260] successfully acquired lease kube-system/nfs2-csi-k8s-io
I0625 10:24:43.638437       1 leader_election.go:177] became leader, starting
I0625 10:24:43.639826       1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go/informers/factory.go:159
I0625 10:24:43.640228       1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:159
I0625 10:24:43.739146       1 controller.go:811] Starting provisioner controller nfs2.csi.k8s.io_worker1_9735c34d-0661-49c2-9935-fb0e791ae08a!
I0625 10:24:43.739196       1 volume_store.go:97] Starting save volume queue
I0625 10:24:43.739312       1 clone_controller.go:66] Starting CloningProtection controller
I0625 10:24:43.739407       1 clone_controller.go:82] Started CloningProtection controller
I0625 10:24:43.740641       1 reflector.go:351] Caches populated for *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848
I0625 10:24:43.741228       1 reflector.go:351] Caches populated for *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845
I0625 10:24:43.840610       1 controller.go:860] Started provisioner controller nfs2.csi.k8s.io_worker1_9735c34d-0661-49c2-9935-fb0e791ae08a!

Description

ubuntu@master:~$ kubectl describe pod csi-nfs2-controller-7db75c4bb4-mrpbh -n kube-system
Name:                 csi-nfs2-controller-7db75c4bb4-mrpbh
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      csi-nfs2-controller-sa
Node:                 worker1/30.60.90.30
Start Time:           Tue, 25 Jun 2024 10:24:41 +0000
Labels:               app=csi-nfs2-controller
                      app.kubernetes.io/instance=csi-driver-nfs2
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=csi-driver-nfs
                      app.kubernetes.io/version=v4.7.0
                      helm.sh/chart=csi-driver-nfs-v4.7.0
                      pod-template-hash=7db75c4bb4
Annotations:          <none>
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   30.60.90.30
IPs:
  IP:           30.60.90.30
Controlled By:  ReplicaSet/csi-nfs2-controller-7db75c4bb4
Containers:
  csi-provisioner:
    Container ID:  containerd://d5a515811b4362b9187694dad8a891fc0da57d500fa9eeb329bf9e38d7f098a2
    Image:         registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
    Image ID:      registry.k8s.io/sig-storage/csi-provisioner@sha256:de79c8bbc271622eb94d2ee8689f189ea7c1cb6adac260a421980fe5eed66708
    Port:          <none>
    Host Port:     <none>
    Args:
      -v=2
      --csi-address=$(ADDRESS)
      --leader-election
      --leader-election-namespace=kube-system
      --extra-create-metadata=true
      --timeout=1200s
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  400Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlzw2 (ro)
  csi-snapshotter:
    Container ID:  containerd://ac2471a21bb2e57f5c839e5ca25af8e090822e30d3a2ec3f278e1eeea4bde891
    Image:         registry.k8s.io/sig-storage/csi-snapshotter:v6.3.3
    Image ID:      registry.k8s.io/sig-storage/csi-snapshotter@sha256:f1bd6ee18c4021c1c94f29edfab89b49b6a4d1b800936c19dbef2d75f8202f2d
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=2
      --csi-address=$(ADDRESS)
      --leader-election-namespace=kube-system
      --leader-election
      --timeout=1200s
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  200Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlzw2 (ro)
  liveness-probe:
    Container ID:  containerd://31e0afe5b066a968a82844e7c99f2d3afee99d42db39ea7851a33b15eda1ce09
    Image:         registry.k8s.io/sig-storage/livenessprobe:v2.12.0
    Image ID:      registry.k8s.io/sig-storage/livenessprobe@sha256:5baeb4a6d7d517434292758928bb33efc6397368cbb48c8a4cf29496abf4e987
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/csi/csi.sock
      --probe-timeout=3s
      --http-endpoint=localhost:29652
      --v=2
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlzw2 (ro)
  nfs:
    Container ID:  containerd://db2fc3aee23ad8af245d65a0d1f0c86bfa4a38cba58601ef62dc5667eae4fc35
    Image:         registry.k8s.io/sig-storage/nfsplugin:v4.7.0
    Image ID:      registry.k8s.io/sig-storage/nfsplugin@sha256:92e5585a12c6f7aa9f0766caaea9b236c166b42935cb2210c7f77eba9413d74f
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --nodeid=$(NODE_ID)
      --endpoint=$(CSI_ENDPOINT)
      --drivername=nfs2.csi.k8s.io
      --mount-permissions=0
      --working-mount-dir=/tmp
      --default-ondelete-policy=delete
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:43 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  200Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://localhost:29652/healthz delay=30s timeout=10s period=30s #success=1 #failure=5
    Environment:
      NODE_ID:        (v1:spec.nodeName)
      CSI_ENDPOINT:  unix:///csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /tmp from tmp-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlzw2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
  socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-qlzw2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                             node-role.kubernetes.io/controlplane:NoSchedule op=Exists
                             node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

csi-nfs2-node

Log

ubuntu@master:~$ kubectl exec -it csi-nfs2-node-gwk69 -n kube-system -c nfs -- mount | grep nfs
30.60.90.8:/srv on /var/lib/kubelet/pods/c768438e-4799-4a17-87d8-f6c4dcea0a18/volumes/kubernetes.io~nfs/srv type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=30.60.90.7,local_lock=none,addr=30.60.90.8)
30.60.90.8:/srv/LLM_Model_Repo/inflight_batcher_llm on /var/lib/kubelet/pods/c768438e-4799-4a17-87d8-f6c4dcea0a18/volumes/kubernetes.io~nfs/models type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=30.60.90.7,local_lock=none,addr=30.60.90.8)
ubuntu@master:~$ kubectl exec -it csi-nfs2-node-q7d2w -n kube-system -c nfs -- mount | grep nfs
ubuntu@master:~$ kubectl exec -it csi-nfs2-node-zrckw -n kube-system -c nfs -- mount | grep nfs
30.60.90.8:/srv/LLM_Model_Repo/inflight_batcher_llm on /var/lib/kubelet/pods/e3bc3683-d03b-43dc-bfe9-234b171c1926/volumes/kubernetes.io~nfs/models type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=30.60.90.30,local_lock=none,addr=30.60.90.8)
30.60.90.8:/srv on /var/lib/kubelet/pods/e3bc3683-d03b-43dc-bfe9-234b171c1926/volumes/kubernetes.io~nfs/srv type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=30.60.90.30,local_lock=none,addr=30.60.90.8)
ubuntu@master:~$

Description

csi-nfs2-node-gwk69

ubuntu@master:~$ kubectl describe po csi-nfs2-node-gwk69 -n kube-system
Name:                 csi-nfs2-node-gwk69
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      csi-nfs2-node-sa
Node:                 worker2/30.60.90.7
Start Time:           Tue, 25 Jun 2024 10:24:41 +0000
Labels:               app=csi-nfs2-node
                      app.kubernetes.io/instance=csi-driver-nfs2
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=csi-driver-nfs
                      app.kubernetes.io/version=v4.7.0
                      controller-revision-hash=86b6667759
                      helm.sh/chart=csi-driver-nfs-v4.7.0
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   30.60.90.7
IPs:
  IP:           30.60.90.7
Controlled By:  DaemonSet/csi-nfs2-node
Containers:
  liveness-probe:
    Container ID:  containerd://432e3d3635a850c9b7259d9d48e5c315d1776532607d7c1ad296e20eee0122b7
    Image:         registry.k8s.io/sig-storage/livenessprobe:v2.12.0
    Image ID:      registry.k8s.io/sig-storage/livenessprobe@sha256:5baeb4a6d7d517434292758928bb33efc6397368cbb48c8a4cf29496abf4e987
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/csi/csi.sock
      --probe-timeout=3s
      --http-endpoint=localhost:39653
      --v=2
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8jqk (ro)
  node-driver-registrar:
    Container ID:  containerd://1e7f8b6bcc7bf69947610b77ae70e0b79d5ae83c9f58dbf83651c766384c43e9
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
    Image ID:      registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:c53535af8a7f7e3164609838c4b191b42b2d81238d75c1b2a2b582ada62a9780
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=2
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  exec [/csi-node-driver-registrar --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) --mode=kubelet-registration-probe] delay=30s timeout=15s period=10s #success=1 #failure=3
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/csi-nfsplugin/csi.sock
      KUBE_NODE_NAME:         (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8jqk (ro)
  nfs:
    Container ID:  containerd://3de5727cd0301358da5a5c2271376d240d82b4349c849e7dc24c15bb0c980da8
    Image:         registry.k8s.io/sig-storage/nfsplugin:v4.7.0
    Image ID:      registry.k8s.io/sig-storage/nfsplugin@sha256:92e5585a12c6f7aa9f0766caaea9b236c166b42935cb2210c7f77eba9413d74f
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --nodeid=$(NODE_ID)
      --endpoint=$(CSI_ENDPOINT)
      --drivername=nfs2.csi.k8s.io
      --mount-permissions=0
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  300Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://localhost:39653/healthz delay=30s timeout=10s period=30s #success=1 #failure=5
    Environment:
      NODE_ID:        (v1:spec.nodeName)
      CSI_ENDPOINT:  unix:///csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8jqk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-nfsplugin
    HostPathType:  DirectoryOrCreate
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  Directory
  kube-api-access-n8jqk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>
ubuntu@master:~$

csi-nfs2-node-q7d2w

ubuntu@master:~$ kubectl describe po csi-nfs2-node-q7d2w -n kube-system
Name:                 csi-nfs2-node-q7d2w
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      csi-nfs2-node-sa
Node:                 master/30.60.90.19
Start Time:           Tue, 25 Jun 2024 10:24:41 +0000
Labels:               app=csi-nfs2-node
                      app.kubernetes.io/instance=csi-driver-nfs2
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=csi-driver-nfs
                      app.kubernetes.io/version=v4.7.0
                      controller-revision-hash=86b6667759
                      helm.sh/chart=csi-driver-nfs-v4.7.0
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   30.60.90.19
IPs:
  IP:           30.60.90.19
Controlled By:  DaemonSet/csi-nfs2-node
Containers:
  liveness-probe:
    Container ID:  containerd://c771a41bf04edf67196c02f071e17258c3a134e29ae53317d465560b4b55fda3
    Image:         registry.k8s.io/sig-storage/livenessprobe:v2.12.0
    Image ID:      registry.k8s.io/sig-storage/livenessprobe@sha256:5baeb4a6d7d517434292758928bb33efc6397368cbb48c8a4cf29496abf4e987
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/csi/csi.sock
      --probe-timeout=3s
      --http-endpoint=localhost:39653
      --v=2
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w87z9 (ro)
  node-driver-registrar:
    Container ID:  containerd://4a6ccf4729d2c5c9c94f44084e47ce0816a857cf0fcdd82d1d3f0d2903402ef2
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
    Image ID:      registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:c53535af8a7f7e3164609838c4b191b42b2d81238d75c1b2a2b582ada62a9780
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=2
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  exec [/csi-node-driver-registrar --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) --mode=kubelet-registration-probe] delay=30s timeout=15s period=10s #success=1 #failure=3
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/csi-nfsplugin/csi.sock
      KUBE_NODE_NAME:         (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w87z9 (ro)
  nfs:
    Container ID:  containerd://d3df7ab62ee9d7a2987fbb3727a3eb0144a09db1128b9a36724fbed4f6949dfd
    Image:         registry.k8s.io/sig-storage/nfsplugin:v4.7.0
    Image ID:      registry.k8s.io/sig-storage/nfsplugin@sha256:92e5585a12c6f7aa9f0766caaea9b236c166b42935cb2210c7f77eba9413d74f
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --nodeid=$(NODE_ID)
      --endpoint=$(CSI_ENDPOINT)
      --drivername=nfs2.csi.k8s.io
      --mount-permissions=0
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  300Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://localhost:39653/healthz delay=30s timeout=10s period=30s #success=1 #failure=5
    Environment:
      NODE_ID:        (v1:spec.nodeName)
      CSI_ENDPOINT:  unix:///csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w87z9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-nfsplugin
    HostPathType:  DirectoryOrCreate
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  Directory
  kube-api-access-w87z9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>
ubuntu@master:~$

csi-nfs2-node-zrckw

ubuntu@master:~$ kubectl describe po csi-nfs2-node-zrckw -n kube-system
Name:                 csi-nfs2-node-zrckw
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      csi-nfs2-node-sa
Node:                 worker1/30.60.90.30
Start Time:           Tue, 25 Jun 2024 10:24:41 +0000
Labels:               app=csi-nfs2-node
                      app.kubernetes.io/instance=csi-driver-nfs2
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=csi-driver-nfs
                      app.kubernetes.io/version=v4.7.0
                      controller-revision-hash=86b6667759
                      helm.sh/chart=csi-driver-nfs-v4.7.0
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   30.60.90.30
IPs:
  IP:           30.60.90.30
Controlled By:  DaemonSet/csi-nfs2-node
Containers:
  liveness-probe:
    Container ID:  containerd://f020c8c38f4c328e57b6e405b29d88c5f3b53aa649dc24d32221b5ed8cea3f83
    Image:         registry.k8s.io/sig-storage/livenessprobe:v2.12.0
    Image ID:      registry.k8s.io/sig-storage/livenessprobe@sha256:5baeb4a6d7d517434292758928bb33efc6397368cbb48c8a4cf29496abf4e987
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/csi/csi.sock
      --probe-timeout=3s
      --http-endpoint=localhost:39653
      --v=2
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-45862 (ro)
  node-driver-registrar:
    Container ID:  containerd://4047a6f5d68b89c336825dc906403fc406b132cbbd19e7feb6b4fa29d461c2d8
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
    Image ID:      registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:c53535af8a7f7e3164609838c4b191b42b2d81238d75c1b2a2b582ada62a9780
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=2
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  exec [/csi-node-driver-registrar --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) --mode=kubelet-registration-probe] delay=30s timeout=15s period=10s #success=1 #failure=3
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/csi-nfsplugin/csi.sock
      KUBE_NODE_NAME:         (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-45862 (ro)
  nfs:
    Container ID:  containerd://a3f5f2e95bb6570f4ab95523350d839f9f202704826df61a60278cee80c22526
    Image:         registry.k8s.io/sig-storage/nfsplugin:v4.7.0
    Image ID:      registry.k8s.io/sig-storage/nfsplugin@sha256:92e5585a12c6f7aa9f0766caaea9b236c166b42935cb2210c7f77eba9413d74f
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --nodeid=$(NODE_ID)
      --endpoint=$(CSI_ENDPOINT)
      --drivername=nfs2.csi.k8s.io
      --mount-permissions=0
    State:          Running
      Started:      Tue, 25 Jun 2024 10:24:42 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  300Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://localhost:39653/healthz delay=30s timeout=10s period=30s #success=1 #failure=5
    Environment:
      NODE_ID:        (v1:spec.nodeName)
      CSI_ENDPOINT:  unix:///csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-45862 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-nfsplugin
    HostPathType:  DirectoryOrCreate
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  Directory
  kube-api-access-45862:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>
ubuntu@master:~$

@andyzhangx
Member

@Ryan-ZL-Lin that's because you are running the NFS CSI controller with --drivername=nfs2.csi.k8s.io, while your StorageClass still uses provisioner: nfs.csi.k8s.io. If you change the driver name, you also need to change the provisioner in the StorageClass to match.
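For illustration, a StorageClass matching the renamed driver could look like this (server/share/mountOptions values copied from the kubectl describe StorageClass output above; since the provisioner field of an existing StorageClass is immutable, you would need to delete and recreate it, then recreate the PVC):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs2.csi.k8s.io   # must match the --drivername passed to the nfs plugin
parameters:
  server: 30.60.90.8
  share: /redis
  readOnly: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```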
