[hyperkit minikube] NFS Provisioner going in crash loop #2597

Closed
backtrackshubham opened this issue Jan 20, 2022 · 20 comments
Labels
kind/support Categorizes issue or PR as a support question.

backtrackshubham commented Jan 20, 2022

What happened: I used to use kind with Docker Desktop and was happily deploying an NFS server in Kubernetes using Helm, with the image quay.io/kubernetes_incubator/nfs-provisioner:v2.3.0. I recently switched to minikube with --no-kubernetes to use the Docker daemon inside it, mainly because the previous setup was taking too many resources. Now, when I deploy the same NFS server with the same config on a kind cluster created inside minikube's Docker (no Kubernetes), the nfs-provisioner pod goes into a crash loop. However, if I run minikube with Kubernetes and deploy the same NFS server using Helm, it runs successfully, but as soon as I submit a PVC it says it can't provision the storage (even when the amount of storage I asked for was 1Mi). I understand that part could be related to minikube, but the crash-loop issue is still there. I am using minikube with the hyperkit driver. I tried to debug the issue to gather more information but couldn't, so I would also like to ask for direction on how to debug this, or possibly make it work, and some hints on what might be happening.

What you expected to happen: A kind cluster should be more agnostic to the environment it runs in.

How to reproduce it (as minimally and precisely as possible):
Step 1: Start minikube without Kubernetes and point your shell at its Docker daemon
minikube start --driver hyperkit --no-kubernetes; eval $(minikube -p minikube docker-env)
Step 2: Create a kind cluster
kind create cluster
Step 3: Install the NFS server (with Helm if you have it, otherwise apply the YAML below)

NFS YAMLs

# Source: nfs-server-provisioner/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nfs-server-provisioner
    chart: nfs-server-provisioner-1.1.3
    heritage: Helm
    release: nfs-provisioner
  name: nfs-provisioner-nfs-server-provisioner
---
# Source: nfs-server-provisioner/templates/storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
  labels:
    app: nfs-server-provisioner
    chart: nfs-server-provisioner-1.1.3
    heritage: Helm
    release: nfs-provisioner
provisioner: cluster.local/nfs-provisioner-nfs-server-provisioner
reclaimPolicy: Delete

allowVolumeExpansion: true

mountOptions:
  - vers=3
---
# Source: nfs-server-provisioner/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-nfs-server-provisioner
  labels:
    app: nfs-server-provisioner
    chart: nfs-server-provisioner-1.1.3
    heritage: Helm
    release: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
---
# Source: nfs-server-provisioner/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: nfs-server-provisioner
    chart: nfs-server-provisioner-1.1.3
    heritage: Helm
    release: nfs-provisioner
  name: nfs-provisioner-nfs-server-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfs-provisioner-nfs-server-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-nfs-server-provisioner
    namespace: default
---
# Source: nfs-server-provisioner/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-provisioner-nfs-server-provisioner
  labels:
    app: nfs-server-provisioner
    chart: nfs-server-provisioner-1.1.3
    heritage: Helm
    release: nfs-provisioner
spec:
  type: ClusterIP
  ports:
    - port: 2049
      targetPort: nfs
      protocol: TCP
      name: nfs
    - port: 2049
      targetPort: nfs-udp
      protocol: UDP
      name: nfs-udp
    - port: 32803
      targetPort: nlockmgr
      protocol: TCP
      name: nlockmgr
    - port: 32803
      targetPort: nlockmgr-udp
      protocol: UDP
      name: nlockmgr-udp
    - port: 20048
      targetPort: mountd
      protocol: TCP
      name: mountd
    - port: 20048
      targetPort: mountd-udp
      protocol: UDP
      name: mountd-udp
    - port: 875
      targetPort: rquotad
      protocol: TCP
      name: rquotad
    - port: 875
      targetPort: rquotad-udp
      protocol: UDP
      name: rquotad-udp
    - port: 111
      targetPort: rpcbind
      protocol: TCP
      name: rpcbind
    - port: 111
      targetPort: rpcbind-udp
      protocol: UDP
      name: rpcbind-udp
    - port: 662
      targetPort: statd
      protocol: TCP
      name: statd
    - port: 662
      targetPort: statd-udp
      protocol: UDP
      name: statd-udp
  selector:
    app: nfs-server-provisioner
    release: nfs-provisioner
---
# Source: nfs-server-provisioner/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-provisioner-nfs-server-provisioner
  labels:
    app: nfs-server-provisioner
    chart: nfs-server-provisioner-1.1.3
    heritage: Helm
    release: nfs-provisioner
spec:
  # TODO: Investigate how/if nfs-provisioner can be scaled out beyond 1 replica
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server-provisioner
      release: nfs-provisioner
  serviceName: nfs-provisioner-nfs-server-provisioner
  template:
    metadata:
      labels:
        app: nfs-server-provisioner
        chart: nfs-server-provisioner-1.1.3
        heritage: Helm
        release: nfs-provisioner
    spec:
      # NOTE: This is 10 seconds longer than the default nfs-provisioner --grace-period value of 90sec
      terminationGracePeriodSeconds: 100
      serviceAccountName: nfs-provisioner-nfs-server-provisioner
      containers:
        - name: nfs-server-provisioner
          image: "quay.io/kubernetes_incubator/nfs-provisioner:v2.3.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: nfs
              containerPort: 2049
              protocol: TCP
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
              protocol: TCP
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
              protocol: TCP
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
              protocol: TCP
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
              protocol: TCP
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
              protocol: TCP
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=cluster.local/nfs-provisioner-nfs-server-provisioner"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner-nfs-server-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: data
              mountPath: /export
      volumes:
        - name: data
          emptyDir: {}

# The stable charts repo is deprecated, see https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner/issues/27
helm repo add stable https://charts.helm.sh/stable && \
helm install nfs-provisioner stable/nfs-server-provisioner
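
To exercise the provisioner the same way as described above, a minimal test PVC can be submitted against the nfs StorageClass. This is only a sketch: the claim name test-nfs-pvc is hypothetical, while the storageClassName, ReadWriteMany mode, and the 1Mi request come from the report.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc   # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Mi
EOF

If provisioning works, kubectl get pvc test-nfs-pvc should show the claim as Bound within a few seconds.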

Anything else we need to know?:

  • Logs from the pod when I run it on the kind cluster:
I0119 22:08:29.763090       1 main.go:64] Provisioner cluster.local/nfs-provisioner-nfs-server-provisioner specified
I0119 22:08:29.763261       1 main.go:88] Setting up NFS server!
F0119 22:08:33.270755       1 main.go:91] Error setting up NFS server: rpc.statd failed with error: signal: killed, output: 
  • The pod runs fine if I use docker run directly, so to me it seems it is not a problem with minikube's Docker, but something internal to kind
  • It even runs on Kubernetes with minikube, but then it can't provision storage

Environment:

  • kind version: (use kind version):
    kind v0.11.1 go1.16.4 darwin/amd64
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
  • Docker version: (use docker info):
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 5
  Running: 5
  Paused: 0
  Stopped: 0
 Images: 15
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.202
 Operating System: Buildroot 2021.02.4
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 5.815GiB
 Name: minikube
 ID: UVRJ:5CMJ:CRH3:5GVD:MEWJ:7ODB:DJYI:OFF3:6EIV:SZBF:HY2N:6QXN
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
  provider=hyperkit
 Experimental: false
 Insecure Registries:
  10.96.0.0/12
  192.0.0.0/24
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
  • OS (e.g. from /etc/os-release): Linux minikube 4.19.202 #1 SMP Wed Oct 27 22:52:27 UTC 2021 x86_64 GNU/Linux
@backtrackshubham backtrackshubham added the kind/bug Categorizes issue or PR as related to a bug. label Jan 20, 2022
@aojea
Contributor

aojea commented Jan 20, 2022

It works for me on Linux:

kubectl get pods nfs-provisioner-nfs-server-provisioner-0
NAME                                       READY   STATUS    RESTARTS   AGE
nfs-provisioner-nfs-server-provisioner-0   1/1     Running   0          38s

Somebody has to rule out whether this is a problem with the Docker Desktop VM kernel / Linux settings on the Mac.

F0119 22:08:33.270755 1 main.go:91] Error setting up NFS server: rpc.statd failed with error: signal: killed, output:

the message may indicate something in that area

@backtrackshubham
Author

Thanks for trying that out, @aojea. This works with Docker Desktop on a Mac, but it doesn't work when minikube is used on a Mac with the hyperkit driver and the --no-kubernetes flag, a kind cluster is then created inside it, and this NFS server is deployed on that cluster.

@aojea
Contributor

aojea commented Jan 20, 2022

, but it doesn't work when minikube is used on a mac with hyperkit driver and --no-kubernetes flag, and then kind cluster is created and on that this nfs server is deployed

uff, I don't know how this combo works, maybe better ask @afbjorklund or ask in minikube directly

@afbjorklund
Contributor

afbjorklund commented Jan 20, 2022

the "minikube start --no-kubernetes --vm" command is a complicated way to run docker-machine

it got more popular when Docker finally dropped the other project 2021 (after ignoring it since 2019)

See also https://github.com/docker-archive/toolbox

Theoretically it could be used for other runtimes too...

Like: podman-machine, nerdctl-machine, another-machine

It runs the (cr) provisioner, but not the (k8s) bootstrapper.


@afbjorklund
Contributor

afbjorklund commented Jan 20, 2022

I seem to have looked at this old, ugly Fedora installation of a userspace NFS server before...

https://github.com/kubernetes-retired/external-storage/tree/nfs-provisioner-v2.3.0/nfs/deploy/docker/

It was something about not touching it with a ten-foot pole?

Hopefully they will make a new release of the Ganesha provisioner.

https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner

Maybe the new fc35 base image works better than the old fc30 one?

https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner/blob/master/deploy/base/

Not sure if kind has any supported ReadWriteMany storage provisioner that works better OOTB.

@BenTheElder
Member

what @afbjorklund said 🙃

So to be clear NFS support depends on the host / VM (in this case the hyperkit VM from docker-machine / minikube).

I'm not allowed to run VMs on my corporate-owned machines, in CI we have only Linux (see existing issues about macOS and Windows), and I'm a bit too overloaded to dig into setting this up elsewhere, tbh.

#1487 has more on NFS; it should work, generally. See #1487 (comment). (Actually, I see you were already on that thread previously, but it's linked for reference for others reading this issue.)

Not sure if kind has any supported ReadWriteMany storage provisioner, that works better OOTB

I don't think so, AFAIK the best option for RWM is NFS on most providers, and generally just installing the community NFS provisioner. If this is outdated I'm not sure what to recommend instead.

It should also be noted, per #1487, that NFS of any sort will not work in kind on particularly old kernels (older than 4.15), as NFS-on-overlayfs was not yet supported.
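
(Since a kind node container shares the VM's kernel, a quick hedged check, assuming the default node container name kind-control-plane, is:)

docker exec kind-control-plane uname -r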

@backtrackshubham
Author

Thanks for the input @BenTheElder @afbjorklund. If I want to start debugging this issue, where should I start or what should I dig into? Any suggestions would be really appreciated.
Thanks & regards

@afbjorklund
Contributor

The minikube OS was supposed to have mount.nfs enabled, and while the kernel is old it is at least 4.19

@backtrackshubham
Author

And what made me think that it could be something related to kind is this test:

if i use minikube with kubernetes and deploy same NFS server using helm it runs successfully but as I try to submit a PVC it says it can't provision the storage (even when the amount of storage i asked was 1Mi)

@afbjorklund
Contributor

afbjorklund commented Jan 21, 2022

Here is what I saw in the logs:

[  469.957089] Out of memory: Kill process 6209 (rpc.statd) score 1623 or sacrifice child

This was on a 4 GiB (3900 MiB) Docker host:

                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3744         663        2094         638         987        2227
Swap:             0           0           0
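
(A hedged way to confirm the same OOM kill from the VM side, assuming the default minikube profile name, would be:)

minikube ssh "dmesg | grep -i 'out of memory'"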

@backtrackshubham
Author

Yeah, kind of the same. Here are the logs from the pod when I ran minikube with Kubernetes:

I0121 08:31:26.385342       1 main.go:64] Provisioner cluster.local/nfs-provisioner-nfs-server-provisioner specified
I0121 08:31:26.385393       1 main.go:88] Setting up NFS server!
I0121 08:31:26.517679       1 server.go:149] starting RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0121 08:31:26.517781       1 server.go:160] ending RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0121 08:31:26.518512       1 server.go:134] Running NFS server!

Outputs

$ kubectl get all 
NAME                                           READY   STATUS    RESTARTS   AGE
pod/nfs-provisioner-nfs-server-provisioner-0   1/1     Running   0          4m28s

NAME                                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                                     AGE
service/kubernetes                               ClusterIP   10.96.0.1      <none>        443/TCP                                                                                                     12m
service/nfs-provisioner-nfs-server-provisioner   ClusterIP   10.97.20.244   <none>        2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP   4m28s

NAME                                                      READY   AGE
statefulset.apps/nfs-provisioner-nfs-server-provisioner   1/1     4m28s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
pvc-34d31004-cf07-423c-8ee6-c4ca649e4e65   1Mi        RWX            Delete           Bound    default/oms-dev-common-pvc   nfs                     11s
$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
oms-dev-common-pvc   Bound    pvc-34d31004-cf07-423c-8ee6-c4ca649e4e65   1Mi        RWX            nfs            15s

Also, I could see that when I install the NFS provisioner on this setup (minikube + Kubernetes) it gets deployed as a pod:

$ docker ps
CONTAINER ID   IMAGE                                          COMMAND                  CREATED          STATUS          PORTS     NAMES
fc8c44cdf827   quay.io/kubernetes_incubator/nfs-provisioner   "/nfs-provisioner -p…"   6 minutes ago    Up 6 minutes              k8s_nfs-server-provisioner_nfs-provisioner-nfs-server-provisioner-0_default_613d6864-5a19-4f6d-9318-ed04587787d9_0
123ca2a8af43   k8s.gcr.io/pause:3.6                           "/pause"                 6 minutes ago    Up 6 minutes              k8s_POD_nfs-provisioner-nfs-server-provisioner-0_default_613d6864-5a19-4f6d-9318-ed04587787d9_0
4311d106992b   6e38f40d628d                                   "/storage-provisioner"   14 minutes ago   Up 14 minutes             k8s_storage-provisioner_storage-provisioner_kube-system_8150896d-8950-4c05-a59e-ae716c8b3110_1
6b756ccc6d89   a4ca41631cc7                                   "/coredns -conf /etc…"   14 minutes ago   Up 14 minutes             k8s_coredns_coredns-64897985d-phstv_kube-system_c07355b9-f94e-4958-ae60-6e3c6809708e_0
3dc2e066a041   b46c42588d51                                   "/usr/local/bin/kube…"   14 minutes ago   Up 14 minutes             k8s_kube-proxy_kube-proxy-k9454_kube-system_c0c2a552-8e90-44d2-a933-7c871e978170_0
b08bf689a4de   k8s.gcr.io/pause:3.6                           "/pause"                 14 minutes ago   Up 14 minutes             k8s_POD_coredns-64897985d-phstv_kube-system_c07355b9-f94e-4958-ae60-6e3c6809708e_0
0dddee994504   k8s.gcr.io/pause:3.6                           "/pause"                 14 minutes ago   Up 14 minutes             k8s_POD_kube-proxy-k9454_kube-system_c0c2a552-8e90-44d2-a933-7c871e978170_0
0eb59a55c1af   k8s.gcr.io/pause:3.6                           "/pause"                 14 minutes ago   Up 14 minutes             k8s_POD_storage-provisioner_kube-system_8150896d-8950-4c05-a59e-ae716c8b3110_0
cfd5373abab6   25f8c7f3da61                                   "etcd --advertise-cl…"   15 minutes ago   Up 15 minutes             k8s_etcd_etcd-minikube_kube-system_53f7d2a0b096e7f62986937402c0882a_0
5f7a7143baf1   b6d7abedde39                                   "kube-apiserver --ad…"   15 minutes ago   Up 15 minutes             k8s_kube-apiserver_kube-apiserver-minikube_kube-system_9c43496805e15da0a975907cd2441cd4_0
d2e33df4ed36   71d575efe628                                   "kube-scheduler --au…"   15 minutes ago   Up 15 minutes             k8s_kube-scheduler_kube-scheduler-minikube_kube-system_b8bdc344ff0000e961009344b94de59c_0
8d580f0f239c   f51846a4fd28                                   "kube-controller-man…"   15 minutes ago   Up 15 minutes             k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_d3f0dbc1c3a23fddbc9f30b9e08c775e_0
963966567a0e   k8s.gcr.io/pause:3.6                           "/pause"                 15 minutes ago   Up 15 minutes             k8s_POD_kube-controller-manager-minikube_kube-system_d3f0dbc1c3a23fddbc9f30b9e08c775e_0
87955536d3e6   k8s.gcr.io/pause:3.6                           "/pause"                 15 minutes ago   Up 15 minutes             k8s_POD_kube-apiserver-minikube_kube-system_9c43496805e15da0a975907cd2441cd4_0
d9f0e41d3d43   k8s.gcr.io/pause:3.6                           "/pause"                 15 minutes ago   Up 15 minutes             k8s_POD_etcd-minikube_kube-system_53f7d2a0b096e7f62986937402c0882a_0
4d40086a9c18   k8s.gcr.io/pause:3.6                           "/pause"                 15 minutes ago   Up 15 minutes             k8s_POD_kube-scheduler-minikube_kube-system_b8bdc344ff0000e961009344b94de59c_0


And this is when I start minikube without Kubernetes and deploy NFS on a kind cluster that is deployed in minikube's Docker env:
I0119 22:08:29.763090       1 main.go:64] Provisioner cluster.local/nfs-provisioner-nfs-server-provisioner specified
I0119 22:08:29.763261       1 main.go:88] Setting up NFS server!
F0119 22:08:33.270755       1 main.go:91] Error setting up NFS server: rpc.statd failed with error: signal: killed, output: 

Now, I have limited insight into how things are working above, but if I somehow start minikube without Kubernetes, run the NFS provisioner server outside the cluster as a standalone container, and then reference it in the kind deployments, would that work? Also, I checked the command the provisioner is started with, "/nfs-provisioner -provisioner=cluster.local/nfs-provisioner-nfs-server-provisioner"; if I start it outside as a container, what would this value cluster.local/nfs-provisioner-nfs-server-provisioner be doing?

@afbjorklund
Contributor

afbjorklund commented Jan 21, 2022

I think it is the usual craziness of systemd, where raising the open-files limit causes things to require lots and lots of memory.

Like https://bugzilla.redhat.com/show_bug.cgi?id=1796545

The same bug hangs apt and other tools when running on older distributions (which don't account for it).

LimitNOFILE=1024:4096


minikube: 1024

            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14111
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14111
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

kind: 1048576

root@kind-control-plane:/# ulimit -a
real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) unlimited
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 14111
max locked memory           (kbytes, -l) 64
max memory size             (kbytes, -m) unlimited
open files                          (-n) 1048576
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
stack size                  (kbytes, -s) 8192
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) unlimited
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited
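
(A hedged way to reproduce the difference outside kind: run the provisioner image directly in minikube's Docker, once with the daemon's default 1024 open-files limit and once with kind's 1048576 limit. The capabilities mirror the StatefulSet's securityContext above; the provisioner name test/test is a placeholder.)

# default daemon limit (1024 open files): rpc.statd is expected to start
docker run --rm --cap-add DAC_READ_SEARCH --cap-add SYS_RESOURCE \
  quay.io/kubernetes_incubator/nfs-provisioner:v2.3.0 -provisioner=test/test

# kind's limit (1048576 open files): expected to reproduce the rpc.statd kill
docker run --rm --cap-add DAC_READ_SEARCH --cap-add SYS_RESOURCE \
  --ulimit nofile=1048576:1048576 \
  quay.io/kubernetes_incubator/nfs-provisioner:v2.3.0 -provisioner=test/test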

@BenTheElder
Member

#1487 (comment): this should have been patched by the libtirpc update, which kind, at least, should be shipping in the latest version. The ulimit difference should be from having systemd 240+, and patched libtirpc should be able to handle it 🤔

@backtrackshubham
Author

Hey @BenTheElder @afbjorklund, thanks for the cool tip. I just raised the open-files limit with sudo sysctl -w fs.nr_open=2057152 and it worked perfectly fine.
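
(A sketch of how this might be applied, assuming it is run inside the minikube VM via minikube ssh; the sysctl is kernel-wide and does not persist across VM restarts:)

minikube ssh "sudo sysctl -w fs.nr_open=2057152"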

@afbjorklund
Contributor

I opened a special issue for users wanting to run minikube (Kubernetes) on top of minikube (Docker):

I'm guessing that more people will get desperate when they can't use the "Free" version anymore?

https://www.docker.com/blog/updating-product-subscriptions/

@backtrackshubham
Author

Also, I would like to see if we could somehow leverage https://multipass.run/

@afbjorklund
Contributor

I think you want to open a new issue about which "docker" to use with kind; it is off-topic here.

@BenTheElder BenTheElder changed the title NFS Provisioner going in crash loop [hyperkit minikube] NFS Provisioner going in crash loop Jan 25, 2022
@BenTheElder BenTheElder added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Jan 25, 2022
@BenTheElder
Member

#2597 (comment) perhaps systemd/libtirpc is older on the hyperkit VM?

@BenTheElder
Member

I don't think there's anything else for us to do here; this issue is more than a year old, with nothing further to follow up on at the moment.

@BenTheElder BenTheElder self-assigned this May 17, 2023