
CSI LVM Provisioner


Overview

CSI LVM Provisioner utilizes local storage of Kubernetes nodes to provide persistent storage for pods.

It automatically creates hostPath-based persistent volumes on the nodes and makes use of the Local Persistent Volume feature introduced in Kubernetes 1.10, but it is simpler to use than the built-in local volume feature.

Underneath, it creates an LVM logical volume on the local disks. A grok pattern can be specified to select which disks to use.
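
The pattern syntax is glob-like, so you can preview which disks a given pattern would select directly on a node (a quick sanity check; the two NVMe disks shown here are hypothetical):

# run on a node: list the block devices the default pattern would match
$ ls /dev/nvme[0-9]n*
/dev/nvme0n1  /dev/nvme1n1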

This Provisioner is derived from the Local Path Provisioner.

Comparison to Local Path Provisioner

Pros

  • Dynamic provisioning of volumes using the host path. (Currently, the built-in Kubernetes Local Volume provisioner cannot dynamically provision host path volumes.)
  • Support for volume capacity limits.
  • Performance speedup if more than one local disk is available, because it can create LVs that are striped across all physical volumes (see the sketch below).
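
To illustrate what striping means at the LVM level, here is a minimal sketch that creates a striped LV by hand; the device names, volume group name, and size are assumptions, and this is not the provisioner's actual code path:

# assume two disks matched by the device pattern
pvcreate /dev/nvme0n1 /dev/nvme1n1
vgcreate csi-lvm /dev/nvme0n1 /dev/nvme1n1
# spread all blocks of a 50MiB LV in chunks across both physical volumes
lvcreate --stripes 2 --size 50M --name pvc-example csi-lvm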

Requirement

Kubernetes v1.12+.

Deployment

Installation

The deployment consists of two parts:

  • A controller deployment, which is registered as a storage controller and schedules the creation and deletion of volumes
  • A reviver daemonset, which is responsible for re-creating the mount structure after a reboot

In this setup, the directory /tmp/csi-lvm/<name of the pv> is used on every node as the provisioning path (i.e., where the persistent volume data is stored). The provisioner is installed in the csi-lvm namespace by default.
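
On a node, each provisioned volume then appears as its own directory under that path (a hypothetical check; the PV name is taken from the usage example below):

# run on a node: every PV is mounted under its own directory
$ ls /tmp/csi-lvm
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78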

The default grok pattern for the disks to use is /dev/nvme[0-9]n*. Please check whether this matches your setup; otherwise, copy controller.yaml to your local machine and modify the value of CSI_LVM_DEVICE_PATTERN accordingly.

- name: CSI_LVM_DEVICE_PATTERN
  value: "/dev/nvme[0-9]n*"

Once this is set, you can install csi-lvm with:

kubectl apply -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/controller.yaml
kubectl apply -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/reviver.yaml

After installation, you should see something like the following:

$ kubectl -n csi-lvm get pod
NAME                                     READY     STATUS    RESTARTS   AGE
csi-lvm-controller-d744ccf98-xfcbk       1/1       Running   0          7m
csi-lvm-reviver-ndh46                    1/1       Running   0          7m

Check and follow the provisioner log using:

$ kubectl -n csi-lvm logs -f csi-lvm-controller-d744ccf98-xfcbk
I1021 14:09:31.108535       1 main.go:132] Provisioner started
I1021 14:09:31.108830       1 leaderelection.go:235] attempting to acquire leader lease  csi-lvm/metal-stack.io-csi-lvm...
I1021 14:09:31.121305       1 leaderelection.go:245] successfully acquired lease csi-lvm/metal-stack.io-csi-lvm
I1021 14:09:31.124339       1 controller.go:770] Starting provisioner controller metal-stack.io/csi-lvm_csi-lvm-controller-7f94749d78-t5nh8_17d2f7ef-1375-4e36-aa71-82e237430881!
I1021 14:09:31.126248       1 event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"csi-lvm", Name:"metal-stack.io-csi-lvm", UID:"04da008c-36ec-4966-a4f6-c2028e69cdd5", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' csi-lvm-controller-7f94749d78-t5nh8_17d2f7ef-1375-4e36-aa71-82e237430881 became leader
I1021 14:09:31.225917       1 controller.go:819] Started provisioner controller metal-stack.io/csi-lvm_csi-lvm-controller-7f94749d78-t5nh8_17d2f7ef-1375-4e36-aa71-82e237430881!

Usage

Create a hostPath-backed Persistent Volume and a pod that uses it:

kubectl create -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml
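
The example pod references the claim by name and mounts it at /data; a minimal equivalent pod spec looks roughly like this (a sketch assuming the lvm-pvc claim from pvc.yaml; the container image is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine   # illustrative image
    volumeMounts:
    - name: volv
      mountPath: /data           # written to in the steps below
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: lvm-pvc         # binds to the PVC created above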

You should see the PV has been created:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   50Mi       RWO            Delete           Bound     default/lvm-pvc          csi-lvm                  4s

The PVC has been bound:

$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc          Bound     pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   50Mi       RWO            csi-lvm        16s

And the Pod started running:

$ kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
volume-test   1/1       Running   0          3s

Write something into the pod:

kubectl exec volume-test -- sh -c "echo lvm-test > /data/test"

Now delete the pod using:

kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml

After confirming that the pod is gone, recreate it using:

kubectl create -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml

Check the volume content:

$ kubectl exec volume-test -- cat /data/test
lvm-test

Delete the pod and the PVC:

kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pvc.yaml
kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml

The volume content stored on the node will be automatically cleaned up. You can check the log of csi-lvm-controller-xxx for details.

Now you've verified that the provisioner works as expected.

Configuration

The csi-lvm-controller is configured via environment variables:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-lvm-controller
  namespace: csi-lvm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-lvm-controller
  template:
    metadata:
      labels:
        app: csi-lvm-controller
    spec:
      serviceAccountName: csi-lvm-controller
      containers:
      - name: csi-lvm-controller
        image: metalstack/csi-lvm-controller:v0.6.1
        imagePullPolicy: IfNotPresent
        command:
        - /csi-lvm-controller
        args:
        - start
        env:
        - name: CSI_LVM_PROVISIONER_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CSI_LVM_PROVISIONER_IMAGE
          value: "metalstack/csi-lvm-provisioner:v0.6.1"
        - name: CSI_LVM_DEVICE_PATTERN
          value: "/dev/loop[0,1]"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-lvm-reviver
  namespace: csi-lvm
spec:
  selector:
    matchLabels:
      app: csi-lvm-reviver
  template:
    metadata:
      labels:
        app: csi-lvm-reviver
    spec:
      serviceAccountName: csi-lvm-reviver
      containers:
      - name: csi-lvm-reviver
        image: metalstack/csi-lvm-provisioner:v0.6.1
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        env:
          - name: CSI_LVM_MOUNTPOINT
            value: "/tmp/csi-lvm"
        command:
        - /csi-lvm-provisioner
        args:
        - revivelvs
        volumeMounts:
          - mountPath: /tmp/csi-lvm
            name: data
            mountPropagation: Bidirectional
          - mountPath: /dev
            name: devices
          - mountPath: /lib/modules
            name: modules
      volumes:
        - hostPath:
            path: /tmp/csi-lvm
            type: DirectoryOrCreate
          name: data
        - hostPath:
            path: /dev
            type: DirectoryOrCreate
          name: devices
        - hostPath:
            path: /lib/modules
            type: DirectoryOrCreate
          name: modules

Definition

CSI_LVM_DEVICE_PATTERN is a grok pattern that specifies which block devices to use as LVM devices on the node. This can be, for example, /dev/sd[bcde] if you want to use only /dev/sdb - /dev/sde. IMPORTANT: no wildcard (*) is currently allowed.
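
For example, to restrict csi-lvm to /dev/sdb through /dev/sde, set the variable in the controller deployment accordingly:

- name: CSI_LVM_DEVICE_PATTERN
  value: "/dev/sd[bcde]"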

PVC Striped, Mirrored

By default, the LVs are created in linear mode on the devices specified by the grok pattern, beginning with the first device found. If this device is full, the next LV is created on the next device, and so forth.

If more than one device matches the given pattern, two more options for the created LVs are available, selected via the csi-lvm.metal-stack.io/type annotation shown in the PVC examples below:

  • mirror: all blocks are mirrored, with one additional copy written to a second disk, if more than one disk is present.
  • striped: the PVC is striped across all block devices found by the above grok pattern. If, for example, 4 disks were found, all written blocks are spread across the 4 devices in chunks. This gives roughly 4 times the read/write performance for the volume, but also a 4 times higher risk of data loss if a single disk fails.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-lvm
  resources:
    requests:
      storage: 50Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-striped
  namespace: default
  annotations:
    csi-lvm.metal-stack.io/type: "striped"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-lvm
  resources:
    requests:
      storage: 50Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-mirrored
  namespace: default
  annotations:
    csi-lvm.metal-stack.io/type: "mirror"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-lvm
  resources:
    requests:
      storage: 50Mi
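
Once such a PVC is bound, the resulting layout can be verified directly on the node with standard LVM tooling; a striped LV reports a stripe count and multiple backing devices (names depend on your cluster):

# run on the node that hosts the volume
$ lvs -o lv_name,stripes,devices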

Uninstall

Before uninstalling, make sure the PVs created by the provisioner have already been deleted. Use kubectl get pv and make sure no PV with StorageClass csi-lvm remains.
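
For example, an empty result here means no csi-lvm backed PVs are left:

$ kubectl get pv | grep csi-lvm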

To uninstall, execute:

kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/controller.yaml
kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/reviver.yaml
