
# NFS CSI driver development guide

## How to build this project

- Clone the repo:

```console
$ mkdir -p $GOPATH/src/github.com/kubernetes-csi
$ git clone https://github.com/kubernetes-csi/csi-driver-nfs $GOPATH/src/github.com/kubernetes-csi/csi-driver-nfs
```

- Build the CSI driver:

```console
$ cd $GOPATH/src/github.com/kubernetes-csi/csi-driver-nfs
$ make
```

- Run the verification tests before submitting code:

```console
$ make verify
```

- If a config file under the charts directory changed, run the following command to update the chart package:

```console
$ helm package charts/latest/csi-driver-nfs -d charts/latest/
```
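The build steps above can be chained into a one-shot helper; this is a hedged sketch, and `build_nfs_driver` is a function name invented here (it assumes the repo was cloned to the path used in this guide, with `$GOPATH` falling back to `~/go`):

```shell
# Hypothetical helper chaining the build and verify steps; the repo path
# matches the clone location used in this guide.
build_nfs_driver() {
  repo="${GOPATH:-$HOME/go}/src/github.com/kubernetes-csi/csi-driver-nfs"
  if [ ! -d "$repo" ]; then
    echo "repo not found at $repo; clone it first" >&2
    return 1
  fi
  cd "$repo" && make && make verify
}
```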

## How to test CSI driver in local environment

Install the csc tool as described at https://github.com/rexray/gocsi/tree/master/csc:

```console
$ mkdir -p $GOPATH/src/github.com/rexray
$ cd $GOPATH/src/github.com/rexray
$ git clone https://github.com/rexray/gocsi.git
$ cd gocsi/csc
$ make build
```

### Start CSI driver locally

```console
$ cd $GOPATH/src/github.com/kubernetes-csi/csi-driver-nfs
$ ./bin/nfsplugin --endpoint unix:///tmp/csi.sock --nodeid CSINode -v=5 &
```
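Since the driver is started in the background, later csc calls can race its startup. A small hedged helper can poll until the unix socket from `--endpoint` exists; `wait_for_socket` is a name invented here, not part of the repo:

```shell
# Poll for a unix socket, giving up after a timeout in seconds.
wait_for_socket() {
  sock="$1"
  timeout="$2"
  i=0
  while [ "$i" -lt "$timeout" ]; do
    [ -S "$sock" ] && return 0  # -S: path exists and is a socket
    sleep 1
    i=$((i + 1))
  done
  echo "timed out waiting for $sock" >&2
  return 1
}

# e.g. wait_for_socket /tmp/csi.sock 30
```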

#### 0. Set environment variables

```console
$ cap="1,mount,"
$ volname="test-$(date +%s)"
$ volsize="2147483648"
$ endpoint="unix:///tmp/csi.sock"
$ target_path="/tmp/targetpath"
$ params="server=127.0.0.1,share=/"
```
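Typos in these variables produce confusing csc errors later, so it can help to sanity-check them up front. A minimal sketch, assuming the values set above; `validate_endpoint` and `validate_size` are names invented for this example:

```shell
# Check that the endpoint uses the unix:// scheme the driver was started with.
validate_endpoint() {
  case "$1" in
    unix://*) return 0 ;;
    *) echo "endpoint must use the unix:// scheme: $1" >&2; return 1 ;;
  esac
}

# Check that the requested size is a whole number of bytes.
validate_size() {
  case "$1" in
    ''|*[!0-9]*) echo "size must be a whole number of bytes: $1" >&2; return 1 ;;
    *) return 0 ;;
  esac
}

validate_endpoint "unix:///tmp/csi.sock" && validate_size "2147483648" && echo "settings look sane"
```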

#### 1. Get plugin info

```console
$ csc identity plugin-info --endpoint "$endpoint"
"nfs.csi.k8s.io"    "v2.0.0"
```

#### 2. Create a new NFS volume

```console
$ value="$(csc controller new --endpoint "$endpoint" --cap "$cap" "$volname" --req-bytes "$volsize" --params "$params")"
$ sleep 15
$ volumeid="$(echo "$value" | awk '{print $1}' | sed 's/"//g')"
$ echo "Got volume id: $volumeid"
```
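The id extraction above assumes `csc controller new` prints the quoted volume id as the first whitespace-separated field. The parsing step can be exercised on its own; a hard-coded sample line stands in for real csc output here:

```shell
# Sample line in the assumed output shape: quoted volume id first.
sample='"test-1700000000" 2147483648'

# Same pipeline as the guide: take the first field, strip the quotes.
volumeid="$(echo "$sample" | awk '{print $1}' | sed 's/"//g')"
echo "Got volume id: $volumeid"  # prints: Got volume id: test-1700000000
```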

#### 3. Publish an NFS volume

```console
$ csc node publish --endpoint "$endpoint" --cap "$cap" --vol-context "$params" --target-path "$target_path" "$volumeid"
```

#### 4. Unpublish an NFS volume

```console
$ csc node unpublish --endpoint "$endpoint" --target-path "$target_path" "$volumeid"
```

#### 5. Validate volume capabilities

```console
$ csc controller validate-volume-capabilities --endpoint "$endpoint" --cap "$cap" "$volumeid"
```

#### 6. Delete the NFS volume

```console
$ csc controller del --endpoint "$endpoint" "$volumeid" --timeout 10m
```

#### 7. Get NodeID

```console
$ csc node get-info --endpoint "$endpoint"
CSINode
```
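The steps above can be tied into one smoke-test script. This is a hedged sketch, not part of the repo: `run_smoke_test` is an invented name, the flow only runs when csc is on PATH, and it assumes the driver is already listening on the endpoint:

```shell
# Invented helper chaining the create/publish/unpublish/delete steps above.
run_smoke_test() {
  endpoint="unix:///tmp/csi.sock"
  cap="1,mount,"
  params="server=127.0.0.1,share=/"
  target_path="/tmp/targetpath"
  volname="test-$(date +%s)"

  value="$(csc controller new --endpoint "$endpoint" --cap "$cap" "$volname" \
      --req-bytes 2147483648 --params "$params")" || return 1
  volumeid="$(echo "$value" | awk '{print $1}' | sed 's/"//g')"

  csc node publish --endpoint "$endpoint" --cap "$cap" --vol-context "$params" \
      --target-path "$target_path" "$volumeid"
  csc node unpublish --endpoint "$endpoint" --target-path "$target_path" "$volumeid"
  csc controller del --endpoint "$endpoint" "$volumeid" --timeout 10m
}

if command -v csc >/dev/null 2>&1; then
  run_smoke_test
else
  echo "csc not found on PATH; skipping smoke test"
fi
```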

## How to test CSI driver in a Kubernetes cluster

- Set environment variables:

```console
export REGISTRY=<dockerhub-alias>
export IMAGE_VERSION=latest
```

- Build the container image and push it to dockerhub:

```console
# run `docker login` first
# build the docker image
make container
# push the docker image
make push
```

- Deploy a Kubernetes cluster and make sure `kubectl get nodes` works from your dev box.

- Run the E2E tests on the Kubernetes cluster:

```console
# install the NFS CSI driver on the Kubernetes cluster
make e2e-bootstrap

# run the E2E test
make e2e-test
```
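The build-and-push flow above can be wrapped so a missing registry fails fast instead of pushing to the wrong place. A hedged sketch; `build_and_push_image` is a name invented here, with `REGISTRY` required and `IMAGE_VERSION` defaulting to `latest` as in the guide:

```shell
# Invented wrapper for `make container && make push` with env checks.
build_and_push_image() {
  : "${REGISTRY:?set REGISTRY to your dockerhub alias first}"  # fail fast if unset
  : "${IMAGE_VERSION:=latest}"                                 # default as in the guide
  export REGISTRY IMAGE_VERSION
  make container && make push
}
```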