Namespace FS: Namespace Resource to connect to shared Filesystems
This page is obsolete. Please refer to the new doc here - https://github.com/noobaa/noobaa-core/wiki/NSFS-on-Kubernetes
This is an experimental feature. Follow https://github.com/noobaa/noobaa-core/pull/6077 for progress.
Namespace FS is a capability to create a Namespace Resource (the backing store type for namespace buckets) that uses a shared filesystem mounted on the node.
This is a step-by-step guide to setting up this resource in order to explore and test the capability.
The first step is to create the filesystem mount on one or more of the cluster nodes, then configure it on the Kubernetes control plane by creating a StorageClass, a PV, and a PVC, and finally mount it in the noobaa endpoint pods.
Download the attached noobaa CLI binary. By default it already uses the images I've built and pushed to Docker Hub -
noobaa/noobaa-core:5.5.0-nsfs
noobaa/noobaa-operator:5.5.0-nsfs
Use the cli to install to the noobaa namespace:
noobaa install -n noobaa
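Once the install completes, you can verify that the system came up; noobaa status prints the service addresses and the admin S3 credentials used later in this guide:
noobaa status -n noobaa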
I also suggest setting the current namespace to noobaa so you don't need to add "-n noobaa" to every kubectl / noobaa command:
kubectl config set-context --current --namespace noobaa
If you are updating from a previous version, you will need to update the images manually:
kubectl patch deployment noobaa-operator --patch '{
  "spec": { "template": { "spec": {
    "containers": [{
      "name": "noobaa-operator",
      "image": "noobaa/noobaa-operator:5.5.0-nsfs"
    }]
  }}}
}'
kubectl patch noobaa noobaa --type merge --patch '{ "spec": { "image": "noobaa/noobaa-core:5.5.0-nsfs" } }'
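After patching, it helps to wait for the rollout to finish before continuing:
kubectl rollout status deployment noobaa-operator
kubectl get pods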
Assume the filesystem to expose is mounted at /nsfs on the node.
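For example, a shared NFS export (the server name and export path below are hypothetical) could be mounted on the node like this:
mkdir -p /nsfs
mount -t nfs nfs-server.example.com:/export /nsfs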
We will create a local PV that represents the mounted filesystem on the node at /nsfs.
Download and create the YAMLs attached below -
kubectl create -f nsfs-local-class.yaml
kubectl create -f nsfs-local-pv.yaml
kubectl create -f nsfs-local-pvc.yaml
nsfs-local-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nsfs-local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
nsfs-local-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nsfs-vol
spec:
  storageClassName: nsfs-local
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /nsfs/
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: Exists
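Note that the nodeAffinity above matches every node (kubernetes.io/os exists on all nodes), which assumes the filesystem is mounted on all of them. If only specific nodes have the mount, a stricter selector (the hostname below is hypothetical) would look like:
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-node-1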
nsfs-local-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nsfs-vol
spec:
  storageClassName: nsfs-local
  resources:
    requests:
      storage: 1Ti
  accessModes:
    - ReadWriteMany
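A quick check that all three were created - note that with volumeBindingMode: WaitForFirstConsumer the PVC will stay Pending until the first pod that uses it is scheduled:
kubectl get storageclass nsfs-local
kubectl get pv nsfs-vol
kubectl get pvc nsfs-vol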
Update the noobaa-endpoint deployment to mount the volume -
kubectl patch deployment noobaa-endpoint --patch '{
  "spec": { "template": { "spec": {
    "volumes": [{
      "name": "nsfs",
      "persistentVolumeClaim": {"claimName": "nsfs-vol"}
    }],
    "containers": [{
      "name": "endpoint",
      "env": [{ "name": "NAMESPACE_FS", "value": "/nsfs" }],
      "volumeMounts": [{ "name": "nsfs", "mountPath": "/nsfs" }]
    }]
  }}}
}'
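Once the endpoint pods restart, you can verify the mount is visible from inside one of them:
kubectl rollout status deployment noobaa-endpoint
POD=$(kubectl get pods -o name | grep noobaa-endpoint | head -1)
kubectl exec $POD -- ls -la /nsfs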
Create an FS namespace resource:
noobaa api pool_api create_namespace_resource '{
  "name": "nsfs",
  "nsfs_config": {
    "fs_root_path": "/nsfs"
  }
}'
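To read the resource back, pool_api should expose a matching read call (an assumption based on the create/read naming convention of the other API calls in this guide; skip this if your build differs):
noobaa api pool_api read_namespace_resource '{ "name": "nsfs" }'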
Create a namespace bucket:
noobaa api bucket_api create_bucket '{
  "name": "nsfs",
  "namespace": {
    "write_resource": {"resource": "nsfs"},
    "read_resources": [{"resource": "nsfs"}]
  }
}'
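You can read the bucket back to confirm its namespace configuration:
noobaa api bucket_api read_bucket '{ "name": "nsfs" }'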
Application S3 config:
AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY
S3_ENDPOINT=s3.noobaa.svc (or nodePort address from noobaa status)
BUCKET_NAME=nsfs
We might need to create the bucket folder manually on the mount point:
mkdir -p /nsfs/nsfs
chmod -R 777 /nsfs
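For a quick end-to-end test from outside the cluster, here is a sketch using the AWS CLI - it assumes the default noobaa-admin secret and s3 service that the operator creates, plus a local port-forward:
AWS_ACCESS_KEY_ID=$(kubectl get secret noobaa-admin -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
AWS_SECRET_ACCESS_KEY=$(kubectl get secret noobaa-admin -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
kubectl port-forward service/s3 10443:443 &
aws --endpoint-url https://localhost:10443 --no-verify-ssl s3 cp /etc/hosts s3://nsfs/hello.txt
aws --endpoint-url https://localhost:10443 --no-verify-ssl s3 ls s3://nsfs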
Update the resource allocation to unlimited and scale up the number of endpoints:
kubectl patch noobaa noobaa --type merge --patch '{
  "spec": {
    "coreResources": {
      "requests": null,
      "limits": null
    },
    "dbResources": {
      "requests": null,
      "limits": null
    },
    "endpoints": {
      "minCount": 8,
      "maxCount": 8,
      "resources": {
        "requests": null,
        "limits": null
      }
    }
  }
}'
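The operator will reconcile the endpoint deployment after this patch; you can watch it converge to 8 replicas:
kubectl rollout status deployment noobaa-endpoint
kubectl get deployment noobaa-endpoint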