Storage technology solution trade-off
The selected solution must meet the following requirements:
- It must be compatible with Kubernetes
- It must be capable of both file and block storage
- It must be able to create volumes with ReadWriteMany (RWX) access (see the example below)
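To make the RWX requirement concrete, here is a minimal sketch of a PersistentVolumeClaim that any candidate solution must be able to satisfy. The storage class name `shared-storage` is a placeholder, not a class provided by any specific product.

```yaml
# Minimal sketch: a claim any candidate must be able to bind.
# The storageClassName "shared-storage" is a placeholder.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-sample
spec:
  accessModes:
    - ReadWriteMany          # several pods on different nodes read and write
  storageClassName: shared-storage
  resources:
    requests:
      storage: 10Gi
```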
To evaluate each product, we rely on their official documentation, user feedback, and various benchmarks and analyses.
Software | Community | Support |
---|---|---|
Kadalu (based on GlusterFS) | Kadalu: 340 stars / 53 forks / 22 contributors ; GlusterFS: 3.3k stars / 940 forks / 239 contributors | Developed by Kadalu Storage |
Longhorn | 3.1k stars / 408 forks / 40 contributors | Originally developed by Rancher / CNCF sandbox project |
OpenEBS | 7.1k stars / 883 forks / 180 contributors | Originally developed by MayaData / CNCF sandbox project |
Rook / Ceph | Rook: 9k stars / 2.1k forks / 345 contributors ; Ceph: 9.7k stars / 4.5k forks / 1,119 contributors | Rook: CNCF graduated project ; Ceph backed by Red Hat |
LINSTOR | 442 stars / 33 forks / 14 contributors | Developed by LINBIT |
Kadalu
Pros:
- file storage for Kubernetes based on GlusterFS
Cons:
- still an alpha project (current version is 0.8.6)
- it is not a block storage solution
LINSTOR
Pros:
- manages LVM and ZFS backends
- replication with DRBD: synchronous or asynchronous (see the sketch below)
- snapshots, disaster recovery
- encryption at rest
Cons:
- no native file storage service
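As an illustration, LINSTOR volumes are typically consumed through a StorageClass backed by the LINSTOR CSI driver. The sketch below assumes the `linstor.csi.linbit.com` provisioner and a storage pool named `lvm-thin`; parameter names vary between linstor-csi releases, so treat them as indicative rather than exact.

```yaml
# Sketch of a StorageClass backed by the LINSTOR CSI driver. Parameter names
# may differ between releases; "lvm-thin" is an assumed storage pool name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"          # number of DRBD replicas to place automatically
  storagePool: "lvm-thin" # LINSTOR storage pool backed by LVM thin volumes
reclaimPolicy: Delete
allowVolumeExpansion: true
```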
Longhorn
Pros:
- block storage solution designed for Kubernetes
- hyper-converged storage
- synchronous replication
- supports RWX via NFS (see the sketch below)
- snapshots, backups, disaster recovery
- easy to use
- provides a dashboard
- Prometheus endpoint
Cons:
- Longhorn NFS volumes do not provide HA yet (#2293)
- It does not support the Quality-of-Service (QoS) feature (#750)
- It does not handle asynchronous replication (#1242)
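To make the RWX-via-NFS point concrete, here is a sketch of a Longhorn StorageClass together with an RWX claim against it. It assumes the `driver.longhorn.io` CSI provisioner installed by the Longhorn chart; the parameter values are illustrative defaults, not recommendations.

```yaml
# Sketch: Longhorn StorageClass plus an RWX claim, served over NFS by Longhorn.
# Parameter values are illustrative defaults.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-rwx
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"        # synchronous replicas spread across nodes
  staleReplicaTimeout: "2880"  # minutes before a failed replica is cleaned up
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-shared
spec:
  accessModes:
    - ReadWriteMany            # Longhorn exposes this volume over NFS
  storageClassName: longhorn-rwx
  resources:
    requests:
      storage: 20Gi
```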
OpenEBS
Pros:
- block storage solution designed for Kubernetes
- copy-on-write
- hyper-converged storage
- several backends:
  - local PV (hostpath, device, ZFS, LVM, Rawfile) (see the sketch below)
  - Jiva (based on Longhorn)
  - cStor (backed by OpenEBS, ZFS in userspace)
    - incremental snapshots / clones
    - backup, disaster recovery with Velero
  - Mayastor (written in Rust, iSCSI/NVMe-oF based)
- RWX through the Dynamic NFS Volume Provisioner or the NFS server provisioner stable Helm chart
- synchronous replication for Jiva, cStor and Mayastor
- commercial support
- simple to manage
- provides sample dashboards for Grafana
Cons:
- Mayastor is still beta
- Mayastor has no snapshot or cloning capabilities
- Dynamic NFS Volume Provisioner is in beta state
- NFS server provisioner stable Helm chart is deprecated and no longer supported
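As a minimal illustration of the local PV backend, the sketch below declares a hostpath StorageClass using the OpenEBS Local PV provisioner. The base path value is an assumption; note that local PVs are RWO only, so RWX would still go through the NFS provisioner mentioned above.

```yaml
# Sketch: OpenEBS Local PV (hostpath) StorageClass. The BasePath value is an
# assumption; local PVs are RWO only, RWX needs the NFS provisioner on top.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-example
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```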
Rook / Ceph
Pros:
- Rook is an operator for Ceph clusters
- Rook abstracts Ceph's complexity
- Ceph: block (Ceph RBD), object (Ceph RGW) and file (CephFS) storage
- Ceph Container Storage Interface (CSI) support
- RBD: snapshots, clones
- CephFS: snapshots, clones
- RWX possible with CephFS (see the sketch below)
- hyper-converged storage
- dashboard available
- supports NFS exports with NFS-Ganesha
Cons:
- the Ceph CSI driver does not support QoS (#521)
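As an illustration of RWX with CephFS under Rook, the sketch below outlines a CephFS-backed StorageClass. It assumes a Rook cluster deployed in the `rook-ceph` namespace and a CephFilesystem named `myfs`; the csi.storage.k8s.io secret parameters required by ceph-csi are omitted for brevity.

```yaml
# Sketch: CephFS StorageClass provisioned through the Rook/Ceph CSI driver.
# Assumes a Rook cluster in "rook-ceph" and a CephFilesystem "myfs"; the
# csi.storage.k8s.io/* secret parameters are omitted for brevity.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph   # namespace of the Rook cluster
  fsName: myfs           # CephFilesystem to carve volumes from
  pool: myfs-data0       # data pool backing the filesystem
reclaimPolicy: Delete
allowVolumeExpansion: true
```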
Head to the References section to view the different benchmarks.
Longhorn, OpenEBS, and Rook/Ceph all meet the requirements stated above: they are designed for Kubernetes, provide block and file storage, and support RWX access. Ceph has proven its value for several years, running many production clusters at scale, and Rook is a CNCF graduated project. Longhorn and OpenEBS, however, are CNCF sandbox projects and still maturing.
Ceph provides multiple storage services: object (Ceph object storage), block (Ceph block storage), and file (Ceph file system). In Kubernetes, a Ceph block volume is not available with RWX access, only ReadOnlyMany (ROX) or ReadWriteOnce (RWO). Rook and the Ceph CSI driver support both RBD and CephFS, so we can use Ceph RBD for RWO use cases and rely on CephFS when RWX access is needed. In addition, Ceph was the most widely used open-source storage solution in Kubernetes production clusters in 2020 and has a strong community.
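To summarize that split, the sketch below contrasts a block claim on Ceph RBD (RWO) with a shared claim on CephFS (RWX). The StorageClass names `rook-ceph-block` and `rook-cephfs` follow the Rook example manifests and are assumptions about the local setup.

```yaml
# Sketch: the RWO/RWX split with Rook/Ceph. StorageClass names follow the Rook
# example manifests and are assumptions about the local setup.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-data            # single-writer workload -> Ceph RBD
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets            # multi-writer workload -> CephFS
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 100Gi
```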