Terraform module for Rook Ceph Operator and Cluster

Terraform module used to create a Ceph cluster in Kubernetes via Helm using the following Rook resources:

  • Rook Ceph Operator: Starts the Ceph Operator, which will watch for Ceph CRs (custom resources)
  • Rook Ceph Cluster: Creates Ceph CRs that the operator will use to configure the cluster

Requirements

You'll need to set these up separately:

  • Helm provider available and configured against a Kubernetes cluster (see the sketch below)
  • Kubernetes cluster
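
For example, the Helm provider can be pointed at an existing kubeconfig. A minimal sketch (the config_path is an assumption; use whatever credentials fit your setup):

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumption: path to your cluster's kubeconfig
  }
}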

Usage example

module "rook_ceph" {
  source = "git::git@github.com/log1cb0mb/terraform-helm-rook-ceph.git?ref=main"

  chart_name                  = "rook-ceph"
  rook_version                = "v1.8.8"
  enable_plugins_selinux      = "true"
  provisioner_replicas        = 2
  enable_selinux_relabeling   = false
  hostpath_privileged         = true
  enable_monitoring           = false
  enable_toolbox              = true
  ceph_version                = "quay.io/ceph/ceph:v16.2.7"
  hostpath_dir                = "/var/lib/rook"
  mon_count                   = 3
  mgr_count                   = 2
  device_filter               = "sdb"
  dns_zone                    = "example.com"
  ingress_enabled             = true
  ingress_class               = "nginx"
  cluster_issuer              = "letsencrypt-production"
  custom_blockpools           = [
    {
      name     = "bp-example"
      failure_domain  = "host"
      replicated_pool_size  = "3"
      crush_root = "dc1"
      sc_name = "bp-example"
      sc_enabled = true
      sc_isdefault = false
      sc_reclaim_policy = "Delete"
      sc_allow_volume_expansion = true
      mount_options = {}
      parameters = {}
    },
  ]
  custom_filesystems           = [
    {
      name     = "fs-example"
      failure_domain  = "host"
      metadata_replicated_pool_size  = 3
      data_replicated_pool_size  = 3
      sc_name = "fs-example"
      sc_enabled = true
      sc_isdefault = false
      sc_reclaim_policy = "Delete"
      sc_allow_volume_expansion = true
      mount_options = {}
      parameters = {}
    }
  ]
  custom_objectstores           = [
    {
      name     = "obj-example"
      failure_domain  = "host"
      metadata_replicated_pool_size  = 3
      data_erasure_data_chunks  = 2
      data_erasure_coding_chunks  = 1
      preserve_pool_ondelete = true
      object_gw_port = "80"
      object_gw_secure_port = ""
      object_gw_ssl_cert  = ""
      object_gw_instnces = 1
      healthcheck_bucket_interval = "60s"
      sc_enabled = true
      sc_name  = "obj-example"
      sc_reclaim_policy = "Delete"
      parameters = {}
    }
  ]
  fs_volumesnapshot_class     = [
    {
      enabled  = true
      name     = "fs-example"
      isdefault  = true
      deletion_policy  = "Delete"
      annotations = {}
      labels = {}
      parameters = {}
    }
  ]
  bp_volumesnapshot_class     = [
    {
      enabled  = true
      name     = "bp-example"
      isdefault  = false
      deletion_policy  = "Delete"
      annotations = {}
      labels = {}
      parameters = {}
    }
  ]

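
Once applied, the StorageClasses created by the module can be consumed like any other. A hypothetical PVC against the bp-example storage class from the snippet above, written here with the Terraform kubernetes provider (an assumption; a plain YAML manifest works just as well):

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "data-example" # hypothetical claim name
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "bp-example" # sc_name from the custom_blockpools entry above
    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}
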
Inputs

Name Description Type Default
chart_name Chart name, i.e. rook-ceph or rook-ceph-cluster string rook-ceph
operator_namespace Namespace of the main rook operator string rook-ceph
helm_repository Rook Helm releases repository URL string https://charts.rook.io/release
rook_version Rook release version for operator and ceph-cluster string
enable_crds Create Rook CRDs bool true
enable_rbac Create RBAC resources bool true
enable_psp Create Pod Security Policy resources bool true
log_level The logging level for the operator: ERROR | WARNING | INFO | DEBUG string INFO
enable_rbd_driver Enable Ceph CSI RBD Driver bool true
enable_cephfs_driver Enable Ceph CSI CephFS Driver bool true
enable_plugins_selinux Enable SELinux for CSI plugin pods bool false
provisioner_replicas Replicas for the CSI provisioner deployment number 2
allow_unsupported_version Allow starting an unsupported ceph-csi image bool false
enable_csi_addons Enable the CSIAddons sidecar bool false
enable_selinux_relabeling SELinux relabeling for volume mounts bool true
hostpath_privileged Allow writing to the hostPath, required for Ceph mon and OSD pods on SELinux-enabled hosts bool false
enable_monitoring Enable Ceph monitoring; requires Prometheus to be pre-installed bool false
enable_toolbox Enable Ceph debugging pod deployment bool false
ceph_version Ceph image tag string quay.io/ceph/ceph:v16.2.7
hostpath_dir Path on the host where configuration data will be persisted string /var/lib/rook
mon_count Number of mons. Generally recommended to be 3. For highest availability, an odd number of mons should be specified number 3
mon_multiple_per_node Allow multiple mons on the same node; only for test environments where data loss is acceptable bool false
mgr_count If HA of the mgr is needed, increase the count to 2 for Active/Standby. Rook will update the mgr services to match the active mgr number 2
mgr_multiple_per_node Allow multiple mgr pods on the same node bool false
enable_dashboard Enable the Ceph dashboard bool true
dashboard_ssl Serve Ceph dashboard using SSL bool true
networking Network provider configuration, i.e. host networking or Multus string (WIP)
device_filter A regular expression for short kernel names of devices (e.g. sdb) that allows selection of devices to be consumed by OSDs string sdb
ingress_enabled Create ingress resource for Ceph dashboard bool true
ingress_class Ingress class name for the Ceph dashboard when using a specific ingress controller string nginx
cluster_issuer Cluster issuer name for signing certificate for SSL Dashboard string
dns_zone DNS Zone for Ingress host FQDN string example.com
custom_blockpools Custom Ceph Block Pools in addition to the default pool
list(object({
name = string
failure_domain = string
replicated_pool_size = number
crush_root = string
sc_name = string
sc_enabled = bool
sc_isdefault = bool
sc_reclaim_policy = string
sc_allow_volume_expansion = bool
mount_options = map(string)
parameters = map(string)
}))
[]
custom_filesystems Custom Ceph Filesystems in addition to the default pool
list(object({
name = string
failure_domain = string
metadata_replicated_pool_size = number
data_replicated_pool_size = number
sc_name = string
sc_enabled = bool
sc_isdefault = bool
sc_reclaim_policy = string
sc_allow_volume_expansion = bool
mount_options = map(string)
parameters = map(string)
}))
[]
custom_objectstores Custom Ceph Object Stores in addition to the default pool
list(object({
name = string
failure_domain = string
metadata_replicated_pool_size = number
data_erasure_data_chunks = number
data_erasure_coding_chunks = number
preserve_pool_ondelete = bool
object_gw_port = string
object_gw_secure_port = string
object_gw_ssl_cert = string
object_gw_instnces = number
healthcheck_bucket_interval = string
sc_enabled = bool
sc_name = string
sc_reclaim_policy = string
parameters = map(string)
}))
[]
fs_volumesnapshot_class CephFS Volume Snapshot Class
list(object({
enabled = bool
name = string
isdefault = bool
deletion_policy = string
annotations = map(string)
labels = map(string)
parameters = map(string)
}))
[]
bp_volumesnapshot_class RBD Volume Snapshot Class
list(object({
enabled = bool
name = string
isdefault = bool
deletion_policy = string
annotations = map(string)
labels = map(string)
parameters = map(string)
}))
[]
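
Everything above has a default except rook_version (and cluster_issuer, which only matters while the SSL dashboard ingress is in use), so a minimal invocation can be quite short. A sketch, assuming the defaults are acceptable:

module "rook_ceph_minimal" {
  source = "git::git@github.com:log1cb0mb/terraform-helm-rook-ceph.git?ref=main"

  rook_version    = "v1.8.8"
  ingress_enabled = false # skip the dashboard ingress so no cluster_issuer is needed
}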

Terraform Requirements

Name Version
terraform >= 0.13.0
helm >= 3.x

Kubernetes version: >=1.13
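
In the calling configuration these constraints translate into a pinning block along these lines (a sketch; the helm provider source and version constraint are assumptions, since the table above lists a Helm version rather than a provider version):

terraform {
  required_version = ">= 0.13.0"

  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0" # assumption: a provider release that targets Helm 3
    }
  }
}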
