
Storage Spaces Direct


Although a cluster can normally be created in the Failover Cluster Manager GUI, Storage Spaces Direct requires that the cluster not claim storage automatically. The cluster must therefore be created in PowerShell with New-Cluster and its NoStorage switch parameter, after which S2D is enabled with Enable-ClusterStorageSpacesDirect. That cmdlet scans every cluster node for local, unpartitioned disks, adds them to a single storage pool, and classifies them by media type so that the fastest disks can be used for caching.
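
A minimal PowerShell sketch of that sequence (node and cluster names are placeholders; the validation step is optional but recommended):

```powershell
# Validate the prospective nodes, including the Storage Spaces Direct tests
Test-Cluster -Node "Node1","Node2","Node3","Node4" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without automatically claiming any eligible storage
New-Cluster -Name "S2DCluster" -Node "Node1","Node2","Node3","Node4" -NoStorage

# Enable S2D: local, unpartitioned disks on every node are pooled,
# and the fastest media type is reserved for the cache
Enable-ClusterStorageSpacesDirect
```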

The recommended drive configuration for each node in an S2D cluster is a minimum of six drives (at least 2 SSDs and at least 4 HDDs), with no RAID or other controller intelligence that cannot be disabled.
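
Before enabling S2D it can be useful to confirm that each node exposes enough eligible (poolable) disks of the expected media types. A quick check, assuming it is run on (or remoted to) each node:

```powershell
# List local disks that are eligible for pooling and summarize them by
# media type and bus type (NVMe devices report BusType NVMe)
Get-PhysicalDisk -CanPool $true |
    Group-Object MediaType, BusType |
    Select-Object Name, Count
```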

Caching is configured automatically depending on the combination of drive types present (the defaults can be inspected or overridden later, as sketched after this list):

  • NVMe + SSD: NVMe drives are configured as a write-only cache for the SSD drives
  • NVMe + HDD: NVMe drives are configured as a read/write cache for the HDD drives
  • NVMe + SSD + HDD: NVMe drives are a write-only cache for the SSD drives and a read/write cache for the HDD drives
  • SSD + HDD: SSD drives are configured as a read/write cache for the HDD drives
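
The automatic behavior can be inspected and, if necessary, overridden with the S2D settings cmdlets. A hedged sketch, assuming the Get-/Set-ClusterStorageSpacesDirect cmdlets and the mode values available on Windows Server 2016 and later (the defaults normally need no change):

```powershell
# Show the current S2D settings, including cache state and cache modes
Get-ClusterStorageSpacesDirect

# Example override: keep the cache write-only in front of SSDs
# and read/write in front of HDDs
Set-ClusterStorageSpacesDirect -CacheModeSSD WriteOnly -CacheModeHDD ReadWrite
```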

Microsoft defines two deployment scenarios for Storage Spaces Direct (a provisioning sketch for each follows this list):

  • Disaggregated, which creates two separate clusters, one of which is a [Scale-out File Server][SoFS] dedicated to storage, essentially functioning as a SAN. This solution requires [DCB][DCB] for traffic management, and at least two 10 Gbps Ethernet adapters are recommended per node, preferably adapters that support RDMA.
  • Hyper-converged, where a single cluster hosts both the VMs and their storage. This solution is much less expensive because it requires less hardware and generates much less network traffic, but storage and compute cannot scale independently: adding a node for storage capacity necessarily adds another Hyper-V host, and vice versa.
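
In both cases the pooled capacity is carved into volumes with New-Volume; the difference is what sits on top of them. A hedged sketch (all names, sizes, and paths are placeholders) contrasting a disaggregated SOFS share with a hyper-converged CSV used directly by Hyper-V:

```powershell
# Create a cluster-shared ReFS volume from the S2D pool
# (resiliency defaults depend on the pool and node count)
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 2TB

# Disaggregated: expose the volume to a separate compute cluster over SMB
Add-ClusterScaleOutFileServerRole -Name "SOFS"
New-Item -Path "C:\ClusterStorage\Volume1\VMs" -ItemType Directory
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\Hyper-V-Hosts"

# Hyper-converged: skip the file server and place local Hyper-V VMs on the CSV
New-VM -Name "VM01" -MemoryStartupBytes 4GB `
    -Path "C:\ClusterStorage\Volume1\VMs" -Generation 2
```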