Ansible playbook to configure my Arm NASes:
The current iteration of the HL15 I'm running contains the following hardware:
- (Motherboard) ASRock Rack ALTRAD8UD-1L2T (specs)
- (Case) 45Homelab HL15 + backplane + PSU
- (PSU) Corsair RM750e
- (RAM) 8x Samsung 16GB 1Rx4 ECC RDIMM M393A2K40DB3-CWE PC4-25600
- (NVMe) Kioxia XG8 2TB NVMe SSD
- (CPU) Ampere Altra Q32-17
- (SSDs) 4x Samsung 8TB 870 QVO 2.5" SATA
- (HDDs) 6x Seagate EXOS 20TB SATA HDD
- (HBA) Broadcom MegaRAID 9405W-16i
- (Cooler) Noctua NH-D9 AMP-4926 4U
- (Case Fans) 6x Noctua NF-A12x25 PWM
- (Fan Hub) Noctua NA-FH1 8 channel Fan Hub
Some of the above links are affiliate links. I have a series of videos showing how I put this system together:
- Part 1: How efficient can I build the 100% Arm NAS?
- Part 2: Silencing the 100% Arm NAS—while making it FASTER?
The current iteration of the Raspberry Pi 5 SATA NAS I'm running contains the following hardware:
- (SBC) Raspberry Pi 5
- (HAT) Radxa Penta SATA HAT for Pi 5
- (SSDs) Samsung 870 QVO 8TB SATA SSD
- (microSD) Kingston Industrial 16GB A1
- (Network) Plugable 2.5Gbps USB Ethernet Adapter
- (Power) TMEZON 12V 5A AC adapter
Some of the above links are affiliate links. I have a series of videos showing how I put this system together:
- Part 1: The ULTIMATE Raspberry Pi 5 NAS
- Part 2: Big NAS, Lil NAS
The HL15 should not require any special prep, besides having Ubuntu installed. The Raspberry Pi 5 is running Debian (Pi OS) and needs its PCIe connection enabled. To do that:
- Edit the boot config:
  sudo nano /boot/firmware/config.txt
- Add the following config at the bottom and save the file:
  dtparam=pciex1
  dtparam=pciex1_gen=3
- Reboot.
- Confirm the SATA drives are recognized with lsblk.
Ensure you have Ansible installed, and that you can SSH into the NAS using ssh user@nas-ip-or-address without entering a password, then run:
ansible-playbook main.yml
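For example, copying your SSH key over and limiting the run to a single host might look like this (the user and hostname here match the examples later in this README; adjust to your own inventory):
# copy your SSH public key to the NAS so Ansible can connect without a password
ssh-copy-id jgeerling@nas01.mmoffice.net
# run the playbook against a single host from the inventory (assuming it's named nas01)
ansible-playbook main.yml --limit nas01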
After the playbook runs, you should be able to access Samba shares (for example, the hddpool/jupiter share) by connecting to the server at the path:
smb://nas01.mmoffice.net/hddpool_jupiter
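If you'd rather mount the share from the command line on a Linux client, a minimal sketch (assuming cifs-utils is installed and /mnt/jupiter exists; not a step the playbook performs) is:
# mount the Samba share; you'll be prompted for the password set with smbpasswd below
sudo mount -t cifs //nas01.mmoffice.net/hddpool_jupiter /mnt/jupiter -o username=jgeerling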
Until issue #2 is resolved, there is one manual step required to add a password for the jgeerling
user (one time). Log into the server via SSH, run the following command, and enter a password when prompted:
sudo smbpasswd -a jgeerling
The same thing goes for the Pi, if you want to access its ZFS volume.
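To confirm a Samba user was added, you can list Samba's user database (a generic Samba command, not something the playbook runs):
# list users known to Samba
sudo pdbedit -L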
Backups of the primary NAS (nas01) to the secondary NAS (nas02) are handled using Sanoid (and its included syncoid replication tool).
Sanoid is configured on nas01 to store a set of monthly, daily, and hourly snapshots. Syncoid runs via cron on nas02 to pull snapshots nightly.
Sanoid should prune snapshots on nas01, and Syncoid on nas02.
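As a rough sketch of how that fits together (the retention counts, schedule, and paths below are illustrative assumptions rather than the exact values used by the playbook; the dataset name follows the hddpool/jupiter example above):
# /etc/sanoid/sanoid.conf on nas01
[hddpool/jupiter]
        use_template = production
        recursive = yes
[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
# /etc/cron.d/syncoid on nas02 -- pull snapshots from nas01 nightly
30 2 * * * root /usr/sbin/syncoid --no-sync-snap jgeerling@nas01.mmoffice.net:hddpool/jupiter hddpool/jupiter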
You can check on snapshot health with:
- nas01:
sudo sanoid --monitor-snapshots && zfs list -t snapshot
- nas02:
zfs list -t snapshot
For example:
jgeerling@nas01:~$ sudo sanoid --monitor-snapshots
OK: all monitored datasets (hddpool/jupiter) have fresh snapshots
Following the 3-2-1 backup principle, I have an offsite replica of all my data stored in an Amazon S3 Glacier Deep Archive-backed bucket.
This keeps offsite storage costs minimal (about $1/TB/month), and using rclone, it is easy enough to keep things in sync between my onsite backups and S3.
The S3 bucket is owned by the IAM user rclone, and is named mm-archive.
Locally, rclone config is set up with an Access Key and Secret Access Key for that rclone IAM user, which allows nas02 to synchronize directories straight into the Amazon S3 bucket.
Full documentation of the setup is in this GitHub issue.
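For reference, a minimal sketch of the rclone remote and a sync command (the remote name s3-archive, region, and destination path are assumptions; the bucket name and IAM user come from above):
# ~/.config/rclone/rclone.conf (keys redacted)
[s3-archive]
type = s3
provider = AWS
access_key_id = AKIA...
secret_access_key = ...
region = us-east-1
storage_class = DEEP_ARCHIVE
# push a local dataset's contents up to the bucket
rclone sync /hddpool/jupiter s3-archive:mm-archive/jupiter --progress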
TODO: We'll cross this bridge if we come to it. The only time I've ever had to retrieve a folder, I used rclone to sync down the directory, but it was a bit of a hassle, since Deep Archive means you have to request files to be put back online for retrieval, and this can take 6-24 hours!
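If I have to do it again, rclone can request the Glacier restore itself before syncing; a rough sketch (the priority and lifetime values are assumptions) looks like:
# ask S3 to bring archived objects back online for 7 days (Bulk is the cheapest tier for Deep Archive)
rclone backend restore s3-archive:mm-archive/jupiter -o priority=Bulk -o lifetime=7
# once the restore completes, sync the directory back down
rclone sync s3-archive:mm-archive/jupiter /hddpool/jupiter --progress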
I like to verify the performance of my NAS storage pools on the device itself, using my disk-benchmark.sh script.
You can run it by copying it to the server, making it executable, and running it with sudo:
wget https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh
chmod +x disk-benchmark.sh
sudo MOUNT_PATH=/nvmepool/mercury TEST_SIZE=20g ./disk-benchmark.sh
If you're having trouble mounting a share or authenticating with Samba, run sudo watch smbstatus to monitor connections to the server.
Logs inside /var/log/samba aren't useful by default.
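If you need more detail than smbstatus provides, one option (not something the playbook configures) is to raise Samba's log level temporarily:
# /etc/samba/smb.conf -- add under [global]; level 3 is already quite verbose
log level = 3
# then reload Samba and follow the logs
sudo systemctl reload smbd
sudo tail -f /var/log/samba/log.smbd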
# Check pool health (should return 'all pools are healthy')
zpool status -x
# List all zfs pools and datasets
zfs list
# List all zfs pool info
zpool list
# List single zfs pool info (verbose)
zpool status -v [pool_name]
# List all properties for a pool
zfs get all [pool_name]
# Scrub a pool manually (check progress with `zpool status -v`)
zpool scrub [pool_name]
# Monitor zfs I/O statistics (update every 2s)
zpool iostat 2
GPLv3 or later
Jeff Geerling