
Reference manual

If you're new to zfs-autobackup you should read the Getting started page first. It shows you how to get things done.

Then read this full manual to understand how zfs-autobackup works, and what all the options are doing.

Both guides are complementary.

Usage

zfs-autobackup (options) BACKUP-NAME [TARGET-PATH]

The only required parameter is BACKUP-NAME.
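
For example, assuming a backup named offsite1 and a target pool called backuppool (both placeholder names), a basic run that snapshots the selected datasets and sends them to backuppool/backups could look like this:

zfs-autobackup -v offsite1 backuppool/backups

Leave out the TARGET-PATH and the same command only creates (and thins) snapshots on the source.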

Safe defaults

zfs-autobackup uses safe defaults such as:

  • Preserving all dataset properties. This can have its drawbacks too; see Mounting for an example.
  • Preserving full dataset paths.
  • Only modifying snapshots that match the zfs-autobackup format.
  • Not rolling back or forcing anything.
  • Checking everything, failing early and in a verbose manner.
  • Not doing anything unexpected.

Keeping this in mind helps make sense of the options described here: most of them exist to modify these safe defaults.

Testing and debugging

It's recommended to always use --verbose or -v to see what's going on. It makes debugging easier.

During initial setup and testing of a backup you should use --test. This will perform all the read-only operations, but will not change anything. It will show you exactly what it's going to do.

If you encounter a problem and want to see the exact ZFS commands, use --debug. This outputs all the underlying ZFS commands in a different color. To see the output of each command, use --debug-output.

Note that debug mode will abort on the first failed dataset and show a stack trace, so it's not recommended for use in production.
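
For example, a dry run of the hypothetical offsite1 backup with verbose output, and the same run with full debugging, could look like this:

zfs-autobackup --verbose --test offsite1 backuppool/backups
zfs-autobackup --verbose --debug --debug-output offsite1 backuppool/backups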

SSH source and target options

zfs-autobackup can be used locally or remotely via ssh.

These options are for backing up to or from remote hosts via ssh:

  • --ssh-source USER@HOST: Source host to pull backup from.
  • --ssh-target USER@HOST: Target host to push backup to.

If you don't specify a source or target host, zfs-autobackup operates locally.

Things like different ssh-ports should be configured in your ~/.ssh/config file. (Or the one specified with --ssh-config CONFIG-FILE)
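
As an illustration (user and host names are placeholders), pulling a backup from a remote source, or pushing one to a remote target, looks like this:

zfs-autobackup -v --ssh-source root@server1.example.com offsite1 backuppool/backups
zfs-autobackup -v --ssh-target backup@backupserver.example.com offsite1 backuppool/backups

A non-standard SSH port for the target would then go into ~/.ssh/config, for example:

Host backupserver.example.com
    User backup
    Port 2222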

Step 1: Selecting

This step selects the datasets that are part of the run.

Dataset property

Selection is done by a dataset property. The name of this property is built from the BACKUP-NAME according to --property-format; by default this results in autobackup:backup-name.

This selection property can have the following values:

  • true: Select the dataset and all its children.
  • false: Exclude the dataset and all its children.
  • child: Only select the children of the dataset, not the dataset itself.
  • parent: Only select the parent, but not the children. (supported in version 3.2 or higher)

If there are no datasets that have this property set then zfs-autobackup exits with an error.
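
For example, to select an entire pool for a backup named offsite1 while excluding one child dataset (pool and dataset names are placeholders), set the property on the source with plain zfs commands:

zfs set autobackup:offsite1=true rpool
zfs set autobackup:offsite1=false rpool/swap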

Further exclusions

Datasets can also be excluded from selection by these options:

  • --exclude-received: Exclude datasets whose autobackup:backup-name property has "received" as its source, i.e. the property was received from another host instead of being set locally. This avoids recursive replication between two backup partners.
  • --exclude-unchanged BYTES: Exclude datasets that have less than BYTES of changed data since the last snapshot. (Useful with Proxmox HA replication; see the example below.)
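
For instance, a hypothetical run that skips datasets with less than 1 MiB of changed data since their last snapshot could look like:

zfs-autobackup -v --exclude-unchanged 1048576 offsite1 backuppool/backups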

Step 2: Snapshotting

In this step a snapshot is created on the datasets selected in step 1.

zfs-autobackup creates atomic snapshots per pool. This is a single zfs snapshot command that includes all the snapshots that need to be taken for that pool.

Snapshotting can be skipped with --no-snapshot. Using this option will result in only syncing existing snapshots.

Snapshot format

Snapshots are created using a specific naming format. This includes a timestamp that zfs-autobackup uses to determine when a snapshot can be destroyed by the Thinner.

It is possible to change this format by using --snapshot-format. Other snapshots that do not match this format are normally ignored by zfs-autobackup. Use --utc to use UTC for timestamps.
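
As a sketch, assuming the format string uses strftime-style placeholders as the default does, an explicit format with UTC timestamps could be passed like this:

zfs-autobackup -v --snapshot-format "{}-%Y%m%d%H%M%S" --utc offsite1 backuppool/backups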

Pre- and post snapshot commands

You can run commands pre- and post-snapshotting with --pre-snapshot-cmd and --post-snapshot-cmd.

More info here.
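
A hedged example: quiescing a hypothetical database around the snapshot step (the service name and commands are placeholders for whatever your application needs):

zfs-autobackup -v --pre-snapshot-cmd "systemctl stop mydb" --post-snapshot-cmd "systemctl start mydb" offsite1 backuppool/backups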

Skipping conditions

Snapshot creation will be skipped for datasets that have no changes since the last snapshot.

This can be controlled by:

  • --min-change BYTES: Only create a snapshot if at least BYTES bytes have changed. (default 1)
  • --allow-empty: If nothing has changed, still create empty snapshots. (Same as --min-change=0; see the example below.)
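
For example, to keep an unbroken series of snapshots even on idle datasets, you could run:

zfs-autobackup -v --allow-empty offsite1 backuppool/backups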

Other options

  • --set-snapshot-properties PROPERTY=VALUE,...: List of properties to set on the new snapshot.

Step 3: Synchronising

Synchronisation is done only if TARGET-PATH is specified. Otherwise zfs-autobackup is just a snapshot tool and stops after step 2.

For each selected source dataset, zfs-autobackup performs the following steps:

Step 3.1: Planning

If the target dataset already exists:

  • We determine the Common snapshot.
  • We check the GUID of the common snapshot, unless --no-guid-check is set.
  • We determine a list of incompatible snapshots that are in the way (target snapshots after our common snapshot).
  • If there isn't a valid common snapshot, this dataset fails and we continue with the next one.

We determine which snapshots are kept and which ones can be destroyed by the Thinner. Note that only our own snapshots (those matching --snapshot-format) are considered for deletion.

If --no-thinning is used, this list of obsolete snapshots will always be empty.
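
If you want to inspect this yourself, the common snapshot and its GUID can be checked manually with plain ZFS commands (dataset paths are placeholders):

zfs list -t snapshot -o name,guid -r rpool/data
zfs list -t snapshot -o name,guid -r backuppool/backups/rpool/data

A snapshot with the same name but a different GUID on source and target is exactly the kind of mismatch the GUID check protects against.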

Step 3.2: Pre-clean

After planning, provided --no-thinning isn't used, we destroy obsolete snapshots on the source and target to save space during sync.

Step 3.3: Destroy incompatible snapshots

If the planner has detected incompatible snapshots, we will destroy them. But since this can be dangerous and is normally not needed, you have to enable this with --destroy-incompatible.

Otherwise the dataset will fail.
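
For example, once you have verified (with --test) which target snapshots will be lost, a run that is allowed to destroy them could look like:

zfs-autobackup -v --destroy-incompatible offsite1 backuppool/backups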

Step 3.4: Transferring snapshots

Now the snapshots are actually transferred, unless --no-send is used.

If --other-snapshots is specified, we will also transfer snapshots that do not match our --snapshot-format. These other snapshots will never be destroyed.

For each snapshot we:

  • Check if we need to resume an aborted transfer.
  • Handle Encryption options. (--encrypt and --decrypt)
  • Transfer the data, using various Transfer options (--zfs-compressed, --compress, --send-pipe, --recv-pipe, --buffer, --rate)
  • Filter/set properties according to --set-properties and --filter-properties
  • Add/remove holds, unless --no-holds is used. (Use --hold-format to specify the name of this hold)

Just before the first snapshot is transferred, we do a rollback if --rollback is specified.

If it's an initial transfer that created a new target dataset, we try to automount the target after the first snapshot is transferred.

We destroy obsolete snapshots from the planning phase as soon as possible. (--no-thinning effectively disables this.)
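
As an illustrative sketch (host and pool names are placeholders), a push to a remote target that sends blocks in their already-compressed on-disk form could look like:

zfs-autobackup -v --ssh-target backup@backupserver.example.com --zfs-compressed offsite1 backuppool/backups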

Extra transfer options

  • --ignore-transfer-errors: Ignore ZFS transfer errors. It still checks if the received filesystem exists. This is useful to ignore some acltype errors.
  • --clear-refreservation: Filter "refreservation" property. Recommended to save space. Same as --filter-properties refreservation.
  • --clear-mountpoint: Set property canmount=noauto for new datasets. Recommended, prevents mount conflicts. Same as --set-properties canmount=noauto. Also see Mounting and the combined example below.
  • --strip-path N: Number of directories to strip from target path. An example is given in the Getting started guide.
  • --force: Use zfs -F option to force overwrite/rollback. Useful with --strip-path=1. Use with care!
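
A combined example (names are placeholders): back up the selected datasets to backuppool/backups while dropping refreservation, keeping the backups unmounted by default, and stripping the source pool name from the target path:

zfs-autobackup -v --clear-refreservation --clear-mountpoint --strip-path 1 offsite1 backuppool/backups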

Step 4: Handle missing datasets

Datasets that are deselected or no longer exist on the source, but still exist in the TARGET-PATH, are called missing datasets.

The handling of those is described here (--destroy-missing).

Thinner

The thinner decides when a snapshot is obsolete. Look at Thinner for more info. (--keep-source and --keep-target)
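
As a sketch, assuming the schedule syntax described on the Thinner page (a plain count plus interval/time-to-live rules), separate retention schedules for source and target could be passed like this:

zfs-autobackup -v --keep-source 10,1d1w,1w1m,1m1y --keep-target 1d1y offsite1 backuppool/backups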

Running without root

In order to run zfs-autobackup without root permissions, you'll need to set a few ZFS permissions. The permissions required differ for receiving and sending.

On the machine you want to sync the dataset from, you'll need the send, hold, mount, snapshot, and destroy permissions. You can apply them like so:

root@source:~# zfs allow -u localuser mount,send,hold,snapshot,destroy rpool

On the receiving side, you will need the compression, mountpoint, create, mount, receive, rollback and destroy permissions:

root@target:~# zfs allow -u remoteuser compression,mountpoint,create,mount,receive,rollback,destroy tank/backups/rpool
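
With those permissions in place, a hypothetical unprivileged pull from the target host (user, host, and backup names are placeholders) could then look like this, using --strip-path 1 so the source datasets land directly under tank/backups/rpool:

remoteuser@target:~$ zfs-autobackup -v --ssh-source localuser@source --strip-path 1 offsite1 tank/backups/rpool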