
syncoid - Removing poolname because no matching snapshots were found NEWEST SNAPSHOT: syncoid_hostname_2021-04-07... #635

Closed
slrslr opened this issue Apr 7, 2021 · 8 comments


slrslr commented Apr 7, 2021

Foreword: to save the reader's time, i highlighted the important parts in bold text:

Hello,
i want to sync my pool to a different drive, so i first tried commands like:

zfs snapshot -r pool1@snapshot_name
sudo zfs send -Rv pool1@snapshot_name | sudo zfs receive -Fdus pool2
the -F switch was unable to transfer the encrypted dataset ("zfs receive -F cannot be used to destroy an encrypted filesystem"), so i had to transfer that dataset alone, without the -F switch:
sudo zfs send -Rwv pool1/enc@snapshot03 | sudo zfs receive -dus pool2/enc
(the -w switch because of the possible error "may not be sent with properties without the raw flag"), but it wrongly created the dataset as pool2/enc/enc
so i had to use sudo zfs rename pool2/enc pool2/enc2; sudo zfs rename pool2/enc2/enc pool2/enc
after i rebooted and reconnected the drive, resuming did not work using the token (zfs get all | grep token):
sudo zfs send -t 1-long-phrase-here|sudo zfs receive -Fdus pool2/enc

cannot receive resume stream: kernel modules must be upgraded to receive this stream.

i also tried renaming datasets on the backup drive, and some have different keys than on the source pool

so i downloaded syncoid and made it executable:
wget https://raw.githubusercontent.com/jimsalterjrs/sanoid/master/syncoid -O ~/apps/syncoid; chmod +x ~/apps/syncoid
and ran:
./syncoid primarypoolname backuppoolname

CRITICAL ERROR: Target backuppoolname exists but has no snapshots matching with pool!
Replication to target would require destroying existing
target. Cowardly refusing to destroy your existing target.

so, not knowing how to sync properly while preserving the data i had already transferred, i added the "--force-delete" switch to the previous syncoid command.
Weird output: it rapidly printed log lines of the same type, over and over:

Removing backuppoolname because no matching snapshots were found
NEWEST SNAPSHOT: syncoid_hostname_2021-04-07:17:47:02-GMT02:00
Removing backuppoolname because no matching snapshots were found
NEWEST SNAPSHOT: syncoid_hostname_2021-04-07:17:47:02-GMT02:00

this flood of messages seems weird.

QUESTIONS:
1)
Why was syncoid flooding the screen with those messages while the drive was working, instead of just marking the drive as empty/clean and starting to copy the data? I would guess the zpool destroy command is fast.
2)
if i want to keep my backuppoolname in sync with my primarypoolname, is running "./syncoid primarypoolname backuppoolname" hourly a good way to do it?

thank you

@jimsalterjrs
Owner

sudo zfs send -Rv pool1@snapshot_name | sudo zfs receive -Fdus pool2

You can't zfs receive directly to the root dataset of a pool, with or without syncoid, with or without encryption. The first time you replicate from source to target, you must replicate to a target which does not yet exist—only after you've established that first full replication can you incrementally replicate to that target.

i used "--force-delete" switch to previous syncoid command.

Well, I never considered somebody might try to forcibly delete an entire target pool. So there's no trap for that, and you got weird output. =)

What you want to do—back up an entire pool—is usually accomplished something like this:

syncoid -r pool1 pool2/pool1

Remember, pool2/pool1 should not exist already the first time you run that command—don't zfs create pool2/pool1 first!
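The workflow above could be scripted roughly as follows. This is a dry-run sketch that only echoes the commands so the sequence is easy to review; pool1/pool2 are the pool names used in this thread, and the repeated invocation just illustrates that later runs are incremental:

```shell
#!/bin/sh
# Dry-run sketch of backing up pool1 into pool2/pool1 with syncoid.
# run() only echoes each command; drop the 'echo' to execute for real.
run() { echo "$@"; }

# First run: pool2/pool1 must NOT exist yet -- the initial full
# replication creates it on the target.
run syncoid -r pool1 pool2/pool1

# Every later run with the same arguments is incremental: only blocks
# changed since the last common snapshot are sent, so scheduling this
# hourly (e.g. from cron) is cheap.
run syncoid -r pool1 pool2/pool1
```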


slrslr commented Apr 11, 2021

I think that you have not understood the illogical behavior of your script.

If i want to "mirror" a pool from drive A to drive B, i would expect a command like:
syncoid pool1 pool2
as mentioned here
your script says that the target "exists but has no snapshots matching with pool!
Replication to target would require destroying existing" and offers the --force-delete switch, which i am not afraid to use since i no longer need the pool on the destination. But after using it, your script behaves weirdly, flooding the terminal:
Removing backuppoolname because no matching snapshots were found
NEWEST SNAPSHOT: syncoid_hostname_2021-04-07:17:47:02-GMT02:00
Removing backuppoolname because no matching snapshots were found
NEWEST SNAPSHOT: syncoid_hostname_2021-04-07:17:47:02-GMT02:00
Removing backuppoolname because no matching snapshots were found
NEWEST SNAPSHOT: syncoid_hostname_2021-04-07:17:47:02-GMT02:00
Removing backuppoolname because no matching snapshots were found
NEWEST SNAPSHOT: syncoid_hostname_2021-04-07:17:47:02-GMT02:00
...
...

my only aim is to clone the pool; the process should be more friendly than it is now

0xFate (Contributor) commented Apr 11, 2021 via email


slrslr commented Apr 23, 2021

syncing pool1 to pool2/dataset1.

how do you mean this? @jimsalterjrs mentioned a different command:

syncoid -r pool1 pool2/pool1

i have:
poolname
disk-id-1
disk-id-2
(2 small drives zfs "raid 0")

which i assume means i have two vdevs (or datasets) in my zpool, and i want to mirror the pool to a single large backup drive.
So per what i have read, even though it is not logical to me, i have to create a kind of child dataset like this:

syncoid -r pool1 pool2/pool1

(making sure pool2/pool1 does not exist on the destination)

but when i run:

syncoid -r pool pool2

it transfers data anyway, so what does that mean? That after the transfer, the data will be unreadable?


jimsalterjrs commented Apr 23, 2021

but when i run syncoid -r pool pool2 it transfers data anyway, so what does that mean? That after the transfer, the data will be unreadable?

Any data you stored directly in pool will not be present on pool2. Only data which you placed in datasets beneath pool.

Eg:

root@box:~# mkdir /pool/slrslr 
root@box:~# cp -r /home/slrslr/Pictures /pool/slrslr/
root@box:~# zfs create pool/dataset
root@box:~# cp -r /home/slrslr/Documents /pool/dataset/
root@box:~# syncoid -r pool pool2

You will get the same syncoid errors you've been seeing, but also you will see data transferring. When the syncoid command is done, you will have pool2/dataset/Documents because it was in a child dataset of pool and replicated fine. You will not have /pool2/Pictures, because Pictures was stored in the root dataset of pool and was not replicated—since, again, you cannot replicate to an existing dataset, which means you cannot replicate to a pool root dataset (since it always exists).

Please stop opening new issues about this. The answers will not change.


slrslr commented Jun 8, 2021

So since i had no data in the pool itself, only in its datasets, i was using the command syncoid -r pool1 pool2
But after removing some snapshots (from one of the pools), syncoid complains it cannot find a matching snapshot and fails to sync.
Does that mean i have to waste time deleting pool2 and copying all the TBs of data again, instead of syncing only the tiny missing part? Or can i copy snapshots between the pools so syncoid can resume the sync? i was unable to find how to do it. Thank you
Possibly a wrong ZFS/syncoid design that it cannot just generate a new snapshot.
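The failure described here follows from how incremental replication works: an incremental zfs send needs a base snapshot that still exists on both sides. A minimal sketch of that matching check (a hypothetical helper for illustration, not syncoid's actual code; snapshot names are invented):

```python
def newest_common_snapshot(source_snaps, target_snaps):
    """Return the newest snapshot name present on both sides, or None.

    Lists are assumed oldest-first, as `zfs list -t snapshot -s creation`
    would print them.  Without a common snapshot there is no incremental
    base, so the only options are a full re-send or destroying the target.
    """
    target = set(target_snaps)
    for snap in reversed(source_snaps):  # walk from newest to oldest
        if snap in target:
            return snap
    return None

# If you prune snapshots on either side, always keep at least one in common:
src = ["syncoid_host_2021-06-01", "syncoid_host_2021-06-07", "syncoid_host_2021-06-08"]
dst = ["syncoid_host_2021-06-01", "syncoid_host_2021-06-07"]
print(newest_common_snapshot(src, dst))  # -> syncoid_host_2021-06-07
```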


slrslr commented Sep 25, 2022

Today i wanted to back up my pool and its datasets to my external drive, which was empty, so i did
syncoid -r pool pool2

Again output:

CRITICAL ERROR: Target pool2 exists but has no snapshots matching with pool!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.

          NOTE: Target pool2 dataset is < 64MB used - did you mistakenly run
                `zfs create pool2` on the target? ZFS initial
                replication must be to a NON EXISTENT DATASET, which will
                then be CREATED BY the initial replication process.

possibly due to the -r switch; otherwise it is weird to show a critical error.
yet the sync is in progress; after it is complete, i hope the next run of "syncoid -r pool pool2" will work without issue @jimsalterjrs please?

Second question: i do not see any mention of whether i can safely interrupt the sync process (Ctrl+C) and then run "syncoid -r pool pool2" again to resume without data corruption? Thank you

@phreaker0
Collaborator

Today i wanted to backup pool and its datasets to my external drive which was empty, so i did syncoid -r pool pool2
...
possibly due to -r switch, else it is weird to show critical error. yet sync is in progress, after it is complete, i hope i will be able to next run of "syncoid -r pool pool2" without issue @jimsalterjrs please?

you need to read up on how ZFS works. You can't replicate the actual pool root dataset; source and target will never have a matching snapshot there. In this case you would use the "--skip-parent" CLI option for syncoid so only the child datasets are replicated.

The error is not weird, but just states why it can't replicate the requested dataset.

Second question, i do not see any mention on if i can safely interrupt the sync process (Ctrl+C) and then run "syncoid -r pool pool2" again to resume without data corruption? Thank you

ZFS will make sure this is safe. If you have a recent version of ZFS, syncoid will also attempt to resume the transfer so it doesn't have to resend everything again.
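Putting both answers together, a dry-run sketch of the suggested invocation (commands are only echoed; --skip-parent is the syncoid option named above, the rest of the setup is an assumption based on this thread):

```shell
#!/bin/sh
# Dry-run sketch: replicate only the child datasets of 'pool' into
# 'pool2', skipping the pool root dataset that always exists on the
# target.  run() echoes instead of executing.
run() { echo "$@"; }

# No "target exists" error for pool2 itself, because the root dataset
# is skipped and each child lands as pool2/<child>.
run syncoid -r --skip-parent pool pool2

# If the transfer is interrupted (Ctrl+C, reboot), rerun the same
# command: with a recent ZFS, syncoid picks up the partial send via
# the target's receive_resume_token instead of starting over.
run syncoid -r --skip-parent pool pool2
```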
