Hi there.
I am trying to restore a previously created backup using the gs backend.
The backup process completes without issues, but the restore fails with strange errors.
Here is the debug-level output of `zfsbackup receive --auto -d -F my_pool gs://my_pool_snapshots my_pool --logLevel debug`:
2023/01/17 14:27:42 Setting number of cores to: 2
2023/01/17 14:27:42 Loaded private key ring
2023/01/17 14:27:42 Loaded public key ring
2023/01/17 14:27:42 Setting working directory to /root/.zfsbackup
2023/01/17 14:27:42 PGP Debug Info:
Loaded Private Keys:
Loaded Public Keys:
2023/01/17 14:27:42 Limiting the number of active files to 5
2023/01/17 14:27:42 Initializing Backend gs://my_pool_snapshots
2023/01/17 14:27:43 Calculating how to restore to zfs-auto-snap_daily-2023-01-09-0625.
2023/01/17 14:27:43 Getting ZFS Snapshots with command "zfs list -H -d 1 -p -t snapshot,bookmark -r -o name,creation,type -S creation my_pool"
2023/01/17 14:27:43 Adding backup job for zfs-auto-snap_daily-2023-01-09-0625 to the restore list.
2023/01/17 14:27:43 Need to restore 1 snapshots.
2023/01/17 14:27:43 Restoring snapshot zfs-auto-snap_daily-2023-01-09-0625 (1/1)
2023/01/17 14:27:43 Initializing Backend gs://my_pool_snapshots
2023/01/17 14:27:43 Enabling the full path (-d) flag on the receive.
2023/01/17 14:27:43 Enabling the forced rollback (-F) flag on the receive.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1.
2023/01/17 14:27:43 Starting zfs receive command: zfs receive -d -F my_pool
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol2.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol3.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol4.
2023/01/17 14:27:43 Downloading volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol5.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol2.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol5.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol4.
2023/01/17 14:27:55 Downloaded my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1.
2023/01/17 14:27:55 Processing my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1.
2023/01/17 14:27:56 Error while trying to read from volume my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol1 - io: read/write on closed pipe
2023/01/17 14:27:56 Error waiting for zfs command to finish - signal: aborted (core dumped): internal error: Unknown error 1037
2023/01/17 14:27:56 Could not kill zfs send command due to error - os: process already finished
2023/01/17 14:27:56 Could not download file my_pool|zfs-auto-snap_daily-2023-01-09-0625.zstream.gz.vol3 to the local cache dir due to error - context canceled.
2023/01/17 14:27:56 There was an error during the restore process, aborting: signal: aborted (core dumped)
2023/01/17 14:27:56 Failed to restore snapshot.
zfsbackup version output:
Program Name: zfsbackup
Version: v0.4
OS Target: linux
Arch Target: amd64
Compiled With: gc
Go Version: go1.14.2
zfs version output:
zfs-0.8.3-1ubuntu12.14
zfs-kmod-2.0.2-1ubuntu5
uname -a output:
Linux carbonite-node 5.11.0-1017-gcp #19~20.04.1-Ubuntu SMP Thu Aug 12 05:25:25 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
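To narrow down whether the failure is in zfsbackup's restore pipeline or in `zfs receive` itself, one thing I could try is reassembling the downloaded volumes by hand and feeding the decompressed stream straight to `zfs receive`. This is only a sketch under the assumption that the `.volN` files are sequential pieces of a single gzip-compressed `zfs send` stream (which the `.zstream.gz.volN` naming in the log suggests, but I have not verified); the bucket and dataset names come from the log above, and the function names are mine:

```python
"""Manually reassemble zfsbackup volume files and pipe them to `zfs receive`.

Sketch only: assumes the volumes were downloaded first with something like
  gsutil cp 'gs://my_pool_snapshots/my_pool|*.zstream.gz.vol*' .
"""
import glob
import gzip
import re
import subprocess


def reassemble(volume_glob: str) -> bytes:
    """Concatenate .volN pieces in numeric order and decompress the result."""
    def vol_index(path: str) -> int:
        m = re.search(r"\.vol(\d+)$", path)
        if m is None:
            raise ValueError(f"not a volume file: {path}")
        return int(m.group(1))

    paths = sorted(glob.glob(volume_glob), key=vol_index)
    raw = b"".join(open(p, "rb").read() for p in paths)
    # gzip.decompress also handles a stream of concatenated gzip members,
    # so this works whether each volume is its own member or a raw split.
    return gzip.decompress(raw)


def restore(volume_glob: str, dataset: str = "my_pool") -> None:
    """Feed the reconstructed send stream directly to `zfs receive`,
    so its error output is visible without zfsbackup in between."""
    subprocess.run(["zfs", "receive", "-d", "-F", dataset],
                   input=reassemble(volume_glob), check=True)
```

If `zfs receive` aborts the same way on the hand-fed stream, the problem would be in zfs itself rather than in zfsbackup.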
Any suggestions on how to bypass, debug, or fix this?
Thanks in advance!