
qubes backup restore fails #5393

Closed
lattice0 opened this issue Oct 16, 2019 · 7 comments
Labels
C: core eol-4.0 Closed because Qubes 4.0 has reached end-of-life (EOL) P: major Priority: major. Between "default" and "critical" in severity.

Comments

@lattice0

Qubes OS version
4.0.1

Affected component(s) or functionality
Qubes Backup Restore

Brief summary
When I restore the backup I get

Error: unable to extract files for /var/tmp/restoregsvtnc34/vm10/private.img.044.tar output: gzip: stdin: unexpected end of file

Error extracting data: failed to decrypt /var/tmp/restoregsvtnc34/vm10/private.img.045.enc: b'scrypt input is not valid scrypt-encrypted block\n'

done

and then a "finished successfully" popup window appears.

At first I thought it was a problem with my backup, but this is the second time it has happened, with two different backups.

My two VMs got restored, but only the small one worked; the other gives:

Domain: my_vm has failed to start: qrexec-daemon startup failed: Connection to the VM failed

To Reproduce
Steps to reproduce the behavior:
1. Do a backup
2. Reinstall Qubes
3. Restore
4. See the error

Expected behavior
Backups restore without errors and the restored VMs launch.

The VM with the problem is a standalone based on a template. I did not restore its template, but since it's standalone, I don't think I need to. The first time, I also restored all VMs including the templates, and I got the same problem.

I also noticed that before restoring, this VM had 60 GB in system storage and about 80 GB in private storage, but after the restore it had 100 GB in system and 2 GB in private storage.

@lattice0 lattice0 added P: default Priority: default. Default priority for new issues, to be replaced given sufficient information. T: bug labels Oct 16, 2019
@andrewdavidwong andrewdavidwong added this to the Release 4.0 updates milestone Oct 16, 2019
@lattice0
Author

I just did another backup and verified it. I got a "verified with errors" error.

@a-barinov

After facing the same problem and doing quite a bit of research, I found the reason for it. If I back up to a fast medium (e.g. SSD), things work fine. If I back up to a slow medium (USB 2 HDD or over the network), then my backup size becomes smaller (it can be as bad as 25 GB turning into 4 GB over a very slow network connection), and large VMs are affected (they cannot be restored from that backup) while small VMs are fine.

I tried changing qubes.Backup, replacing 'cat > $TARGET' with 'dd obs=10240000 of=$TARGET', and things got better, but 20 GB+ VMs are still not backed up properly. So the problem is linked to the Qubes (Xen?) inter-VM transport mechanism.

As a side note, obs=10240000 crashes dom0 unless it has 2 GB of memory allocated (I usually run it with 768 MB), which is surprising given that we are talking about a 10 MB buffer. This sort of confirms it is an inter-VM issue.
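The `cat` → `dd` swap described above can be tried locally before touching the real service script. Below is a runnable stand-in: the incoming qrexec stream and the `$TARGET` backup file are simulated with hypothetical `/tmp` paths, since the actual change would go into the qubes.Backup qrexec service in the target VM.

```shell
#!/bin/sh
# Stand-in for the cat -> dd swap from the comment above.
# SRC simulates the incoming backup stream; TARGET simulates the
# backup destination file (both paths are illustrative).
SRC=/tmp/stream-demo.bin
TARGET=/tmp/backup-demo.bin

# Generate 1 MiB of test data standing in for the backup stream.
head -c 1048576 /dev/urandom > "$SRC"

# Before: cat > "$TARGET"
# After:  dd accumulates output into ~10 MB blocks before each write,
#         smoothing small bursty reads into large writes toward the
#         slow medium.
dd obs=10240000 of="$TARGET" < "$SRC" 2>/dev/null

# The data must survive the swap byte-for-byte.
cmp -s "$SRC" "$TARGET" && echo "identical"
```

Either way the bytes written are the same; only the write block size changes, which is why this helps with slow media but cannot by itself explain lost data.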

@a-barinov

Looking further into this, I think I found the simplest solution: 'qubes-backup' should allow piping the backup to a dom0 script (currently not allowed in the backup app, as this is not supported by admin.Backup.Execute). This would allow piping the backup to the 'pv' command, which limits the throughput rate, and then piping it to 'qubes.Backup' in a target VM. Currently I need to back up to dom0 and then pipe through pv manually.
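The manual workaround described above might look like the sketch below. The VM name (`backup-vm`), the mount point, and the 5 MB/s rate limit are illustrative assumptions, not values from the thread; only the overall shape (back up to dom0, then throttle with `pv` into the storage VM) is from the comment.

```shell
# In dom0. Step 1: back up to a fast local directory first.
# (Backup location and VM selection are whatever you normally use.)
qvm-backup /var/tmp/backup

# Step 2: stream the backup file into the storage VM, with pv
# throttling throughput to 5 MB/s so the slow medium can keep up.
# The destination path inside backup-vm is hypothetical.
pv -L 5m /var/tmp/backup/qubes-backup-* \
    | qvm-run --pass-io backup-vm 'cat > /mnt/usb/qubes-backup'
```

This avoids the inter-VM stream ever outrunning the slow medium, at the cost of needing temporary space in dom0 for the full backup.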

@andrewdavidwong andrewdavidwong added the needs diagnosis Requires technical diagnosis from developer. Replace with "diagnosed" or remove if otherwise closed. label Jun 30, 2020
@lattice0
Author

lattice0 commented Jul 2, 2020

@a-barinov thanks, this is the only thing affecting my Qubes usage right now. It would be nice to have reliable backups.

@akkuladezeit

akkuladezeit commented Feb 5, 2022

The bug is still present. I can't restore in Qubes 4.1 after moving from 4.0.
This will now affect many users if they upgrade from 4.0 to 4.1 with a fresh installation!

Is there no plausibility check after backup creation? I created some backups, and the estimated size never matched the final size (tested without compression).
For example, the estimated size was 150 GB and the final size was 89 GB. Something must really be going wrong.

@andrewdavidwong andrewdavidwong added P: major Priority: major. Between "default" and "critical" in severity. and removed P: default Priority: default. Default priority for new issues, to be replaced given sufficient information. labels Feb 5, 2022
@keepiru

keepiru commented Feb 13, 2022

I encountered this as well when migrating from 4.0 to 4.1, both when using the GUI and the CLI. The rest of my qubes restored successfully after I excluded the broken one. Trimmed log of restoring the one that didn't work:

kai@dom0:~/notes$ time qvm-backup-restore --verbose --skip-dom0-home -d sys-usb -p ~/pw /media/user/backups-enc/qubes-backup-2022-02-09T124331 sleepyhead
[ ... ]
2022-02-12 16:04:24,683 [MainProcess restore._restore_vm_data:1443] qubesadmin.backup: Getting new file: vm24/private.img.096.enc
2022-02-12 16:04:24,685 [ExtractWorker3-2 restore.run:621] qubesadmin.backup.extract: Extracting file /var/tmp/restorez1je99ic/vm24/private.img.095
2022-02-12 16:04:24,685 [ExtractWorker3-2 restore.run:733] qubesadmin.backup.extract: Releasing next chunk
2022-02-12 16:04:26,051 [ExtractWorker3-2 restore.run:754] qubesadmin.backup.extract: Removing file /var/tmp/restorez1je99ic/vm24/private.img.095
2022-02-12 16:04:29,793 [MainProcess restore._restore_vm_data:1443] qubesadmin.backup: Getting new file: vm24/private.img.097.enc
2022-02-12 16:04:29,797 [ExtractWorker3-2 restore.run:621] qubesadmin.backup.extract: Extracting file /var/tmp/restorez1je99ic/vm24/private.img.096
2022-02-12 16:04:29,797 [ExtractWorker3-2 restore.run:733] qubesadmin.backup.extract: Releasing next chunk
2022-02-12 16:04:31,369 [ExtractWorker3-2 restore.run:754] qubesadmin.backup.extract: Removing file /var/tmp/restorez1je99ic/vm24/private.img.096
2022-02-12 16:04:36,837 [ExtractWorker3-2 restore.collect_tar_output:434] qubesadmin.backup.extract: tar2_stderr:
2022-02-12 16:04:36,838 [ExtractWorker3-2 restore.cleanup_tar2:498] qubesadmin.backup.extract: ERROR: unable to extract files for /var/tmp/restorez1je99ic/vm24/private.img.096, tar output:

gzip: stdin: unexpected end of file

2022-02-12 16:04:36,838 [ExtractWorker3-2 restore.run:766] qubesadmin.backup.extract: Finished extracting thread
2022-02-12 16:04:36,840 [MainProcess restore.restore_do:1968] qubesadmin.backup: Error extracting data: failed to decrypt /var/tmp/restorez1je99ic/vm24/private.img.097.enc: b'scrypt: Input is not valid scrypt-encrypted block\n'
2022-02-12 16:04:36,842 [MainProcess restore.restore_do:1977] qubesadmin.backup: -> Done.
2022-02-12 16:04:36,842 [MainProcess restore.restore_do:1979] qubesadmin.backup: -> Please install updates for all the restored templates.

@andrewdavidwong andrewdavidwong added the eol-4.0 Closed because Qubes 4.0 has reached end-of-life (EOL) label Aug 5, 2023
@github-actions

github-actions bot commented Aug 6, 2023

This issue is being closed because:

- Qubes 4.0 has reached end-of-life (EOL).

If anyone believes that this issue should be reopened and reassigned to an active milestone, please leave a brief comment.
(For example, if a bug still affects Qubes OS 4.1, then the comment "Affects 4.1" will suffice.)

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 6, 2023
@andrewdavidwong andrewdavidwong removed the needs diagnosis Requires technical diagnosis from developer. Replace with "diagnosed" or remove if otherwise closed. label Aug 6, 2023