
Thin pool does not start if VM has "-" in name #4332

Closed
daniel-ayers opened this issue Sep 23, 2018 · 7 comments
Assignees
marmarek

Labels
C: core, eol-4.0 (Closed because Qubes 4.0 has reached end-of-life (EOL))

Comments

@daniel-ayers

Qubes OS version:

4.0 (with current dom0 updates)

Affected component(s):

Storage (of AppVMs on Secondary Storage per instructions at https://www.qubes-os.org/doc/secondary-storage/)


Steps to reproduce the behavior:

  1. Configure a new volume group and thin pool by following the instructions at https://www.qubes-os.org/doc/secondary-storage/. (A sketch of the commands involved appears after the output below.)

    In my case I created a pool called ssdpool in a VG called ssd over two PVs, each of which was inside a LUKS container. Result:

[me@dom0 ~]$ sudo pvs
  PV                                                    VG         Fmt  Attr PSize   PFree 
  /dev/mapper/luks-01e0c688-dbc7-4366-858d-c29342b8f9f1 qubes_dom0 lvm2 a--  475.98g     0 
  /dev/mapper/sda1_crypt                                ssd        lvm2 a--  978.08g 46.34g
  /dev/mapper/sdb1_crypt                                ssd        lvm2 a--  931.51g     0 

[me@dom0 ~]$ sudo lvs
  LV                                      VG         Attr       LSize   Pool    Origin                                  Data%  Meta%  Move Log Cpy%Sync Convert
  root                                    qubes_dom0 -wi-ao---- 411.98g                                                                                        
  swap                                    qubes_dom0 -wi-ao----  64.00g                                                                                        
  ssdpool                                 ssd        twi-aotz--   1.82t                                                 0.12   0.50                            
  vm-another-test-private                 ssd        Vwi-a-tz--   2.00g ssdpool vm-another-test-private-1537743808-back 5.22                                   
  vm-another-test-private-1537743808-back ssd        Vwi-a-tz--   2.00g ssdpool                                         0.00                                   
  vm-another-test-private-snap            ssd        Vwi-aotz--   2.00g ssdpool vm-another-test-private                 5.27                                   
  vm-another-test-volatile                ssd        Vwi-aotz--  10.00g ssdpool                                         0.02                                   
  vm-baldrick-private                     ssd        Vwi-a-tz--   2.00g ssdpool                                         0.00                                   
  vm-file-library-private                 ssd        Vwi-a-tz--  97.66g ssdpool vm-file-library-private-1537743461-back 2.15                                   
  vm-file-library-private-1537743461-back ssd        Vwi-a-tz--  97.66g ssdpool                                         2.15  
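
For reference, a minimal sketch of the commands behind step 1, per the linked secondary-storage instructions; the device name (/dev/sda1), the pool size, and the revisions_to_keep value are example assumptions, and the qvm-pool syntax shown is the Qubes 4.0 form:

```
# Unlock the underlying partition (device name is an example):
sudo cryptsetup open /dev/sda1 sda1_crypt

# Create the PV, the VG, and the thin pool (size is an example):
sudo pvcreate /dev/mapper/sda1_crypt
sudo vgcreate ssd /dev/mapper/sda1_crypt
sudo lvcreate -T -L 900G ssd/ssdpool

# Register the pool with Qubes:
qvm-pool --add ssdpool lvm_thin -o volume_group=ssd,thin_pool=ssdpool,revisions_to_keep=2
```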
  2. Create a new VM using that pool, making sure the name of the VM includes a "-":

[me@dom0]$ qvm-create -P ssdpool --label red file-library

  3. Reboot the computer.

  4. Attempt to start the new VM. It does not start, and the message "Qube Status: file library Domain file-library has failed to start: volume ssd/vm-file-library-private missing" appears.

  5. Create another VM using the new thin pool, this time ensuring the name does not contain a "-":

[me@dom0]$ qvm-create -P ssdpool --label red baldrick

  6. Reboot.

  7. Attempt to start file-library. It works.

Expected behavior:

Each VM starts as requested.

Actual behavior:

I tested this with various combinations of VMs using the new thin pool: only file-library (fails); file-library and baldrick (works); file-library and another-vm (fails); file-library, another-vm and baldrick (works).

If all of the VMs using the thin pool have "-" in their names, the pool does not start.
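
For what it's worth, the activation state can be checked directly in dom0; an inactive LV shows no 'a' in the fifth lv_attr character, and the pool can be activated by hand (VG and pool names taken from the output above):

```
# Check which LVs in the ssd VG are active
# ('a' in the 5th attribute character means active):
sudo lvs -o lv_name,lv_attr ssd

# Manually activate the thin pool, or the whole VG:
sudo lvchange -ay ssd/ssdpool
sudo vgchange -ay ssd
```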

General notes:

It appears there is a bug where the thin pool is not started unless at least one VM using it has no "-" in its name. This suggests a bug in parsing VM or LV names for "-", noting that "-" is also used as a delimiter within LV names.
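
For context on the delimiter point: device-mapper joins the VG and LV names with a single "-" and escapes any "-" inside either name by doubling it, so the same volume is reachable under two paths (names taken from the lvs output above):

```
# /dev/mapper uses the mangled name, internal dashes doubled:
ls -l /dev/mapper/ssd-vm--file--library--private

# The per-VG symlink keeps the original LV name:
ls -l /dev/ssd/vm-file-library-private
```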



@esote

esote commented Sep 25, 2018

Source for qvm-create is here and qvm-start here, if it helps with diagnosing your issue. I don't know if this is related to the command or to how Qubes starts the VMs.

@marmarek self-assigned this on Oct 29, 2018
@marmarek
Member

I've just tried and cannot reproduce the problem. Does it still happen on your system, @daniel-ayers?

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Oct 30, 2018
If the pool or volume group name has '-', it will be mangled as '--' in
/dev/mapper. Use the /dev/VG_NAME/LV_NAME symlink instead.

Related QubesOS/qubes-issues#4332
@daniel-ayers
Author

I take it the commit means the problem was found? I'm happy to test it again with current released updates on my system if that helps.

@marmarek
Member

No, I haven't managed to reproduce it. The commit fixes similar issues in the automated tests, but those are isolated to the tests only (I've tried using them too, to reproduce the problem).

@marmarek
Member

Maybe this isn't about '-' at all? Maybe it's because the secondary VG + thin pool is activated after the qubesd service starts? That wouldn't explain why you had different results depending on the VM name, but maybe that's just a coincidence?
What happens if, in the non-working state, you restart the qubesd service? Does it help?
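
In dom0 that would be, for example:

```
sudo systemctl restart qubesd
```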

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Nov 15, 2018
Commit 15cf593 "tests/lvm: fix checking
lvm pool existence" attempted to fix handling of '-' in pool names by
using the /dev/VG/LV symlink. But those symlinks are not created for
thin pools. Change back to /dev/mapper, but include '-' mangling.

Related QubesOS/qubes-issues#4332
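
A minimal sketch of the "-" mangling the commit describes, in shell (the names are examples taken from this issue, not from the commit itself):

```
# Double every '-' inside the VG and LV names, then join with one '-':
vg="ssd"
lv="vm-file-library-private"
echo "/dev/mapper/${vg//-/--}-${lv//-/--}"
# -> /dev/mapper/ssd-vm--file--library--private
```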
@heinrich-ulbricht

heinrich-ulbricht commented Dec 25, 2018

I've got comparable behavior and the same error message (Qubes 4.0, latest updates). A VM on a second HDD in a thin pool does not start unless I start one VM on the boot HDD first. There is no dash in either of the two VM names.
@marmarek Restarting the qubesd service does indeed solve this! (sudo service qubesd restart in dom0)
Do you need any more input, or is this sufficient for a fix? This is 100% reproducible.
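
If the root cause really is ordering (the secondary VG becoming active only after qubesd has started), one hypothetical workaround would be a systemd drop-in that orders qubesd after LVM activation; the unit name below is an assumption and may differ per setup:

```
# In dom0, create a drop-in for the qubesd service:
sudo systemctl edit qubesd
# and add (lvm2-activation.service is an assumed unit name;
# adjust to whatever activates the secondary VG on your system):
# [Unit]
# After=lvm2-activation.service
# Wants=lvm2-activation.service
```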

@andrewdavidwong added the eol-4.0 label (Closed because Qubes 4.0 has reached end-of-life (EOL)) on Aug 5, 2023
@github-actions

github-actions bot commented Aug 5, 2023

This issue is being closed because:

  * Qubes 4.0 has reached end-of-life (EOL).

If anyone believes that this issue should be reopened and reassigned to an active milestone, please leave a brief comment.
(For example, if a bug still affects Qubes OS 4.1, then the comment "Affects 4.1" will suffice.)

@github-actions github-actions bot closed this as not planned on Aug 5, 2023