Volume resizing is stuck for striped LVM thick volumes #430
Comments
Hello, thanks for the report. I think the issue is the following: LINSTOR usually rounds the requested size up to the next integer multiple of LVM's extent size (4M by default). In other words, if you request a 10M volume, LINSTOR rounds it up to 12M; if LINSTOR did not do this, LVM would do the rounding itself. The bug is that this rounding also needs to take the stripe count into account, which LINSTOR currently does not do. So what happens with your 4 stripes is that a 10M volume is not rounded up to 3x 4M (12M) but to 4x 4M (16M); similarly, a 17M volume is not rounded to 20M (5x 4M) but to 32M (8x 4M), and so on.

I will try to fix this in the LINSTOR code, which could be a bit annoying since there are a few cases LINSTOR needs to take into account (e.g. existing volumes and volumes to be created have different sources for the stripe count, etc.). If I am correct, you were just lucky when testing with 2 stripes. ZFS is a completely different story, so I believe we can safely ignore it for this issue. Things get even more complicated because, when you request a volume of size X, LINSTOR has to allocate X plus something extra to give DRBD enough space for its metadata, and that larger size then goes to the LVM layer to be rounded up, etc.

However, to get your resources out of the resizing state, you can retry the resize with a gradually increasing size. For example, if your resource is stuck in a resize to 2G (which I assume from the given ErrorReport), LINSTOR extends this 2048M to 2052M (for DRBD's internal metadata, I assume), and LVM then rounds it further up to 2064M: 2052M would be 513 extents, but the extent count also needs to be a multiple of 4 (the stripe count), so we end up with 516 extents, which is 2064M.

Let me know if that helps.
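To make the arithmetic concrete, here is a minimal Python sketch of stripe-aware rounding, under the assumption that LVM rounds a striped LV up to a whole number of extents per stripe (i.e. the total extent count must be a multiple of the stripe count). The function name and the 4M default are illustrative, not LINSTOR code:

```python
def round_up_to_stripes(size_mib, stripes, extent_mib=4):
    """Round size_mib up so the extent count is a multiple of the stripe count."""
    extents = -(-size_mib // extent_mib)          # ceil(size / extent_size)
    extents = -(-extents // stripes) * stripes    # round extent count up to a stripe multiple
    return extents * extent_mib

# Examples from the explanation above (4 stripes, 4M extents):
print(round_up_to_stripes(10, 4))    # 16   (not 12)
print(round_up_to_stripes(17, 4))    # 32   (not 20)
print(round_up_to_stripes(2052, 4))  # 2064 (513 extents -> 516 extents)
```

Whether a given request happens to line up depends on the requested size, the DRBD metadata overhead, and the stripe count, which is presumably why some combinations (such as the 2-stripe case) appeared to work.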
ghernadi, thanks a lot for the detailed explanation of this issue! Yes, with 2 stripes everything works; with 3 and more it does not. The proposed manual way to repair volumes stuck in the "Resizing" state (linstor vd size) works for me, thanks!
LINSTOR v1.29.2
LVM thick backend, with volumes striped across 4 (or more) PVs in LVM.
Any resize operation gets stuck; the volume status in LINSTOR stays "Resizing".
In the debug logs I can see the finished phase "[LVM] resized", but the next phase, DRBD resizing, never happens.
LINSTOR tries to resize the already resized LVM volume again and raises an error.
If the volume is not striped, or is striped across only 2 PVs in LVM, everything works.
The same is true for the LVM thin backend with striping and for a ZFS raid0 pool.
You can find the satellite debug logs and the error report in the attachments.
linstor_volume_resizing.log
linstor_volume_resizing_error_report.txt