Storage: Add support for online disk growing of zfs and lvm block volumes (from Incus) #14211
Conversation
Heads up @mionaalex - the "Documentation" label was applied to this issue.
What would prevent growing live the
I'd like it if we could explore adding support for that; we support growing the raw disk file offline, so I'm not sure if there's a reason we can't do it online?
Needs a rebase too, please.
@simondeziel @tomponline re: online disk resize: I don't see an issue with adding online disk resizing for ceph. RBD has an exclusive lock feature and supports online resizing with RBD client kernel > 3.10.
Thanks for checking on ceph.
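For illustration only, here is a minimal Go sketch (not the driver code in this PR) of what growing a mapped RBD image via the `rbd` CLI could look like; the pool and image names are placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
)

// growRBDImage grows a mapped RBD image to newSizeMiB. Shrinking is refused by
// rbd unless --allow-shrink is passed, which this sketch deliberately omits.
func growRBDImage(pool, image string, newSizeMiB int64) error {
	// "rbd resize" changes the image size online; with a sufficiently recent
	// kernel RBD client, the mapped device picks up the new size.
	cmd := exec.Command("rbd", "resize",
		"--pool", pool,
		"--image", image,
		"--size", fmt.Sprintf("%d", newSizeMiB)) // --size is in MiB by default
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("rbd resize failed: %w (%s)", err, out)
	}
	return nil
}

func main() {
	// Placeholder pool and image names.
	if err := growRBDImage("mypool", "virtual-machines_v1", 11*1024); err != nil {
		fmt.Println(err)
	}
}
```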
@tomponline rebased and good to go. Do we want to include support for live resizing ceph disks with this PR or open up a separate issue and save it for later?
Let's try and do it as part of this PR, and then we can add a single API extension.
I've tested live resizing a Ceph RBD filesystem disk and it works as expected - it's just online resizing of Ceph RBD block volumes that doesn't work, which explains why I haven't been able to resize a Ceph-backed rootfs.
It doesn't look like we'll be able to add support for online growing of Ceph RBD root disks. Ceph-backed VMs have a read-only snapshot which can't be updated when the root disk size is updated (see below). The snapshot is used for instance creation. (lxd/lxd/storage/drivers/driver_ceph_volumes.go, lines 1332 to 1337 in 9ac2433)
Furthermore, online resizing for Ceph volumes is generally considered unsafe in LXD: (lxd/lxd/storage/drivers/driver_ceph_volumes.go, lines 192 to 205 in 9ac2433)
Rebased and good to go. In summary, we're adding support for online resizing (growing) of any zfs or lvm disks. Online resizing of Ceph RBD filesystems was possible before the changes in this PR, but we've confirmed that online resizing of Ceph RBD block volumes is not possible due to the read-only snapshot used during instance creation.
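As a rough, hypothetical illustration of the backing-device half of this (not the code from this PR), growing a ZFS zvol or an LVM logical volume from Go might look like the sketch below; the dataset and LV paths are placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
)

// growZvol grows a ZFS block volume (zvol) in place. volsize should be a
// multiple of the zvol's volblocksize.
func growZvol(dataset string, newBytes int64) error {
	out, err := exec.Command("zfs", "set",
		fmt.Sprintf("volsize=%d", newBytes), dataset).CombinedOutput()
	if err != nil {
		return fmt.Errorf("zfs set volsize: %w (%s)", err, out)
	}
	return nil
}

// growLV grows an LVM logical volume to newBytes (lvextend rounds up to the
// volume group's extent size).
func growLV(lvPath string, newBytes int64) error {
	out, err := exec.Command("lvextend",
		"-L", fmt.Sprintf("%db", newBytes), lvPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("lvextend: %w (%s)", err, out)
	}
	return nil
}

func main() {
	// Placeholder dataset and LV path.
	_ = growZvol("default/virtual-machines/v1.block", 11<<30)
	_ = growLV("/dev/lxdvg/virtual-machines_v1", 11<<30)
}
```

After the backing device grows, the running VM still has to be told about the new size (QEMU exposes this via its block_resize QMP command) before the guest can see it.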
Thanks for digging into this further :) Given my initial research, your new findings, and what I've seen in the LXD codebase, I believe it is theoretically possible to online resize (grow) Ceph RBD block volumes, dir, and .raw files. I think I have some more work to do for this PR.
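For the .raw file case specifically, growing in place can be as simple as extending the file. A minimal, hypothetical sketch (not this PR's implementation); the path is a placeholder:

```go
package main

import (
	"fmt"
	"os"
)

// growRawImage extends a raw disk image file to newBytes. Truncating to a
// larger size only grows the file (sparsely); anything smaller is refused to
// avoid destroying data.
func growRawImage(path string, newBytes int64) error {
	fi, err := os.Stat(path)
	if err != nil {
		return err
	}
	if newBytes < fi.Size() {
		return fmt.Errorf("refusing to shrink %q from %d to %d bytes", path, fi.Size(), newBytes)
	}
	return os.Truncate(path, newBytes)
}

func main() {
	// Placeholder path.
	if err := growRawImage("/var/lib/demo/v1/root.raw", 11<<30); err != nil {
		fmt.Println(err)
	}
}
```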
I don't think flattening the cloned image is a safe approach. From the docs:
So although it is possible to online grow a Ceph RBD-backed root disk, I found another problem: when we create a Ceph RBD volume, a read-only snapshot is created. This read-only snapshot is used as the clone source for future non-image volumes. The read-only (protected) property of the snapshot is a precondition for creating RBD clones.
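To illustrate the relationship described above, here is a hedged Go sketch of the generic RBD snapshot/protect/clone workflow via the `rbd` CLI, with placeholder names rather than LXD's actual naming or code:

```go
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		return fmt.Errorf("%v: %w (%s)", args, err, out)
	}
	return nil
}

// cloneFromImage mirrors the snapshot/clone relationship: the snapshot has to
// be protected (made read-only and undeletable) before RBD will clone from it.
func cloneFromImage(pool, image, snap, clone string) error {
	src := pool + "/" + image + "@" + snap

	// 1. Snapshot the source image.
	if err := run("rbd", "snap", "create", src); err != nil {
		return err
	}
	// 2. Protect the snapshot; cloning from an unprotected snapshot is refused.
	if err := run("rbd", "snap", "protect", src); err != nil {
		return err
	}
	// 3. Clone new volumes from the protected snapshot.
	return run("rbd", "clone", src, pool+"/"+clone)
}

func main() {
	// Placeholder names, not LXD's naming scheme.
	if err := cloneFromImage("mypool", "image-abc", "readonly", "virtual-machines_v1"); err != nil {
		fmt.Println(err)
	}
}
```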
That "initial image turned into a cloned read-only snapshot" flow really maps to my understanding of how it works with ZFS. Still not clear why/what's different with Ceph RBD volumes :/
For reference, here is the error I'm getting after modifying the behaviour to allow online growing of the root disk and adding a filesystem resize:

root@testbox:~# lxc config device set v1 root size=11GiB
Error: Failed to update device "root": Could not grow underlying "ext4" filesystem for "/dev/rbd0": Failed to run: resize2fs /dev/rbd0: exit status 1 (resize2fs 1.47.0 (5-Feb-2023)
resize2fs: Bad magic number in super-block while trying to open /dev/rbd0)
Same for
I don't mind (too much) having this feature land in a per-driver fashion. However, I suspect/hope that Ceph is the special case here and all our other drivers would support live growing. I didn't hear back from you regarding the easy-to-test
…ple servers Signed-off-by: Stéphane Graber <stgraber@stgraber.org> (cherry picked from commit 73a78c2f0cc188c602c88be8cfdc9bfcfb9df0ab) Signed-off-by: Kadin Sayani <kadin.sayani@canonical.com> License: Apache-2.0
…esize Signed-off-by: Stéphane Graber <stgraber@stgraber.org> (cherry picked from commit 81f9c4b915830322871bb49d6f04f3009f63d01a) Signed-off-by: Kadin Sayani <kadin.sayani@canonical.com> License: Apache-2.0
Signed-off-by: Kadin Sayani <kadin.sayani@canonical.com>
Signed-off-by: Kadin Sayani <kadin.sayani@canonical.com>
@tomponline mentioned that Powerflex is out of scope for this PR.
@kadinsayani From what I can see this may also help with container live resizing (for both growing and shrinking) on block-based drivers (i.e. lvm, ceph and zfs with zfs.block_mode).
Online shrinking is only possible for
@kadinsayani can we close this for now until you get a chance to look at this again?
This PR adds support for resizing (growing) VM disks without rebooting, when using ZFS or LVM storage backends.
Resolves #13311.
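For completeness, a minimal, hypothetical sketch (not LXD's QMP client) of how a running QEMU VM can be told about a grown disk via the block_resize QMP command; the socket path and node name are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// qmpBlockResize tells a running QEMU instance that a block node has a new
// size, so the guest sees the grown disk without a reboot.
func qmpBlockResize(socketPath, nodeName string, newBytes int64) error {
	conn, err := net.Dial("unix", socketPath)
	if err != nil {
		return err
	}
	defer conn.Close()

	enc := json.NewEncoder(conn)
	dec := json.NewDecoder(conn)

	// QMP sends a greeting first; the capabilities handshake must complete
	// before any other command is accepted.
	var msg map[string]any
	if err := dec.Decode(&msg); err != nil {
		return err
	}
	if err := enc.Encode(map[string]any{"execute": "qmp_capabilities"}); err != nil {
		return err
	}
	if err := dec.Decode(&msg); err != nil {
		return err
	}

	// Ask QEMU to expose the new size for the given block node.
	if err := enc.Encode(map[string]any{
		"execute": "block_resize",
		"arguments": map[string]any{
			"node-name": nodeName,
			"size":      newBytes,
		},
	}); err != nil {
		return err
	}
	if err := dec.Decode(&msg); err != nil {
		return err
	}
	if qmpErr, ok := msg["error"]; ok {
		return fmt.Errorf("block_resize failed: %v", qmpErr)
	}
	return nil
}

func main() {
	// Placeholder socket path and node name.
	if err := qmpBlockResize("/run/demo/v1.monitor", "demo_root", 11<<30); err != nil {
		fmt.Println(err)
	}
}
```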