
Virtual image size is larger than the reported available storage #3159

Closed
g7nguym opened this issue Apr 1, 2024 · 35 comments · Fixed by #3461
Labels: kind/bug, lifecycle/rotten

@g7nguym commented Apr 1, 2024

What happened:
I tried to upload an .iso image to a PVC via CDI. The ISO is nearly 5GB. The upload always fails with an error saying the virtual image size is larger than the reported available storage. I have tried multiple PVC sizes (12Gi, 64Gi, 120Gi), but none of them work.

What you expected to happen:
The ISO image should be uploaded to the PVC successfully.

How to reproduce it (as minimally and precisely as possible):
I ran the following command:
kubectl virt image-upload pvc win-2022-std-iso --size=120Gi --image-path=win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.49.172.185:31876 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
PVC default/win-2022-std-iso not found
PersistentVolumeClaim default/win-2022-std-iso created
Waiting for PVC win-2022-std-iso upload pod to be ready...
Pod now ready
Uploading data to https://10.49.172.185:31876

4.67 GiB / 4.67 GiB [==========================================================================================================================================================] 100.00% 2m13s

unexpected return value 400, Saving stream failed: Virtual image size 12886302720 is larger than the reported available storage 12884901888. A larger PVC is required.
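(For reference: the reported available storage of 12884901888 bytes is exactly 12 GiB, and the image's virtual size of 12886302720 bytes is about 1.3 MiB more than that - so CDI saw only 12 GiB of usable space despite the 120Gi in the command above.)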


Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml): 1.58.3
  • Kubernetes version (use kubectl version): 1.29.0
  • DV specification: N/A
  • Cloud provider or hardware configuration: kind
  • OS (e.g. from /etc/os-release): Rocky Linux 9.3
  • Kernel (e.g. uname -a): 5.14.0-362.24.1.el9_3.x86_64
  • Install tools: kubectl virt
  • Others: N/A
@akalenyu (Collaborator) commented Apr 1, 2024

Definitely weird - the error indicates that only 12Gi is available in the volume, even though you asked for 120Gi.
Is it possible that the storage provisioner (--storage-class=datavg-thin-pool) is providing a volume smaller than the request?
(Meaning there is no 120Gi available in the pool, but the volume creation still goes through.)

You could test this by creating a 120Gi PVC and a pod mounting it, then running something like:

bash-5.1# stat /pvcmountpath/ -fc %a
907
bash-5.1# stat /pvcmountpath/ -fc %f
974

(%a is the number of available blocks, which is what we care about; %f is the total number of free blocks.)
To get the size in bytes, multiply the block count by the block size (stat /pvcmountpath/ -fc %s).
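For example, with the illustrative numbers above, the available space would be 907 × 4096 = 3,715,072 bytes, about 3.5 MiB; a healthy 120Gi volume with 4096-byte blocks would have 31,457,280 blocks in total.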

@g7nguym (Author) commented Apr 1, 2024

Hi @akalenyu,
Thanks for taking the time to look into my issue.
I tested by creating a 128Gi PVC:
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       VOLUMEATTRIBUTESCLASS   AGE
slmitswinp1-pvc   Bound    pvc-36655653-b079-4872-b20f-e64bf8a5ae50   128Gi      RWX            datavg-thin-pool                           26m
I then mounted it into a pod:
[root@k8s-master-01 ~]# kubectl exec pods/nginx -- stat /var/www/html/ -fc %s
4096
[root@k8s-master-01 ~]# kubectl exec pods/nginx -- stat /var/www/html/ -fc %a
31068944
[root@k8s-master-01 ~]# kubectl exec pods/nginx -- stat /var/www/html/ -fc %f
32750812

It looks like the size of my PVC is only 4096 bytes?
The strange thing is that when I list the contents of the volume, there is already a disk.img file:
[root@k8s-master-01 ~]# kubectl exec pods/nginx -- ls -larth /var/www/html
total 28K
drwx------. 2 root root 16K Apr 1 11:19 lost+found
drwxr-xr-x. 3 root root 4.0K Apr 1 11:20 .
-rw-r--r--. 1 107 107 119G Apr 1 11:24 disk.img
drwxr-xr-x. 3 root root 18 Apr 1 11:32 ..

@akalenyu (Collaborator) commented Apr 1, 2024

Actually, the size of your PVC is around 120Gi: 31068944 available blocks * 4096 block size.
It is definitely weird that a disk.img already exists there, unless you tried the upload before creating the nginx pod.
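(Worked out: 31068944 × 4096 = 127,258,394,624 bytes ≈ 118.5 GiB, which is plausible for a 128Gi volume once filesystem overhead and reserved blocks are subtracted; the "total 28K" in the ls output suggests the 119G disk.img is sparse and barely consumes any of those blocks.)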

@alromeros (Collaborator) commented:
Maybe this case is similar to the one described in https://issues.redhat.com/browse/CNV-36769? If a first upload attempt failed for some unrelated reason (maybe the upload pod was force-deleted), then on subsequent retries the original disk.img will still be there, occupying space and preventing the upload from succeeding, as @akalenyu suggested.

@g7nguym (Author) commented Apr 1, 2024

I made some progress with this issue by using --volume-mode=FileSystem; that option works.
If the PVC is a block volume, it doesn't work, and I don't really have an explanation for this. Even if I delete the block PVC completely and create a new one, it always shows the "larger PVC is required" message.

@akalenyu (Collaborator) commented Apr 1, 2024

I made some progress with this issue by using --volume-mode=FileSystem; that option works. If the PVC is a block volume, it doesn't work, and I don't really have an explanation for this. Even if I delete the block PVC completely and create a new one, it always shows the "larger PVC is required" message.

Ah okay, if this is block, you could repeat the nginx experiment, but instead use blockdev --getsize64 /dev/pvcdevicepath.

@awels (Member) commented Apr 1, 2024

So I am wondering which version of the virtctl plugin you have. I see you are creating a PVC with kubectl virt image-upload pvc win-2022-std-iso --size=120Gi --image-path=win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.49.172.185:31876/ --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind instead of a dv, which IIRC should generate a message about not using DataVolumes, and I don't see that message.

@g7nguym (Author) commented Apr 1, 2024

Hi @awels,
Below is the version info. I installed the virt plugin using krew:
[root@k8s-master-01 ~]# kubectl virt version
Client Version: version.Info{GitVersion:"v1.2.0", GitCommit:"f26e45d99ac35743fc33d6a121b629e9a9af6b63", GitTreeState:"clean", BuildDate:"2024-03-05T20:34:24Z", GoVersion:"go1.21.5 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{GitVersion:"v1.2.0", GitCommit:"f26e45d99ac35743fc33d6a121b629e9a9af6b63", GitTreeState:"clean", BuildDate:"2024-03-05T21:32:21Z", GoVersion:"go1.21.5 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}

@g7nguym (Author) commented Apr 1, 2024

I tried another upload with virtio-win.iso, which is about 600MB. Both block and FileSystem PVCs work:
kubectl virt image-upload pvc virtio-win-iso-test --size=4Gi --image-path=/root/virtio-win.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.248.83.131:443 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
PVC default/virtio-win-iso-test not found
PersistentVolumeClaim default/virtio-win-iso-test created
Waiting for PVC virtio-win-iso-test upload pod to be ready...
Pod now ready
Uploading data to https://10.248.83.131:443

598.45 MiB / 598.45 MiB [========================================================================================================================================================] 100.00% 16s

Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading /root/virtio-win.iso completed successfully

@awels (Member) commented Apr 1, 2024

Interesting. Can you try Alex's suggestion of using blockdev to see the size of the device properly, in both the smaller case and the larger case? Maybe there is some overhead we are not aware of.

@g7nguym (Author) commented Apr 1, 2024

Interestingly enough, it now works with the Windows ISO as well, which didn't work before. Unfortunately, I cannot reproduce the issue anymore.
The storage backend I am using is LINSTOR.
kubectl virt image-upload pvc nginx-test-iso --size=64Gi --image-path=/root/win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.248.83.131:443 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
PVC default/nginx-test-iso not found
PersistentVolumeClaim default/nginx-test-iso created
Waiting for PVC nginx-test-iso upload pod to be ready...
Pod now ready
Uploading data to https://10.248.83.131:443

4.67 GiB / 4.67 GiB [==========================================================================================================================================================] 100.00% 2m11s

Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading /root/win22-std.iso completed successfully

@akalenyu (Collaborator) commented Apr 1, 2024

Hmm, maybe the LINSTOR CSI driver sometimes messes up the size calculation? IIRC you were doing --size=120Gi before; now you're using 64Gi.

@g7nguym (Author) commented Apr 1, 2024

I had been trying multiple sizes before: 40Gi, 64Gi, 120Gi. None of them worked. I will try out Portworx at a later time to see if it is more stable.
Thank you for all of your help with this issue.

@aglitke (Member) commented Apr 8, 2024

@kvaps Do you have any insight as to what might be happening with LINSTOR here?

@kubevirt-bot (Contributor) commented:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label on Jul 7, 2024
@akalenyu (Collaborator) commented:
I had been trying multiple sizes before: 40Gi, 64Gi, 120Gi. None of them worked. I will try out Portworx at a later time to see if it is more stable. Thank you for all of your help with this issue.

Hey, did you get a chance to try this with a different provisioner?

@kubevirt-bot (Contributor) commented:
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 14, 2024
@kubevirt-bot (Contributor) commented:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

@kubevirt-bot (Contributor) commented:
@kubevirt-bot: Closing this issue.


@kvaps (Member) commented Oct 10, 2024

/reopen

The issue still persists:

virtctl image-upload dv win10-iso --size=8Gi --image-path ~/Downloads/Win10_21H2_EnglishInternational_x64.iso --storage-class=replicated --uploadproxy-url=https://localhost:8443 --insecure
PVC tenant-kvaps/win10-iso not found
DataVolume tenant-kvaps/win10-iso created
Waiting for PVC win10-iso upload pod to be ready...
Pod now ready
Uploading data to https://localhost:8443

 5.48 GiB / 5.48 GiB [=========================================================================================================================================================] 100.00% 8m56s

You are using a client virtctl version that is different from the KubeVirt version running in the cluster
Client Version: v1.1.0
Server Version: v1.3.1
unexpected return value 400, Saving stream failed: virtual image size 8589942784 is larger than the reported available storage 8589934592. A larger PVC is required
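(The gap here is tiny: 8589942784 − 8589934592 = 8192 bytes, so the image's virtual size is exactly 8 KiB larger than the requested 8Gi.)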

kubevirt-bot reopened this on Oct 10, 2024
@kubevirt-bot (Contributor) commented:
@kvaps: Reopened this issue.


@kvaps (Member) commented Oct 10, 2024

Example error for --size=1Gi:

unexpected return value 400, Saving stream failed: virtual image size 1073750016 is larger than the reported available storage 1073741824. A larger PVC is required

Inside the container I can see:

/ # lsblk -bno SIZE /dev/drbd1014
1073750016
/ # qemu-img info /dev/drbd1014
image: /dev/drbd1014
file format: raw
virtual size: 1 GiB (1073750016 bytes)
disk size: 0 B
Child node '/file':
    filename: /dev/drbd1014
    protocol type: host_device
    file length: 1 GiB (1073750016 bytes)
    disk size: 0 B

1073750016 bytes - that is even more than the requested 1Gi (1073741824 bytes), by exactly 8192 bytes.

@kvaps (Member) commented Oct 11, 2024

@awels @aglitke any idea where this reported available storage of 1073741824 comes from?

From the code I was pretty sure that it uses qemu-img info, but that reports 1073750016, not 1073741824.

@kvaps (Member) commented Oct 11, 2024

I just reported this issue to LINSTOR: LINBIT/linstor-server#421

But does this check really make sense? When I create a new volume, I expect that it may come out larger anyway.

@kvaps (Member) commented Oct 11, 2024

Maybe we should add a blockOverhead option that works the same way as filesystemOverhead, but for block devices?

@awels (Member) commented Oct 11, 2024

The main reason we have the fsOverhead is that filesystems themselves take up space on the block device. For a block device we don't have this overhead. The way we get the available space for a block device is /usr/sbin/blockdev --getsize64 <device path> - the function is here: https://github.com/kubevirt/containerized-data-importer/blob/main/pkg/util/file.go#L121-L142 - and then we compare that to the information we got from qemu-img.
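In essence, the lookup looks something like this (a hand-written sketch for illustration, not the actual CDI source; the real function in pkg/util/file.go adds error wrapping and logging):

package util

import (
	"os/exec"
	"strconv"
	"strings"
)

// getBlockDeviceSize reports the usable size of a block device in bytes
// by shelling out to blockdev --getsize64, the same mechanism CDI uses
// to discover available space on block volumes.
func getBlockDeviceSize(path string) (int64, error) {
	out, err := exec.Command("/usr/sbin/blockdev", "--getsize64", path).CombinedOutput()
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(out)), 10, 64)
}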

@kvaps (Member) commented Oct 11, 2024

@awels it seems that there is a bug: we should check whether the actual device size is smaller than reported, not larger.

A smaller image can be placed on a larger drive, but not vice versa.

@kvaps (Member) commented Oct 11, 2024

In my case blockdev reports the same size as qemu-img:

/ # qemu-img info /dev/drbd1000
image: /dev/drbd1000
file format: raw
virtual size: 1 GiB (1073750016 bytes)
disk size: 0 B
Child node '/file':
    filename: /dev/drbd1000
    protocol type: host_device
    file length: 1 GiB (1073750016 bytes)
    disk size: 0 B
/ # blockdev --getsize64 /dev/drbd1000
1073750016

And this size is larger than the 1Gi requested in the PVC (1073741824 bytes).

@kvaps (Member) commented Oct 11, 2024

It's up for consideration: #3458

@kvaps (Member) commented Oct 12, 2024

@awels LINSTOR creates a block device slightly larger than the size requested in the PVC. I think this is an issue for CDI.

If I understand correctly, there is no easy way to prepare a volume with exactly the requested size using LINSTOR and DRBD.

@ghernadi please correct me if I'm wrong about this statement ^^

So for now I don't understand on which side this issue should be fixed: CDI or LINSTOR?

@phoracek (Member) commented Oct 13, 2024

Hello. I'm getting the same issue on K3s with the rancher.io/local-path provisioner.

No matter how much I increased the requested storage, the import always failed with "Virtual image size 34359738368 is larger than the reported available storage 32804339539. A larger PVC is required."

Never mind, I just hadn't increased it enough.

@ghernadi commented:
@awels LINSTOR creates a block device slightly larger than the size requested in the PVC. I think this is an issue for CDI.

If I understand correctly, there is no easy way to prepare a volume with exactly the requested size using LINSTOR and DRBD.

@ghernadi please correct me if I'm wrong about this statement ^^

So for now I don't understand on which side this issue should be fixed: CDI or LINSTOR?

This reminds me of a similar issue we had with our Proxmox plugin, where Proxmox also wants exactly the same size when moving an existing Proxmox volume into a DRBD resource.
For this reason LINSTOR introduced the property DrbdOptions/ExactSize. When this property is set to True (default is False), DRBD's .res file gets the size ${size}; property set, which causes the resulting DRBD device to be exactly as large as the user requested (i.e. LINSTOR's volume-definition size), even if the backing device is a bit larger (due to rounding up to the next extent size, for example).

Feel free to use this property for such migration volumes, but please make sure to DELETE THIS PROPERTY once the migration is finished.
It is not recommended to have this property active in production; it is only meant to be set during the migration.
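(Setting it would look something like linstor resource-definition set-property <resource> DrbdOptions/ExactSize True - the exact command shape is an assumption here, so check it against your LINSTOR client's documentation.)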

We have tried in the past to manage DRBD's size property automatically, but we always ran into issues. There are quite a few scenarios that are more complicated internally than one would think, making the usage of DRBD's size property quite tricky (especially during resize events, (partially) failing resizes, etc.).


Alternative solution: tell LINSTOR to configure DRBD with external metadata. This can of course also stay in production.

This alternative works simply because if the user requests a 1GB volume, LINSTOR will create two devices. Assuming LVM with default settings as the backing storage, one device will be exactly 1GB (which DRBD and LINSTOR will also report as the "usable size"), and the other a 4MB device, of which DRBD only uses a few KiB for its own metadata. So the "bit more space" (i.e. the unused 3.something MiB) is now "trapped" in the external metadata device and therefore cannot be part of the usable space.

@kvaps (Member) commented Oct 14, 2024

@ghernadi many thanks for such a detailed comment and for sharing your experience 🙏

It's very interesting to learn about this option. But I don't think we can teach CDI to add DRBD-specific options, as it works purely with Kubernetes PVCs.

Besides, I am sure that this issue also affects other storage providers that base their logic on LVM or ZFS (e.g. topolvm and democratic-csi).

I am proposing a fix to CDI that takes requestImageSize into account only for filesystem volumes. Here is the PR: #3461

@awels could you share your thoughts on this, please?

@awels (Member) commented Oct 14, 2024

Okay, so I think I see what is going on. Let's take the 1Gi example and follow the flow of the calculateTargetSize() function.
getAvailableSpaceBlockFunc returns the value from blockdev --getsize64, which is 1073750016, so it sets targetQuantity to that value. So far so good. Then we see that dp.requestImageSize != ""; it is in fact 1073741824 (1Gi). So minQuantity comes out to 1Gi.

The reason we do this check is for 'shared' filesystem storage like hpp or nfs-csi, where the reported space is the entire disk and we don't want to use all of it. We want to use exactly what was requested, so we pick the minimum of the two. In other words, we discovered block device size > requested size, so we picked the requested size as the 'available' space.

This value is then stored as the available space for the virtual disk. In the validation routine, we run qemu-img info <device>, which returns 1073750016 bytes as the virtual size. This is larger than the reported available size, and thus the validation fails.

So I think the most correct fix is to move the code that clamps the 'available' space to the minimum inside the else branch that is associated with filesystem volumes. This way, for block volumes, the targetSize returned is the value found from the call to blockdev.
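Sketched out, the reordering would look roughly like this (an illustration with simplified, hypothetical names - not the actual calculateTargetSize code or the final PR diff):

package sketch

// targetSize illustrates the proposed fix: clamp the discovered space to
// the requested image size only for filesystem volumes, where the reported
// free space may be shared with other workloads (hpp, nfs-csi). For block
// volumes, trust the device size from blockdev --getsize64, even when the
// provisioner rounded it up past the request.
func targetSize(isBlock bool, discoveredSpace, requestedSize int64) int64 {
	if isBlock {
		// A block PVC is dedicated to this import, so its actual size
		// is the available space.
		return discoveredSpace
	}
	// Filesystem PVC: never claim more than what was requested.
	if requestedSize > 0 && requestedSize < discoveredSpace {
		return requestedSize
	}
	return discoveredSpace
}

That is essentially the direction the fix linked above (#3461) takes.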

@kvaps (Member) commented Oct 14, 2024

Yeah, my bad, sorry - I just rebased the PR.
