VM Disk performance issue #10468
Unanswered · akrasnov-drv asked this question in Q&A
Replies: 1 comment 3 replies
-
Hello, I have never noticed issues using linked clones, and I suspect the culprit may be something else, perhaps environmental; otherwise we would have had people complain, since linked clones have always been the default. To answer your question: no, you cannot limit the number of images per shared base image (i.e. how many VMs you can spawn from a given template). It would be worth doing some in-depth monitoring of the hypervisor, especially of I/O metrics, to see what is going on when it feels slow.
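For that kind of I/O monitoring you would normally reach for `iostat -x` or `virt-top` on the KVM host, but the raw numbers also live in `/proc/diskstats`. Below is a minimal sketch (names and the `sd`/`nvme`/`vd` prefix filter are my own assumptions, not CloudStack tooling) that parses one line of that file per the kernel's documented field layout; sector counts are in 512-byte units:

```python
# Hypothetical sketch: parse /proc/diskstats to spot devices that are
# busy while the guests feel slow. Field layout follows the Linux
# admin-guide iostats documentation; sectors are 512-byte units.

def parse_diskstats_line(line):
    """Return a dict of basic I/O counters for one /proc/diskstats line."""
    fields = line.split()
    return {
        "device": fields[2],
        "reads": int(fields[3]),                   # reads completed
        "read_bytes": int(fields[5]) * 512,        # sectors read
        "writes": int(fields[7]),                  # writes completed
        "written_bytes": int(fields[9]) * 512,     # sectors written
        "io_time_ms": int(fields[12]),             # time spent doing I/O
    }

if __name__ == "__main__":
    with open("/proc/diskstats") as f:
        for line in f:
            stats = parse_diskstats_line(line)
            # Only show whole disks likely to back VM storage.
            if stats["device"].startswith(("sd", "nvme", "vd")):
                print(stats)
```

Sampling this twice and diffing the counters gives per-interval throughput, which is roughly what `iostat` reports.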
-
Hi,
We see degraded performance in our VMs in CloudStack (4.20.0.0) during compilation and other heavy I/O operations.
I have set our compute offerings to use local disk; nevertheless, with 20-40 VMs per host, performance is still poor.
As I understand it, this happens because of the shared base-disk architecture CloudStack uses with QCOW2 images.
According to information I found about QCOW2 backing images:
Case-Based Limits:
• Light workloads: 10-20 images sharing one base can work fine.
• Heavy workloads (e.g., database, high IOPS): Keep it below 5 per base.
• Cloud setups (OpenStack, CloudStack): Optimized storage backends (Ceph, LVM) are preferable.
Please advise if there is a way to limit the number of VMs spawned from a single base image, or another recommended approach.
Thanks,
Alex.