
Non-independent disk with vSphere Storage for Kubernetes #488

Open
mkretzer opened this issue Jun 7, 2018 · 4 comments

Comments


mkretzer commented Jun 7, 2018

Hello,

we need non-independent disks mapped with vSphere Storage for Kubernetes. Right now, when we map a volume claim to a pod, an independent disk gets created.

This means we cannot back up this disk with Veeam.

Since we do not create pods on a daily basis, our pods remain quite static for a long time, and we can afford to lose the incremental backup when re-scheduling happens.
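
For reference, our volumes are provisioned through the in-tree vSphere provisioner, roughly like this (a minimal sketch; the storage class name, datastore, and size are placeholders, not our exact config):

```yaml
# Minimal sketch of the setup (names and values are placeholders).
# The in-tree provisioner creates the VMDK and attaches it to the node VM
# as an independent disk, which is what breaks the Veeam backup.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-thin                # placeholder name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: Datastore01            # placeholder datastore
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                    # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-thin
  resources:
    requests:
      storage: 10Gi
```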

How can we implement this? VMware support is trying to help us right now, but currently they have no one who is qualified for this product (Ticket 18822315306).

Markus


embano1 commented Jun 8, 2018

Hi Markus,

This is currently a known limitation, see #302. There are several approaches (workarounds) to tackle this today and in the future, but as of now, traditional backup solutions won't work in Kubernetes environments.

cc @tusharnt and storage engineering here.

Please also feel free to reach out to SIG VMware on Slack (https://github.com/kubernetes/community/tree/master/sig-vmware) to discuss this further.

mkretzer (Author) commented

Can someone explain why a non-independent disk is not even an option? We have an application which does not host that much data and whose pods are not rescheduled/created often. Even if our backup solution had to re-read all data with every backup, that would be fine for us.


embano1 commented Jun 17, 2018

@mkretzer Due to the way independent disks are currently handled in vSphere, none of the major backup providers support taking backups of independent disks, mainly due to missing snapshot support, IIRC.

Currently you have to work around that until vSphere first-class disks (FCD), introduced in v6.5, are fully supported by all backup providers. Workarounds could be:

  • Using central storage for the data / shipping backup data to central storage
  • Using application replication mechanisms to protect against single pod failure
  • Using file system level replication, e.g. rsync to a backup proxy node (see the sketch below)
  • Using vSphere replication, as it is capable of backing up independent disks [1], but the workflow might need some tweaking

[1] https://storagehub.vmware.com/t/site-recovery-manager-3/vsphere-replication-faq/
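
For the file-system-level option, a rough sketch could look like the CronJob below (the image, schedule, paths, and backup-proxy host are placeholders; since vSphere volumes are ReadWriteOnce, the job also has to land on the same node as the workload, or you run rsync as a sidecar instead):

```yaml
# Rough sketch only: periodically rsync the PVC contents to a backup proxy.
# Image, schedule, host, and all names are placeholders.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pvc-rsync-backup
spec:
  schedule: "0 2 * * *"                  # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: rsync
              image: alpine:3.8          # placeholder; needs rsync + ssh client installed
              command:
                - sh
                - -c
                - rsync -az /data/ backup@backup-proxy.example.com:/backups/app-data/
              volumeMounts:
                - name: app-data
                  mountPath: /data
                  readOnly: true
          volumes:
            - name: app-data
              persistentVolumeClaim:
                claimName: app-data      # the PVC holding the data to protect
```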

mkretzer (Author) commented

That was not the question. Why is it not possible to attach the disk as non-independent? That would be the best workaround. Sure, it would have downsides as well (for example, CBT will not work after re-attachment), but it would be worth it!
