
Offload CSI snapshots to S3 without privileged access / hostpath #7188

Closed · dmaggo opened this issue Dec 7, 2023 · 15 comments

@dmaggo commented Dec 7, 2023

Describe the problem/challenge you have

Offloading CSI snapshots to another backup location such as S3 requires privileged mode and access to a hostPath. See https://velero.io/docs/main/csi-snapshot-data-movement/
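
For context, this is roughly where that requirement comes from: the node-agent DaemonSet reads pod volume data through a hostPath mount of the kubelet's pods directory, so on distributions that gate hostPath volumes behind privileged mode it has to run privileged. The fragment below is only an illustrative sketch, not the exact upstream manifest; the volume name `host-pods`, the mount path `/host_pods`, and the image tag are assumptions.

```yaml
# Illustrative node-agent-style DaemonSet pod spec fragment (sketch, not the
# manifest generated by `velero install`).
spec:
  containers:
    - name: node-agent
      image: velero/velero:v1.12.0    # example tag, not prescriptive
      securityContext:
        privileged: true              # needed where the platform gates hostPath behind privileged mode
      volumeMounts:
        - name: host-pods
          mountPath: /host_pods       # pod volume data is read from here
  volumes:
    - name: host-pods
      hostPath:
        path: /var/lib/kubelet/pods   # the hostPath this issue is about
```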

Describe the solution you'd like

In cases where the storage provider supports native snapshots (e.g. the vSphere CSI driver), wouldn't it be possible for Velero to avoid accessing the volume over a hostPath?
Portworx does it this way, for example: https://docs.portworx.com/portworx-backup-on-prem/use-px-backup/backup-restore/create-backup/backup-csi-snapshots#offload-csi-snapshots-to-back-up-location

Or am I missing something, and are there reasons why Velero cannot go this way?

Thanks

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" at the top right of this comment to vote.

  • 👍 for "The project would be better with this feature added"
  • 👎 for "This feature will not enhance the project in a meaningful way"
@draghuram (Contributor)

CloudCasa also offloads snapshots to S3 by directly creating a PVC from the snapshot and mounting it in a mover Pod. This way, you don't need privileged access. I think Velero does create a temporary Pod, but the actual backup still happens from the node-agent Pod, which requires privileged access. I believe this allows the existing file system backup code to be reused, but I will let the maintainers chime in.
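
For anyone unfamiliar with that pattern, below is a minimal sketch of it using only standard Kubernetes/CSI APIs: a PVC whose dataSource is the VolumeSnapshot, mounted read-only by an unprivileged mover Pod. This is not CloudCasa's or Velero's actual implementation; all names, the storage class, and the mover image are placeholders, and the upload-to-S3 logic itself is out of scope.

```yaml
# Sketch: expose a CSI snapshot to an unprivileged pod without any hostPath.
# All names (snapshot, PVC, storage class, image) are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mover-clone
spec:
  storageClassName: example-csi-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-volumesnapshot          # the snapshot taken for the backup
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi                  # must cover the snapshot's restore size
---
apiVersion: v1
kind: Pod
metadata:
  name: mover
spec:
  securityContext:
    runAsNonRoot: true               # no privileged mode, no hostPath anywhere
    runAsUser: 1000
  containers:
    - name: mover
      image: example.org/datamover:latest   # hypothetical image that streams /data to S3
      securityContext:
        allowPrivilegeEscalation: false
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mover-clone
```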

@dmaggo (Author) commented Dec 8, 2023

In our company, enabling privileged access on our TKGi infrastructure violates our compliance requirements, so I cannot use the data mover. Unfortunately, this leaves us with the copied (backup) PVs in vSphere CNS, and you cannot create an unlimited number of such copies.

@Lyndon-Li (Contributor)

@dmaggo If you are a TKG user, please contact TKG support; they will be able to give you more and better help.

@dmaggo (Author) commented Dec 8, 2023

We have no support for Velero because Velero is not part of TKGi. We use it as an independent product.

@Lyndon-Li (Contributor)

So in your case, even hostPath access without privileged mode (if it were available) is not allowed, right?

@dmaggo (Author) commented Dec 8, 2023

That wouldn't be nice either, but we could at least argue for it internally. Then we wouldn't have to enable "Allow Privileged" for all TKGi plans, which affects ALL Kubernetes clusters. However, it would be best if Velero could use the native snapshot feature directly for data movement.
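
As a side note on that middle ground: in vanilla Kubernetes a hostPath volume does not by itself require privileged: true; what typically blocks it is the cluster's pod security policy/admission, which on TKGi is tied to the Allow Privileged plan setting. A minimal sketch of what hostPath access without privileged mode could look like (whether it is admitted is cluster-specific, and the names here are assumptions):

```yaml
# Sketch: mounting the kubelet pods directory without requesting privileged mode.
# Admission still depends on the cluster's pod security configuration.
spec:
  containers:
    - name: node-agent
      image: velero/velero:v1.12.0          # example tag
      securityContext:
        privileged: false                   # no privileged mode requested
      volumeMounts:
        - name: host-pods
          mountPath: /host_pods
  volumes:
    - name: host-pods
      hostPath:
        path: /var/vcap/data/kubelet/pods   # TKGi kubelet path (see the docs excerpt quoted later in this thread)
```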

@Lyndon-Li (Contributor)

For native snapshots, I think that already exists: Velero + vsphere-plugin, though the vsphere-plugin is out of the scope of the Velero project.

@dmaggo (Author) commented Dec 8, 2023

I don't know if we're talking at cross purposes. Yes, for taking the volume snapshot we already use Velero + vsphere-plugin. BUT Velero can't move the PVC backup data to S3 afterwards WITHOUT privileged access and a hostPath.

It is as draghuram has already said; the way Velero does it is the point:

> CloudCasa also offloads snapshots to S3 by directly creating a PVC from the snapshot and mounting it in a mover Pod. This way, you don't need privileged access. I think Velero does create a temporary Pod, but the actual backup still happens from the node-agent Pod, which requires privileged access. I believe this allows the existing file system backup code to be reused, but I will let the maintainers chime in.

@Lyndon-Li (Contributor) commented Dec 8, 2023

> Yes, for taking the volume snapshot we already use Velero + vsphere-plugin. BUT Velero can't move the PVC backup data to S3 afterwards

Velero + vsphere-plugin DOES move data to S3 (WITHOUT privileged access or a hostPath).

@dmaggo (Author) commented Dec 8, 2023

From the documentation: https://velero.io/docs/main/csi-snapshot-data-movement/

> VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS)
>
> You need to enable the Allow Privileged option in your plan configuration so that Velero is able to mount the hostpath.
>
> The hostPath should be changed from /var/lib/kubelet/pods to /var/vcap/data/kubelet/pods:
>
> ```yaml
> hostPath:
>   path: /var/vcap/data/kubelet/pods
> ```
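
Concretely, that path change lands in the volumes section of the node-agent DaemonSet. A minimal sketch, assuming the volume is named host-pods as in a typical Velero install:

```yaml
# Sketch of the node-agent DaemonSet volumes entry after the TKGi path change.
volumes:
  - name: host-pods
    hostPath:
      path: /var/vcap/data/kubelet/pods   # was /var/lib/kubelet/pods
```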

@Lyndon-Li (Contributor)

Velero + vsphere-plugin doesn't go through CSI snapshot data movement, so it doesn't require a hostPath.

@dmaggo (Author) commented Dec 11, 2023

Maybe I'm misunderstanding this, but even the "Install Velero vSphere Plugin" documentation says that "Allow Privileged" must be enabled. Can you or someone else please explain what the misunderstanding is?

[screenshot of the vSphere plugin installation documentation]

@Lyndon-Li (Contributor)

@draghuram
Velero + vsphere-plugin doesn't go down the same path as the Velero data mover, so it doesn't require hostPath access. This is all I know; that document is outside my knowledge and the scope of Velero upstream. Please confirm it with TKG support.

@Lyndon-Li (Contributor)

About the other topic (running the data mover in a pod): we are planning to do something similar in a larger scope to solve some additional problems; see issue #7198.

@Lyndon-Li (Contributor)

It looks like we have reached agreement on both topics, so I will close this issue and keep #7198 open.
Feel free to reopen this issue for any further requests.
