enable velero backups by using local instead of hostpath #85
Comments
I'm looking at this too; I would LOVE to use Velero with local-path-provisioner, especially with k3s. @damoon have you made the change mentioned and tried it with success? I'm going to try to do it this week but wanted to know if you saw any issues yourself.
I have not implemented or tested anything yet. For now I use https://github.com/maorfr/cain to back up Cassandra.
In order to enable Velero backups, I implemented this change locally. It seems to work fine. I didn't replace the use of hostPath in the "helper pods"; I wasn't sure if that makes sense. It might be useful to replace those as well, if only to not depend on the availability of hostPaths. Are you interested at all in a pull request with my changes? It would need some further work on documentation to make it complete.
@damoon I remember the reason we didn't use Local volumes at first: a Local volume requires an existing directory, while HostPath can create it automatically for you. But we later introduced a helper pod to handle directory creation anyway, so it shouldn't matter now. @ariep PRs are welcome, drafts are fine too. You said you didn't change the usage of HostPath, so which part did you change?
I am very interested in being able to use Velero with the local-path-provisioner.
OK good, I opened a pull request with only the minimal changes I made thus far. It would require more work, at least in documentation.
I did change the PersistentVolume objects that are created, replacing HostPath by Local. I didn't change the temporary hostPaths in the helper pods though.
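For readers unfamiliar with the difference, here is a rough sketch of what the two flavors of generated PV could look like. All names, sizes, and paths below are illustrative, not taken from the provisioner's actual output; note that a `local` source additionally requires `nodeAffinity` pinning the volume to the node that holds the directory.

```yaml
# Illustrative PV with a hostPath source (the current behavior)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example-hostpath          # illustrative name
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  hostPath:
    path: /opt/local-path-provisioner/pvc-example   # illustrative path
---
# The same volume using a local source instead; nodeAffinity is mandatory here
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example-local
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  local:
    path: /opt/local-path-provisioner/pvc-example
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1              # illustrative node name
```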
@ariep The code part of the PR looks good to me. Thanks. Can someone else give the PR a try? Make sure it works for Velero at least.
We found an issue with local volumes when using RKE.
What does the issue look like?
@damoon IIRC, the path cannot be found after being created on the host (by the helper pod) while using RKE, so the Pod gets stuck in the Pending state.
Perhaps this can be optional?
cannot be found by what, exactly?
@BenTheElder We will need to look into the issue more to find out why. Maybe we can make it optional. So does KIND prefer local volumes too?
well, the problem was probably the missing directory on the host.
@schmitch The directory creation part is already there, since we're deploying the helper pod to create the directory. It seems the problem is that RKE is containerized and might have issues with local volumes. I will take a deeper look later to see if it's an RKE issue.
Yes, confirmed it's an RKE issue.
what if there is a daemonset that creates the directory?
@yasker Any updates on this?
@yasker Bump for updates!
I would also like this functionality, and give a +1 for making it optional. I am also looking to use Rancher / RKE, so I had a look into how to get Kubernetes local volumes to work. I posted a reply on the Rancher issue: to get local volumes working on an RKE cluster, `extra_binds` must be set for the kubelet. Note: the following config will not fix this provisioner unless local volume support is re-added.

```yaml
services:
  kubelet:
    extra_binds:
      - "/opt/local-path-provisioner:/opt/local-path-provisioner"
```

The directory needs to be bind-mounted because the RKE kubelet runs in a container and otherwise cannot see it on the host. I don't know how to make this as seamless a process on Rancher as possible, but it would still be good if it could be made to work with some configuration tweaks.
Any update on this? I would love to use Velero with local-path-provisioner on k3s.
I am also interested in this feature. Any updates would be great.
Need it too, up
👍 Bumping this again.
Had the same trouble with velero. Would like this too 👍
I made a fork with the cherry-picked commit from #91 here: https://github.com/kmjayadeep/local-path-provisioner Docker image is pushed to: I have tested Velero backup using restic in my k3s setup and it seems to work fine. Those who really need this feature, feel free to use the above tag.
For anyone trying to move from the hostPath to the Local volume type: the easy way to change the config in k3s is to point it at the kmjayadeep fork, and to go back to the original (k3s 1.23 version) afterwards. When set to the Local type you have to recreate existing volumes (they are not updated automatically).
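In case it helps anyone, switching the provisioner image on k3s roughly amounts to patching the image of the bundled Deployment in `kube-system`. The fragment below is a sketch; the image tag is a placeholder, not a published tag, and k3s may reapply its bundled manifest on restart, so the change may need to be made in the server's manifests directory to stick.

```yaml
# Fragment of the local-path-provisioner Deployment (kube-system) on k3s.
# Swap the image to the fork; point it back at the stock image to revert.
spec:
  template:
    spec:
      containers:
        - name: local-path-provisioner
          # Placeholder reference - use whatever tag the fork actually publishes
          image: kmjayadeep/local-path-provisioner:placeholder-tag
```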
Here's another fork which uses HostPath for persistent local storage: https://open.greenhost.net/stackspin/local-path-provisioner/
The code change is still remarkably small. Would it be a plausible path forward to allow configuring whether to use Local volumes vs HostPath as part of the config? That way RKE could keep using HostPath while allowing us Velero users to make backups.
If that is a path forward the maintainers would like (@yasker?), I'm happy to implement it!
Linking a project I made to help me migrate my home-lab cluster volumes to use local volume types. It has the ability to update the PVCs defined on flux/k3d Helm chart resources. Here is the repo: https://github.com/AnthonyEnr1quez/local-path-provisioner-volume-converter It can be run using the binary in the repo releases. Make sure to back up any data that you don't want to lose before using it. I haven't had any data loss, but that's just my use case. Let me know if you choose to use it! If any issues are encountered, log an issue in the repo.
Although this issue is now closed, the approach used leaves much to be desired. One now has to set an annotation on any PVC to make sure you get a local volume instead of a hostPath one. This is very unfortunate in my opinion: on a cluster with workloads from various sources, it's a pain to have to add this annotation on all PVCs in myriad locations. Think only of third-party Helm charts that don't even have a Helm value for setting arbitrary annotations on the generated PVCs.

Generally speaking, this means implementation-specific information about the cluster (local volumes) leaks through the API interface to the cluster workload. A PVC really shouldn't need to care about that.

I appreciate that in some cases it can be useful to have hostPath volumes and local volumes side by side. Would it be possible to have both the annotation approach as now implemented, and a global toggle to determine the default in case of no annotation? In fact, if you have the global toggle, the annotation feature is not really necessary, because I think you could just run two copies of the provisioner.

For now, I'm afraid in our project we'll have to keep using our fork of local-path-provisioner.
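To illustrate the pain point above: every PVC would need something along these lines. The annotation key shown here is a hypothetical stand-in; the real key is whatever the merged feature documents, so check the project README before copying this.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  annotations:
    # Hypothetical annotation key, for illustration only - the actual key
    # is defined by the merged feature in the project's documentation.
    example.local-path/volume-type: local
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```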
Hi @kmjayadeep, did you try this for backup of data as well (database, MinIO, etc.)? For me the manifest backup works as expected, but the data backup is not working:

```
resource: /persistentvolumes
name: /pvc-2c5529ee-43bb-4559-b3f5-0168c255142c
error: /unable to get valid VolumeSnapshotter for "rancher.io/local-path"
```

Any input on the above issue?
By using local instead of hostPath it would be possible to use Velero with restic for backups.
Velero with restic for volume backups cannot back up hostPath volumes, but local volumes are supported.
I use Cassandra with rancher/local-path-provisioner for volumes to get bare-metal disk performance, but backups are also nice...
velero limitations
https://github.com/vmware-tanzu/velero/blob/master/site/docs/master/restic.md#limitations
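One related detail worth noting: with the restic integration, Velero (in the versions current around this thread) only backs up volumes that are explicitly opted in via a pod annotation. A sketch with illustrative pod and volume names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cassandra-0                    # illustrative name
  annotations:
    # Tells Velero's restic integration which of this pod's volumes to back up
    backup.velero.io/backup-volumes: data
spec:
  containers:
    - name: cassandra
      image: cassandra:3.11
      volumeMounts:
        - name: data
          mountPath: /var/lib/cassandra
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-cassandra-0
```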
host-path vs local
https://kubernetes.io/docs/concepts/storage/volumes/#local
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
As far as I have read the code, the only place that would need to change is
local-path-provisioner/provisioner.go
Line 212 in 655eac7
Does a technical reason exist not to use local?
Would it break current deployments to have a mixture of volume types?
Is a merge request welcome?