Ability to create multiple backup.tar.gz based on the input array of directories #29
The reason I'm avoiding multiple sidecars is the logistics of scheduling the backups: I don't think it's a good idea to have multiple sidecars backing up at the same time.
I personally run a setup where one host runs 7 applications that are backed up by 7 distinct containers on 7 distinct schedules (all of them are daily, so it's pretty arbitrary), which is probably why things work as they do right now. Your proposal makes sense, however. I think it could be implemented so that the value handled at
docker-volume-backup/cmd/backup/main.go, line 90 in ad7ec58
becomes a list of files instead.
Would this API make sense to you? The downside in this scenario is that all apps are down while the archives are being created, and you lose granularity in starting/stopping containers while their data is being backed up. This is the reason I ended up with the solution described at the beginning, by the way: one of my volumes is so big that it takes ~2 minutes to create the archive, and I didn't want all of my services to be down during that time.
Yeah, you have a point there; some of my backups are in the tens of GBs, so that would take a while. And as you say, the rest of my "daily" group would be down while this was going on. Not sure if it's possible to combine your suggestion with multiple matching labels?
But otherwise, for smaller, less important containers, your suggestion makes sense and I could definitely use that functionality.
Considering you can already run multiple containers in parallel when you need granularity on stopping, I think coming up with a rather brittle/confusing API that conflates backup sources and stop labels isn't really a good option here. Adding support for multiple sources is a good idea, though, which I'd like to add as soon as I find the time. If you (or someone else) would like to work on this, let me know and I'm happy to help get it merged.
I was thinking about this further, and there is another API challenge to solve when implementing this: backing up multiple sources into multiple archives means we also need to define a way of specifying multiple target filenames. Right now, consumers set a single target filename. I can see two options right now: making …
I think this again pushes the complexity a bit too high; I quite like how simple this image is to use.
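On the multiple-target-filenames question, a templated filename needn't add much complexity: a single placeholder substitution would do. The `{source}` placeholder here is purely hypothetical, just to illustrate the idea:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// expandFilename derives one archive name per source from a single
// template. "{source}" is a hypothetical placeholder, not an existing
// docker-volume-backup feature.
func expandFilename(template, source string) string {
	return strings.ReplaceAll(template, "{source}", filepath.Base(source))
}

func main() {
	tmpl := "backup-{source}.tar.gz"
	for _, src := range []string{"/backup/app1", "/backup/app2"} {
		fmt.Println(expandFilename(tmpl, src))
		// prints backup-app1.tar.gz, then backup-app2.tar.gz
	}
}
```

Consumers who set a plain filename without the placeholder would keep today's single-archive behavior, so it could be backwards compatible.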
After moving this around in the back of my head for a little longer, I think this is how it could work:
Open questions:
Something like https://github.com/fsnotify/fsnotify could trigger a config reload.
This is now possible as of
As far as I can tell, it works by taking everything in
/backup/*
and adding it to a single tar.gz file. My use case is as follows:
I run this in docker-compose separately from the running containers.
All my containers' data dirs are located in, let's say:
/docker-volumes/app1
/docker-volumes/app2
etc. I have grouped them by schedule for backups, e.g. nightly, weekly, etc., and have an instance of docker-volume-backup running per schedule, mapping the relevant volumes to them.
I've gotten the label feature to work brilliantly: stopping all the "nightly" ones at night, backing them up, then starting them back up.
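For context, the per-schedule setup described above looks roughly like this in docker-compose. This is a sketch, not a verified config: the image name, `BACKUP_CRON_EXPRESSION`, and `BACKUP_FILENAME` are assumed from offen/docker-volume-backup's docs and may differ across versions.

```yaml
# Sketch only: check your image's documentation for exact option names.
version: "3"
services:
  backup-nightly:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_CRON_EXPRESSION: "0 2 * * *"   # nightly
      BACKUP_FILENAME: nightly-backup.tar.gz
    volumes:
      - /docker-volumes/app1:/backup/app1:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
  backup-weekly:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_CRON_EXPRESSION: "0 3 * * 0"   # weekly
      BACKUP_FILENAME: weekly-backup.tar.gz
    volumes:
      - /docker-volumes/app2:/backup/app2:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
```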
I can't seem to figure out an easy way to have this do the backups "one by one".
What I would like to see is an option to have every dir in
/backup/*
be treated as its own backup, so that I end up with App1, App2, App3 in separate tar.gz files instead of a single tar.gz containing App1, App2, and App3.
Is the only alternative to run multiple of these containers?
Thanks for a great little container!