Have you checked borgbackup docs, FAQ, and open GitHub issues?
Yes
Is this a BUG / ISSUE report or a QUESTION?
Question
System information. For client/server mode post info for both machines.
Your borg version (borg -V).
1.2.8
Operating system (distribution) and version.
Ubuntu latest LTS
Hardware / network configuration, and filesystems used.
dedicated server, 1 Gb/s link, ext4
How much data is handled by borg?
310GB
Full borg command line that led to the problem (leave out excludes and passwords)
I wonder how to properly scale backups with Borg.
I have around 30 hosts that I need to backup to a backup server.
It works, but from time to time the latest backup on a host fails with an error saying it could not acquire the repository lock. By the time I check, the lock is gone, so I think another host's backup simply had not finished yet.
The backups run every night. I have one repo on the backup server.
I can try to spread the backup timers across the hosts to avoid collisions. But that is guesswork: if one host has more data to back up, its run takes longer and keeps the repo locked while another host is waiting.
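One way to spread the timers without hand-tuning (a sketch of my own, not something from the borg docs) is to derive each host's start delay deterministically from its hostname, so the 30 hosts scatter across the night window. The function name and the 120-minute window are illustrative:

```shell
#!/bin/sh
# Map a hostname to a stable per-host delay (in minutes) within a
# 120-minute window, so hosts spread out without a central schedule.
stagger_minutes() {
    # cksum yields the same CRC for the same input on every run,
    # so each host always gets the same offset
    printf '%s' "$1" | cksum | awk '{ print $1 % 120 }'
}

delay=$(stagger_minutes "$(hostname)")
echo "sleeping ${delay} minutes before borg create"
# sleep "$((delay * 60))"   # uncomment in the real cron/timer script
```

This only reduces the chance of collisions; it does not eliminate them, so it would still be combined with a lock-wait or with separate repos.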
The other option I can think of is to create multiple repos on the backup server, ideally one repo per host, so they cannot lock each other out.
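For the one-repo-per-host layout, borg can also pin each client's SSH key to its own repo on the server side via `borg serve --restrict-to-repository` (a real borg option; the paths, user, and key below are illustrative):

```shell
# On the backup server, each client gets its own repo, and its SSH key is
# restricted to that repo in ~backup/.ssh/authorized_keys (one line per host):
#
#   command="borg serve --restrict-to-repository /srv/borg/host01",restrict ssh-ed25519 AAAA... root@host01
#
# On each client, initialize its dedicated repo once:
borg init --encryption=repokey ssh://backup@backupserver/srv/borg/host01
```

Besides removing lock contention, this also means a compromised client can only touch its own repo.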
Are there other strategies? What would make sense?
Thanks for your help.
Regarding queuing up, I saw the --lock-wait option as a last resort. My backup window is at night, and with queuing there is a risk the queue extends past that window.
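For reference, --lock-wait takes a number of seconds to wait for the repository lock before giving up (the default is very short), so queuing within a bounded window looks like this (repo URL, archive name pattern, and paths are illustrative):

```shell
# Wait up to 30 minutes for the repo lock instead of failing immediately;
# if the lock is still held after that, borg create exits with an error,
# which bounds how far the queue can push past the backup window.
borg create --lock-wait 1800 --stats \
    ssh://backup@backupserver/srv/borg/repo::'{hostname}-{now}' \
    /etc /home /var
```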
I suppose the amount of data needed to overrun the window is beyond what I actually back up, so that should not happen, but I need to measure how long the runs actually take.
I'm interested in the tradeoffs between queuing and more parallelism. Thanks for pointing towards the deduplication, I did not think about it.
And more broadly, I wonder what people usually do in this situation. Does queuing up work for most people? At what point is it better to use separate repos?
If you have some pointers about that I would be happy to read them.