[DOC] #1552 - Add a pull backup / push restore how-to #4805
Conversation
Backport from master.
Please add a note that SSHFS can be extremely slow over a high-latency network (and even over a moderate-latency one). This setup also means borg does all of its reading and checksumming over the network, which is very slow as well. I tried a setup like this: backing up a server (~40 GB) over a 100 Mbit LAN took multiple days, and backing up 200 GB over the internet (only 16 Mbit DSL, but that should still take only 1-2 days) took over a week. The network link saturation was very low most of the time. This is definitely a working solution for pull-style backup, but it does not seem very feasible over the internet.
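A minimal sketch of the pull-via-sshfs setup being discussed (host name, user name, mount point and repository path are placeholders, not taken from the PR):

```bash
# Mount the remote machine's filesystem read-only on the backup host.
mkdir -p /mnt/remote
sshfs -o ro backupuser@remote-host:/ /mnt/remote

# Run borg locally against the mount: every file read and all the
# chunking/checksumming happens over the SSHFS link, which is why the
# latency and bandwidth of that link dominate the runtime.
borg create /path/to/repo::remote-{now} /mnt/remote

# Unmount when finished.
fusermount -u /mnt/remote
```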
Thank you for this information. Can you give a source where this is explained?
That reminds me of something that needs to be tried, with the results documented in the "pull via sshfs" docs: the borg files cache speeds up backups a lot if it can work correctly. By default, this means that for a file to be considered unchanged, its ctime, size and inode number must all match the values stored in the cache. If SSHFS does not fully support that (stable inode numbers in particular), tweaking via the --files-cache option might help, as sketched below.
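A hedged sketch of that tweak (repository path and mount point are the same placeholders as above; the option values come from the borg create documentation):

```bash
# The default files cache mode is ctime,size,inode. If the inode
# numbers reported by SSHFS are not stable, every file fails the
# comparison and is re-read on each run. Dropping the inode check:
borg create --files-cache=ctime,size /path/to/repo::remote-{now} /mnt/remote
```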
IETF has a draft of the SSH File Transfer Protocol: https://tools.ietf.org/html/draft-ietf-secsh-filexfer-13
If you can specify the procedure that should be tried, I will do it and document the results.
I checked again, and benchmarks don't seem to support my claim that SSHFS is slow in general, so it may have something to do with the files cache. It will probably never reach the speeds that can be achieved by running borg locally with a connection to borg serve; that might be the reason why people over at #900 are using a weird hack with socat. There are some optimizations listed in the article; I might try using them some day.

SSHFS has fluctuating inode numbers because it assigns its own inode values to files, and it seems to simply count up: the first file accessed gets inode 1, the second inode 2, and so on. The folks at restic seem to have had the same problem, see restic/restic#1631. I might try changing the --files-cache option to ctime,size.
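The inode behaviour described above can be checked directly with stat on an SSHFS mount (paths are placeholders again); after a remount, the same files typically come back with different inode numbers:

```bash
# First mount: sshfs hands out inodes in access order (1, 2, 3, ...).
stat -c '%i %n' /mnt/remote/etc/hostname /mnt/remote/etc/hosts

# Remount and stat the files in a different order: the inode numbers
# change, which defeats borg's default ctime,size,inode comparison.
fusermount -u /mnt/remote
sshfs -o ro backupuser@remote-host:/ /mnt/remote
stat -c '%i %n' /mnt/remote/etc/hosts /mnt/remote/etc/hostname
```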
See https://borgbackup.readthedocs.io/en/stable/usage/create.html for the --files-cache documentation. Added some stuff in master PR #4804.