the "hacked server" issue #36
restic/restic#187 is a similar discussion. |
Just to record an idea (could be added to the docs/FAQ after discussion): if one runs LVM or btrfs (or anything else that supports snapshots) on the backup repo server, one could just take regular snapshots of the volume that holds the repo. That way, even if somebody deletes the repo contents remotely via borg from the production machine, one could still roll back to a previous snapshot. |
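A minimal sketch of that idea, assuming btrfs and purely illustrative paths, run from cron on the repo server:

```sh
#!/bin/sh
# Take a daily read-only snapshot of the subvolume holding the borg repositories.
# Even if a client deletes repo contents over the network, older snapshots survive locally.
set -e
SRC=/srv/borg-repos                  # btrfs subvolume that holds the repositories
DST=/srv/borg-snapshots/$(date +%F)  # one read-only snapshot per day
btrfs subvolume snapshot -r "$SRC" "$DST"
```

(With LVM, `lvcreate --snapshot` would play the same role.)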
In my setup I have different private keys for each of my web servers stored on my backup server. During the day there is no backup script or SSH key/repo key stored on the web servers. At the backup server, I have a script which pushes another script, along with the SSH key and borg key, to each web server. I also sync ~/.cache/attic back and forth between the backup server and each web server. After the backup for each web server has finished, I remove the backup script and the keys from the web server. I do all this for two reasons: a bit more security (almost security through obscurity) and at the same time some peace of mind, since I can make changes for each backup in one place. I also share one repo for all of my web servers so that I get the dedup effects. All this is very hacky. I would really like to have some kind of setup where all this is covered inside borg. Maybe borg on the backup server could start up a server over ssh to the web server and talk over stdin or some kind of reverse ssh tunnel? This would be like the rsync-over-ssh model. My main concern is that I don't trust my web servers 100%, but I trust my backup server a lot more, as I have put a great deal more care into it than into all the web instances. |
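Roughly, that workflow could look like the following sketch, run from the backup server; host names, file names, and cache paths are placeholders, not the actual setup described above:

```sh
#!/bin/sh
# Illustrative orchestration: ship the backup script and keys to a web server,
# sync the chunk cache over, run the backup, sync the cache back, then clean up.
set -e
HOST=web1.example.org
scp backup.sh ssh_key borg_keyfile "$HOST":/root/
rsync -a ~/caches/web1/ "$HOST":/root/.cache/borg/    # push the cache so borg does not rescan everything
ssh "$HOST" '/root/backup.sh'
rsync -a "$HOST":/root/.cache/borg/ ~/caches/web1/    # pull the updated cache back
ssh "$HOST" 'rm -f /root/backup.sh /root/ssh_key /root/borg_keyfile'
```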
how about a |
@RonnyPfannschmidt "create" requires "set" k/v ops, and so does "delete". |
If you only allow adding new keys, that's fine. Alternatively, there is still the idea of a bundle operation (which bundles up a local compressed set of k/v pairs and ships it to a server). |
how can we possibly keep this from happening while at the same time allowing backups to be purged? another backup tool, restic, completely ignores that requirement, stating from the start that it is outside its threat model - maybe it's something we should consider? writing up a threat model, actually, would be quite useful. :) as an aside - i think the only solution to this problem would be to allow pull operation (jborg/attic#175), which has other uses as well... |
Allowing purges is not a requirement for a machine that just needs to get backed up. But the real problem lies a bit deeper: the key value store does not "know" what precisely you are doing, it just sees key/value operations. |
a copy of a comment i made in jborg/attic#175 (pull mode):
|
The problem with this is performance when you have lots of folders. Even if no files are changed, it takes ages to check state. |
not sure how you are going to work around that problem if the remote end is behind an SSH tunnel. you need to list those directories and stat those files somehow. |
You solve that by having a remote binary that makes the file list locally, like rsync does. |
ah yes, i see your point. i guess that would be the |
As I just commented in another ticket, one really wants borgbackup on both sides of the connection - otherwise you will transmit duplicate file contents over the network before they can be detected as such. |
I don't mean to hijack this as a Q&A, but the other issues on "pull mode" seem to point here. Is there currently any way in which one can control/configure automated backups from the remote server? I've been evaluating tools to replace our current backup setup, and centralising configuration and automation on the backup server just makes sense to me. Would the recommended way be to use the SSH key with the borg serve command as described in this attic PR: https://github.com/lfam/attic/blob/serve-docs/docs/quickstart.rst#remote-repositories ? I guess then you can script the remote server to SSH onto each client server and initiate it that way? I've not used borg yet, just evaluating it to hopefully replace my current setup, so apologies for my ignorance :). |
@mikecmpbll this is still something that needs some research and experimentation; I don't have a ready-to-use solution for that. The docs piece you have linked (which should also be in our docs, btw, not just in that PR) is just about restricting the push target path, so you can separate different backup clients without needing separate repo server users. |
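For reference, the restriction mentioned here boils down to a one-line entry in ~/.ssh/authorized_keys on the repository server (key, path, and client name are placeholders; the linked quickstart document has the authoritative version):

```
command="borg serve --restrict-to-path /srv/backups/client1",restrict ssh-ed25519 AAAA...client1key... client1@example.org
```

On OpenSSH versions without the restrict keyword, the usual no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding options can be used instead.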
ah i see! no problem, thanks for clarifying for me :) |
@mikecmpbll we use Puppet to deploy those SSH keys, for rdiff-backup, but the same principle would probably apply here. I would say it's out of scope, personally, but I'm curious to hear what you had in mind. |
@anarcat with rsync you can either do a push (
oh right, yes, this is what is discussed here and no, it's only one way right now (client -> server). |
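For the terminology: with rsync the direction is just a matter of argument order (hosts and paths below are illustrative):

```sh
rsync -a /var/www/ backup.example.org:/srv/mirror/www/   # push: the client sends its data to the backup host
rsync -a web1.example.org:/var/www/ /srv/mirror/www/     # pull: the backup host fetches from the client
```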
Hi, I would really like to see a pull feature as well, as I need to schedule and manage backups from a central server (so more usability- than security-motivated). I just started to skim over the code to get an idea how to implement it, but if anyone is already on it, he/she might give me a heads up? |
AFAIK nobody is working on this right now (in general it is a good idea to mention such a fact in a ticket, so double work is avoided, or to get yourself "assigned", see metadata at the right). |
This was a suggestion on IRC, I didn't try it yet:
If you do, give some feedback here. |
Hello, I have just built a working pull solution for borg with a central backup server, what do you think?
Test:
|
Here's how I do pull backups with Borg (pull = the machine that is backed up has no access to the backups, push = the machine that is backed up has access, so everything using
Ideally we'd want to restrict root even further and place additional restraints, e.g. no access to /dev, /proc, /sys and the like, plus additional syscall filtering (seccomp2) below the SFTP server to kill it when it attempts link/unlink/open(write) (there are ~400 syscalls or so in Linux, and this sftp server only needs a dozen), etc.

Capabilities could be used as well. They wouldn't be effective alone, since they can't restrict writes, but they would allow pull backups made with an unprivileged user (CAP_DAC_OVERRIDE). Proper seccomp restrictions (and disabled login, shell etc.) like above would make this safer than simply root on sftp-server -R. Additionally, namespaces can be employed to disable e.g. network access completely (sftp-server does not need the network).

It should be said that all of these restrictions are meaningless if a key pair exists on the machine that is also authorized for it. Similar issues apply to other access-controlling software as well, not just SSH (it's the obvious go-to, though).

So while in a pull backup scenario your backup server can get hacked without giving instant root access to all backed-up machines, that's about all you can achieve. It's still a VERY strong position for an attacker to hold (all data, past and present, including e.g. SSH host keys). |
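A minimal sketch of just the forced-command part described above (not a hardened setup; the sftp-server path varies by distribution, and the key is a placeholder). On the machine being backed up, /root/.ssh/authorized_keys would contain something like:

```
# The backup server's key may only start sftp-server in read-only mode (-R),
# with no pty, no port/X11/agent forwarding.
command="/usr/lib/openssh/sftp-server -R",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA...backupserverkey... backupserver
```

The backup server then mounts the client via sshfs and runs borg against the mount (see the --ignore-inode example further down); the seccomp, capability, and namespace hardening discussed above would be layered on top of this.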
While this is great, it seems to me it doesn't accomplish everything that was mentioned in the original report here. jborg/attic#175, for example, is a completely different deployment model: instead of all machines having access on a central server (so being able to access each other's backups in case of an intrusion on the backup server), a pull mode doesn't extend the attack surface on the clients... Furthermore, the append-only mode doesn't allow purging old backups at all, which can be an issue with datasets that change at a high rate. Should that be a different feature request? |
You can go back and forth between append-only and normal mode. It's like that little read-only slider on a floppy. It needs to be ensured that the "untrusted" clients can't access the repository while it's in normal mode for purging. A script could, for example, remove the client keys from the authorized_keys file, disable append-only, purge, enable append-only, and re-enable the client keys.

It might be useful to include some helpers in future versions to make this easier, e.g. an --ignore-append-only option which would only work locally, or as an option to borg serve (which is controlled via the SSH forced command) -- similar to how some drives can ignore the read-only slider :-)

Pull backups are possible with e.g. sshfs; I don't really see a reason to build this into Borg when it works easily with external tools. Btw, there is now --ignore-inode, which avoids the "full retransmission" issue with sshfs.

From a security point of view, the borg-rewrite branch could become interesting when/if it supports a --target option. It would allow a "two-hop" setup, where each client has a repo in which only the last archive is kept; when a backup finishes, another job copies the archive away into the real repository(ies). That way a "client attacker" can only wipe a temporary backup, and also cannot access other backups. |
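A sketch of such a maintenance script on the repository server; paths and retention are placeholders, and it assumes a borg version with the borg config command (older versions would edit the [repository] section of the repo's config file directly):

```sh
#!/bin/sh
# Purge old archives while the "untrusted" clients are locked out.
set -e
REPO=/srv/backups/repo
AUTH=/home/borg/.ssh/authorized_keys

mv "$AUTH" "$AUTH.disabled"                                          # 1. lock out the clients
borg config "$REPO" append_only 0                                    # 2. leave append-only mode
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"   # 3. purge old archives
borg config "$REPO" append_only 1                                    # 4. back to append-only
mv "$AUTH.disabled" "$AUTH"                                          # 5. re-enable client keys
```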
sshfs was discussed earlier: #36 (comment). The problem is it's slower than having a local client. |
I think the issue he mentions is because, prior to --ignore-inode, the files cache essentially didn't work with sshfs (because inode numbers are not stable). I'm using --ignore-inode and sshfs: it's slower, but not by much (approx. 5-8 minutes for ~600k files / 100 GB). Compare: it takes less than 2 minutes for ~550k files / ~20 GB (which are on an SSD), or about 10-15 minutes for 1.2 million files / 285 GB (which are on a magnetic drive). |
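For completeness, a hedged example of such an sshfs-based run (host, mount point, and repo path are placeholders):

```sh
# Mount the client read-only over sshfs, back it up with a local borg, unmount.
# --ignore-inode keeps the files cache effective despite sshfs's unstable inode numbers.
sshfs -o ro root@web1.example.org:/ /mnt/web1
borg create --ignore-inode /srv/backups/web1::"web1-$(date +%F)" /mnt/web1
fusermount -u /mnt/web1
```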
I opened #900 for the "pull mode" part of this. |
Related / continued in #1772 |
Actually they can, through CAP_DAC_READ_SEARCH. But the caveats from above still apply, likely making it root anyway. |
Are you still following this or have you found a more elegant approach? I am thinking of setting up a .env file in an encrypted folder and temporarily decrypting it through an API call from a Lambda/Cloudflare function right before my cron jobs run. My idea is also very makeshift, and I was wondering if there is a better way to do cron backups without exposing my SSH key or borg passphrase. |
If a server that gets backed up with borg gets hacked, the attacker potentially could also invoke borg to delete the backups from the (remote) backup server.
See also these related attic issues:
💰 there is a bounty for this