evaluate redundancy / error correction options #225

Open
ThomasWaldmann opened this issue Sep 30, 2015 · 65 comments

@ThomasWaldmann
Member

ThomasWaldmann commented Sep 30, 2015

There is some danger that bit rot and storage media defects could lead to backup data loss / repository integrity issues. Deduplicating backup systems are more vulnerable to this than non-deduplicating ones, because a defective chunk affects all backup archives that use this chunk.

Currently, borgbackup does a lot of error detection (CRCs, hashes, HMACs), but it has no built-in support for error correction (see the FAQ about why). Maybe this could be solved using one of these approaches:

  • use borg to keep N (N>1) independent backup repos of your data on different targets (if N-1 targets become corrupt, you still have 1 working repo; note that there is no support for creating one non-corrupt repo from 2 corrupt repos, although that might be theoretically possible in some cases).
  • snapraid
  • par2
  • FECpp https://github.com/randombit/fecpp (BSD, C++ - make available via Cython?)
  • zfec (GPL/TGPPL, Python 2.x only, PR for Python 3.x exists)
  • RAID (and monitor and scrub the disks), ZFS mirror or RAIDZ* (better not use raid5 or raidz1)
  • zfs copies=N option (N>1)
  • specific filesystems
  • ceph librados
  • https://github.com/Bulat-Ziganshin/FastECC

If we can find some working approaches, we could add them to the documentation.
Help and feedback about this are welcome!

@oderwat

oderwat commented Sep 30, 2015

I think "bit rot" is real. Just because of the failure rate exceeding the storage size for real huge hard drives. So statistically there will be a block error on a drive when it is big enough. But that probably has to be solved by the filesystem. On the other side borg is a backup system and should be able to recover from some disasters. Bup is using par2 (on demand) which seems to work for them.

@anarcat
Contributor

anarcat commented Oct 1, 2015

links to the mentioned projects:

  • snapraid - backup and snapshot system with multiple redundant disk support
  • zfec - generic erasure coding (aka RAID-5 but with customizable number of copies) in Python, used by Tahoe-LAFS
  • par2 - command-line-only tool for erasure coding; I have implemented support for it for bup in bup-cron and it works well

I think zfec is the most interesting project here for our purposes, because of its Python API. We could use it to store redundant copies of the segments' chunks and double-check them in borg check.

Anyone can also already run par2 or zfec on the repository from the command line to get the requested feature.

I am not sure snapraid is what we want.

So I would recommend adding par2 to the FAQ as a temporary solution and eventually implementing zfec directly in borg, especially if we can configure a redundant drive separately for backups.
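
To make the zfec idea concrete, here is a minimal sketch of how a single chunk could be split into k-of-m shares. It assumes zfec's easyfec convenience wrapper (Encoder/Decoder taking k and m, with the decoder needing the padding length back); the exact API details and the padding arithmetic are assumptions from memory, not verified against any borg integration.

```python
# Hedged sketch only: wrap one chunk with zfec's k-of-m erasure coding.
# Assumes zfec.easyfec.Encoder/Decoder behave as described on PyPI; the padlen
# handling below is an assumption, not borg code.
from zfec import easyfec

K, M = 4, 6   # any 4 of the 6 shares should suffice to rebuild the chunk

def protect(chunk: bytes):
    shares = easyfec.Encoder(K, M).encode(chunk)   # M shares of equal size
    padlen = -len(chunk) % K                       # bytes of padding added (assumed)
    return shares, padlen

def recover(shares_subset, sharenums, padlen) -> bytes:
    # sharenums lists which of the M shares we pass in (need at least K of them)
    return easyfec.Decoder(K, M).decode(shares_subset, sharenums, padlen)
```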

@tgharold
Contributor

tgharold commented Mar 8, 2016

One option, for those of us with extra disk space, would be to allow up to N copies of a chunk to be stored. This is a brute-force approach that would double, triple or quadruple the size of the repo, depending on how many copies you allow.

Another idea is to allow repair of a repository by pulling good copies of damaged chunks from another directory. So if my primary backup repository happens to get corrupted, but I have an offline copy, I could mount that offline copy somewhere and have a repair function attempt to find good copies of damaged chunks from that directory.
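
A rough sketch of that repair-by-copy idea, purely for illustration: the flat layout (a directory of files plus a caller-supplied map of known-good SHA-256 digests) is hypothetical and not borg's actual repository format.

```python
# Hedged sketch: pull good copies of damaged files from an offline mirror.
# The layout (flat files + a map of expected SHA-256 digests) is hypothetical,
# NOT borg's on-disk format; it only illustrates the approach.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def repair_from_mirror(repo: Path, mirror: Path, expected: dict) -> None:
    """expected maps a relative file name to its known-good SHA-256 digest."""
    for name, digest in expected.items():
        damaged = repo / name
        if damaged.exists() and sha256(damaged) == digest:
            continue                                   # this copy is fine
        candidate = mirror / name
        if candidate.exists() and sha256(candidate) == digest:
            shutil.copy2(candidate, damaged)           # take the good copy
            print(f"repaired {name} from mirror")
        else:
            print(f"no good copy of {name} found")
```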

PAR2 is nice, but maybe a bit slow. I don't know if PAR3 (supposedly faster) ever got off the ground.

@enkore
Contributor

enkore commented Apr 2, 2016

FEC would make sense to protect against single/few bit errors (as opposed to a typical "many sectors / half the drive / entire drive gone" scenario). It would need to sit below encryption (since a single ciphertext bit flip potentially garbles an entire plaintext block). Implementing it on the Repository layer, transparently applying to all on-disk values (and keys!), would make sense. Since check is already "local" (borg serve-side), it could rewrite all key-value pairs where FEC fixed errors. No RPC API changes required => a forwards/backwards compatible change.

The C code from zfec looks small. If it doesn't need a whole lot of dependencies, the LICENSE permits it and it is, of course, a good match for our requirements[1], we could vendor it[2] if it's not commonly available.

[1]

  • Data independent (I think this is true for all but speciality A/V FECs)
  • Doesn't require a particular data size, or if it has padding requirements, they are small (512 bytes would be a little wasteful)
  • Configurable ("how much can it rot before it can't be recovered")
  • Of course: LICENSE compat, availability, tested, proven.

[2] Vendoring should be a last-resort thing, since it more or less means that we take on all the responsibility upstream has or should have regarding packaging/bugs/testing etc.
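
A rough sketch of the transparent Repository-layer wrapping described above. fec_encode/fec_decode are placeholders for whichever code would actually be chosen (zfec, a PAR-like scheme, ...); the FECStore class and its backend interface are assumptions for illustration, not existing borg APIs.

```python
# Hedged sketch: every stored value carries a CRC plus parity, below encryption.
import struct
import zlib

def fec_encode(data: bytes) -> bytes:
    # Placeholder: compute parity bytes for `data` with whatever FEC is chosen.
    raise NotImplementedError

def fec_decode(data: bytes, parity: bytes) -> bytes:
    # Placeholder: repair `data` using `parity`, or raise if unrecoverable.
    raise NotImplementedError

class FECStore:
    """Wraps a hypothetical key-value backend exposing get(key) / put(key, blob)."""

    def __init__(self, backend):
        self.backend = backend

    def put(self, key: bytes, value: bytes) -> None:
        parity = fec_encode(value)
        header = struct.pack("!II", len(value), zlib.crc32(value))
        self.backend.put(key, header + value + parity)

    def get(self, key: bytes) -> bytes:
        blob = self.backend.get(key)
        length, crc = struct.unpack("!II", blob[:8])
        value, parity = blob[8:8 + length], blob[8 + length:]
        if zlib.crc32(value) != crc:
            value = fec_decode(value, parity)   # a check run could also rewrite the pair here
        return value
```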

@ThomasWaldmann
Member Author

If just a single bit or a few bits flip in a disk sector / flash block, wouldn't that either be corrected by the device's ECC mechanisms (thus no problem) or, if there are too many, lead to the device giving a read error and not returning any data (thus resulting in much more than a few bit flips for the upper layers)?

@enkore
Contributor

enkore commented Apr 2, 2016

Hard drive manufacturers always seemed to tell the "either it reads correct data or it won't read at all" story. OTOH: https://en.wikipedia.org/wiki/Data_corruption#SILENT

Somewhat related (but further discussion belongs in a separate issue) is how data integrity errors are handled. Currently borg extract will throw IntegrityError and abort, but that's not terribly helpful if it's just one corrupted file or chunk. Log it (like borg create does for I/O errors) and exit with 1/2 instead?

@ThomasWaldmann
Member Author

@enkore yes, aborting is not that helpful - open a separate ticket for it.

@enkore
Contributor

enkore commented Apr 6, 2016

Hm, this would also be interesting for the chunk metadata. We could pass a restricted subset of the chunk metadata to the (untrusted) Repository layer, to tell it what's data and what's metadata[1]. That would allow the Repo layer to apply different strategies to them. E.g. have metadata with a much higher FEC ratio than data itself.

[1] Technically that leaks that information to something untrusted, but I'd say it's fairly easy from the access patterns and chunk sizes to tell (item) metadata and actual files apart. Specifically, item metadata is written in bursts and should be mostly chunks of ~16 kB. So if an attacker sees ~1 MB of consecutive 16 kB chunks, then that's item metadata.

@dfloyd888

This would be useful as part of a repository, set in the config file. It would be nice to have a configurable option to allow ECC metadata at a selected percentage. I know that one glitch or sync error with other backup programs can render an entire repository unreadable. I definitely hope this pops up as a feature, as it is a hedge against bit rot for long-term storage.

@aljungberg

The FAQ recommends using a RAID system with redundant storage. The trouble there is that while such a system is geared towards recovering from a whole disk failing, it can't repair bit rot. For example, consider a RAID mirror: a scrub can show that the disks disagree, but it can't show which disk is right. Some NASes layer btrfs on top of that, which in theory could be used to find out which drive is right (the one with the correct btrfs checksum), but at least my Synology NAS doesn't actually do that yet.

So any internal redundancy/parity support for Borg would be great, and even just docs on sensible ways to use 3rd party tools to get there would work too. Maybe it's as simple as running the par2 command line tool with an X% redundancy argument.

@ThomasWaldmann
Member Author

Correct, a mirror doesn't help in all cases. But in many cases it certainly can.

The disk controller generates and stores its own CRC / ECC codes in addition to the user data, and if it reports a bad sector on disk A during a scrub run, the sector can simply be taken from disk B and written back to disk A (usually the write is successful and the sector is repaired; otherwise the disk is defective). So the only problematic cases are when a mirrored sector has different data on the two disks but no CRC / ECC error is triggered on either of them (which is hopefully rather unlikely), or when both sectors give a CRC / ECC error.

It is important that scrub runs take place regularly, otherwise corruption goes undetected and, if one is unlucky, a lot of errors creep in before anyone notices - and if both sides of the mirror become defective in the same place, the data is lost.

The same applies to RAID5/6/10 arrays.

What can still happen is that the controller suddenly decides that more than the redundant number of disks are defective and kicks them out, or that more disks fail while the array is rebuilding. But that is a fundamental problem and you can't do much about it apart from having lots of redundancy to make this case unlikely.

zfs also has its own checksums/hashes of the data, btw (and is maybe more production-ready than btrfs).

@FelixSchwarz
Contributor

FelixSchwarz commented Mar 20, 2017

zfec is currently not compatible with Python 3, but there is a pull request. Also, having a Python API is of course much nicer than calling out to a separate binary (plus zfec is supposed to be faster, according to zfec's PyPI page).

par2, on the other hand, is likely present in more distros and the format seems to be widely used (other implementations/tools are available). However, the ideal solution for borg would apply the error correction internally (otherwise an encrypted repo would face quite a bit of storage overhead), so external tools might not be that useful.

Even with good storage I'd like to see some (ideally configurable) redundancy in borg repos. Deduplication is great, but I think it is more important that the data is safe (even on crappy disk controllers).

Maybe a good task for 1.2?

@enkore
Contributor

enkore commented Mar 20, 2017

Maybe a good task for 1.2?

1.2 has a set of defined major goals; since this would be a major effort, it's unlikely.

@ThomasWaldmann
Member Author

@FelixSchwarz thanks for the pointer, I just reviewed that PR.

But as @enkore already pointed out, we would rather not extend the 1.2 scope; there is already a lot to do.

Also, as I already pointed out above, I don't think we should implement EC in a way that might help in some cases but fails in a lot of other cases. That might just give a false sense of safety.

@gour

gour commented Apr 1, 2017

Also, as I already pointed out above, I don't think we should implement EC in a way that might help for some cases, but also fails for a lot of cases.

Does that mean EC won't be supported/implemented in Borg at all, or are you just considering what the proper way to do it would be?

@enkore
Contributor

enkore commented Apr 1, 2017

There are a lot of options in that space and evaluating them is non-trivial; fast implementations are rare as well. On a complexity scale I see this issue at about the level of a good master's thesis (= multiple man-months of work).

Note that a lot of the "obvious" choices and papers are meant for large object-storage systems and use blocked erasure coding (essentially the equivalent of a RAID, minus the problems of RAID, for an arbitrary and variable amount of disk/shelf-level redundancy). We can already say that this is not an apt approach if you have only one disk.

@ThomasWaldmann
Member Author

@gour If we find a good way to implement it (one that does not have the mentioned issues), I guess we would consider implementing it.

There is the quite fundamental issue that borg (as an application) might not have enough control over / insight into where data is located (on disk, on flash).

Also, there are existing solutions (see top post), so we can just use them, right now, without implementing it within borg.

@anarcat
Contributor

anarcat commented Apr 30, 2017

just found out about this which might be interesting for this use case:

https://github.com/MarcoPon/SeqBox

@enkore
Contributor

enkore commented Jun 5, 2017

zfec is not suitable here. It's a straight erasure code; say you set k=94, m=100, meaning you have 100 "shares" (output blocks) of which you need >=94 to recover the original data. That means 6 % redundancy, right? No! Those 94 shares must be pristine. The same is true for all "simple" erasure codes: they only handle erasure (removal) of shares, they do nothing about corrupted shares.

What we need here is a PAR-like algorithm that handles corruption within shares, i.e. one where you have a certain percentage of redundancy and that percentage can be corrupted in any distribution across the output. (?)

Edit: Aha, PAR2 is not magic either. It splits the input into slices and uses relatively many checksummed blocks (hence its lower performance), which increases resistance against scattered corruption. So what appears at first to be a share in PAR2 is actually not a single share, but a collection of shares.
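
To illustrate the corruption-vs-erasure distinction: per-share checksums can turn corruption into erasure before the erasure decoder runs. The sketch below uses the simplest possible code (k data shares plus one XOR parity share, so it survives exactly one bad share) rather than zfec or PAR2; it is only meant to show the mechanism.

```python
# Hedged sketch: checksum each share so corruption becomes erasure, then let a
# (here: trivial, single-parity) erasure code rebuild the lost share.
import hashlib
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Return k data shares + 1 XOR parity share, each paired with its checksum."""
    size = -(-len(data) // k)                                        # ceil(len / k)
    shares = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    shares.append(reduce(xor, shares))                               # parity share
    return [(hashlib.sha256(s).digest(), s) for s in shares], len(data)

def decode(stored, k: int, length: int) -> bytes:
    """Drop shares whose checksum fails (corruption -> erasure), then rebuild."""
    good = {i: s for i, (digest, s) in enumerate(stored)
            if hashlib.sha256(s).digest() == digest}
    missing = set(range(k + 1)) - set(good)
    if len(missing) > 1:
        raise ValueError("more corrupted shares than this code can repair")
    if missing and missing != {k}:                                   # a data share was lost
        good[missing.pop()] = reduce(xor, (s for i, s in sorted(good.items())))
    return b"".join(good[i] for i in range(k))[:length]

# Example: corrupt one share in place, recovery still works.
stored, n = encode(b"some chunk of backup data", 3)
stored[1] = (stored[1][0], b"garbage!")
assert decode(stored, 3, n) == b"some chunk of backup data"
```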

@xenithorb

xenithorb commented Jun 8, 2019

I'm less concerned about copying archives between repos, more concerned with how to maintain redundancy if the project keeps recommending maintaining two independent repos and one becomes corrupt.

Consider that the thing I want to restore some day is in a very old archive and I'm stuck running 1 remote and 1 local backup like the scenario above. The remote one becomes corrupt and unusable, so if I want to continue maintaining backups from that point I must recreate the remote, and it only has history from that point on.

Later, the local copy becomes corrupt because I fat-fingered a command and deleted some stuff I shouldn't have. Oops, now I can't restore the thing I didn't know I needed yet from that very old archive because it's gone; the second remote repository is no longer useful because the redundancy wasn't maintained.

I understand that having a feature to copy archives between repos could address this, but I'm hoping for ways to address it outside of borg, and perhaps an update to the documentation that informs users that they risk a redundancy downgrade if they lose a repo - something that presently can't be recovered from.

@textshell
Member

Currently the only safe way is to make a full copy of the repo that is still ok and never use that copy for any writing operations, and then begin a fresh repo to keep redundancy.
I don't think "copy archive(s) from repo A to repo B" is much harder to implement than "change the internally used key to make writing to a cloned repo safe again", so I think what you are looking for in that case is the copy feature. Which is not rocket science, but a decent chunk of work to implement.

@jdchristensen
Contributor

If you aren't using encryption, you can copy the repo and change the repo id in the config file. After that, you can use the repos independently. If the repo is encrypted, doing this introduces a security issue. I wonder if there could be a borg command to adjust the copy of the repo to make the security issue ("counter reuse") go away?

@elho
Contributor

elho commented Jun 10, 2019

I backup to 1.A and 1.B for 10 days with a daily borg create script. 1.A becomes corrupt, so the only option you're left with to continue is:

* `1.A` - Fresh recreated, no history. If `1.B` is corrupt you lose 10 days of history

* `1.B` - has 10 days of history

No, assuming the corruption of 1.A did not affect metadata in a way that renders it unusable, the approach I have planned out, should I face that situation, would be:

  1. borg check --repair backupserver:1.A, which will fill corrupted chunks with zeroes.
  2. Find the files affected by these replacement chunks: for archive in $(borg list --format '{barchive}{NEWLINE}' backupserver:1.A); do borg list --format "{health} ${archive} {path}{NEWLINE}" "backupserver:1.A::${archive}" | grep --invert-match '^healthy'; done (a correct approach would use {NUL} and xargs --null invoking a wrapper script to handle funky file names)
  3. Get all chunks resulting from the original data of these files back from 1.B into 1.A: borg mount the entire 1.B and create a (temporary) backup of all the files found in the previous step from this mount to 1.A. They will be extracted and rechunked, but all but the corrupt chunks will turn out to already be present in 1.A, so only the ones that need replacing will be transferred.
  4. borg check --repair backupserver:1.A once again, which will now find correct new chunks for every zeroed replacement chunk and repair them all.
  5. borg delete the temporary archive holding the "replacement" files.
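
A sketch of wrapping these steps in a script, driving the borg CLI from Python. It assumes 1.B is already mounted (e.g. via borg mount) at the given mountpoint; mapping the listed paths onto the mounted tree, archive name quoting and error handling are all simplified, so treat it as an outline rather than a finished tool.

```python
# Hedged sketch: automate the five steps above with plain calls to the borg CLI.
import subprocess

def run(*args) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

def repair_from_twin(repo_a: str, twin_mount: str) -> None:
    run("borg", "check", "--repair", repo_a)                       # step 1: zero bad chunks
    broken = []                                                    # step 2: find affected files
    for archive in run("borg", "list", "--format", "{barchive}{NEWLINE}", repo_a).splitlines():
        listing = run("borg", "list", "--format", "{health} {path}{NEWLINE}",
                      f"{repo_a}::{archive}")
        broken += [line.split(" ", 1)[1] for line in listing.splitlines()
                   if line and not line.startswith("healthy")]
    # step 3: back up the same files from the mounted twin; identical chunks dedup
    # away, so only replacements for the zeroed chunks are actually stored
    paths = sorted({f"{twin_mount}/{path}" for path in broken})
    run("borg", "create", f"{repo_a}::repair-temp", *paths)
    run("borg", "check", "--repair", repo_a)                       # step 4: heal zeroed chunks
    run("borg", "delete", f"{repo_a}::repair-temp")                # step 5: drop temp archive
```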

@ThomasWaldmann
Member Author

@elho sounds good, assuming that the files in the 1.A and 1.B archives are identical (like 2 backups made from the same snapshot).

@elho
Contributor

elho commented Jun 10, 2019

Yes, backing up (and verifying) only from snapshots has become so natural to me that I keep forgetting to mention it. Thanks for pointing out that it is fundamentally important in this use case!

@enkore
Contributor

enkore commented Jun 23, 2019

It's kinda funny how most longer tickets in this tracker seem to converge on a discussion about replication in Borg.

@imperative

imperative commented Oct 12, 2019

par2 is quite a mature, stable, relatively popular and relatively well-understood solution with sane options (like variable redundancy and number of blocks, trading generation speed against block size - obviously smaller blocks yield more flexible protection against different kinds of bit rot and/or corruption). I think it would work well as a layer on top of the repository files.

par2 is also a somewhat explicit and external way of achieving redundancy. It generates .par2 files which the user could then inspect and use manually, even applying other external programs to do the checking. I think that having this solution in this explicit way is a good idea (compared to, for example, zfec-based coding somewhere inside borg files, unseen by the user, uncheckable by them, unknown whether it exists or not), because it would create more flexibility against different types of possible corruption (whether it is bit rot inside files, or filesystem failures resulting in complete files going missing, etc.).

Incidentally, this also goes well with the Unix philosophy of having multiple small tools where each of them does one thing really well and then inter-operates with the other tools when needed. Par2 is THE project that does application-level (in external files) redundancy coding really well.

There are already system-level ways to get redundancy and error correction: zfs, btrfs, RAID, etc. But application-level solutions exist too, and they do so because of demand: it is not always possible to place the files on zfs, or to otherwise administer the storage subsystem to such a degree that this would be possible, and sometimes the user needs to move the files between different systems or store them in other places (like tapes, offline backup drives or optical media). In all of these cases there is no universal system-level way to achieve recovery redundancy, and having it explicitly in Borg at the application level would be beneficial and provide real-world utility.
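
A small sketch of what that external par2 layer could look like, driven from Python for consistency with the other snippets here. It assumes par2cmdline's documented create/verify/repair subcommands and the -r<percent> redundancy option; the directory layout is an illustration only, and the .par2 files would have to be regenerated whenever the repository files change.

```python
# Hedged sketch: wrap the files under a repo's data directory with par2
# recovery data.  Assumes par2cmdline's create/verify/repair subcommands and
# the -r<percent> option; regenerate after every backup run, since the
# repository files change.
import subprocess
from pathlib import Path

REDUNDANCY = 10   # percent of recovery data per file

def protect(data_dir: str) -> None:
    for f in Path(data_dir).rglob("*"):
        if f.is_file() and f.suffix != ".par2":
            subprocess.run(["par2", "create", f"-r{REDUNDANCY}",
                            str(f) + ".par2", str(f)], check=True)

def verify_and_repair(data_dir: str) -> None:
    for par in Path(data_dir).rglob("*.par2"):
        if ".vol" in par.name:                 # skip the recovery volume files
            continue
        if subprocess.run(["par2", "verify", str(par)]).returncode != 0:
            subprocess.run(["par2", "repair", str(par)], check=True)
```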

@dumblob

dumblob commented Apr 17, 2020

I hope I'm not too late to the party. I don't want to persuade @ThomasWaldmann, nor say the solution "back up to two different places" is wrong, but as some others have experienced, bit rot and other effects do real and very painful damage, so I'm also raising my hand for FEC (forward error correction).

I just want to point out that implementing FEC for hard disk storage needs more thought than demonstrated in this thread. For a quick intro, read the motivation behind creating the fork (changes to the original app).

That said, ECC (or other redundancy in HDDs/SSDs/HW_RAIDs/SW_RAIDs/btrfs/other_filesystems), par2, zfec, snapraid, and all the others mentioned in the discussion are not enough and are usually flawed in at least some regard (with the notable exception of ZFS RAID redundancy). I'll leave it as an exercise for the reader why e.g. such a "perfect" fs as btrfs lets corrupted data be used without notice, or why the "battle tested" par2 can't recover data in some cases, or how "cloud storage" providers do (not) ensure data integrity...

The above-mentioned rsbep-backup (not the original rsbep) is the only tool somewhat satisfying the FEC requirements (it's quite slow despite being nearly the fastest R-S implementation I know of). Its minor downside is that it still assumes one can find the approximate offset of the file, but that might be solved using some ideas from SeqBox.

One has to really think about "everything" - from the physical stuff (disk blocks, spanning errors, probabilities, ...) up to the "format", which has to guarantee that e.g. all metadata has the same recovery guarantees as the data itself, as there is the same probability of errors for metadata as for data, etc.

About a year ago I did some investigation into how other FEC backup software treats different errors and I was horrified by so many omissions and so much ignorance (actually, I stayed with rsbep-backup from the linked GitHub repo because of that). Please don't let Borg jump on that boat.

@imperative

imperative commented Jul 9, 2020

The above mentioned rsbep-backup (not the original rsbep) is the only tool somewhat satisfying the FEC requirements (it's quite slow despite being nearly the fastest R-S implementation I know about). [...]

Is this the one that writes this on its readme page? I quote:

I coded this using Python-FUSE in a couple of hours on a boring Sunday afternoon, so don't trust your bank account data with it... It's just a proof of concept (not to mention dog-slow).

Doesn't sound like stable, legit software that should be used for critical data and backups yet...

I'll leave it as an exercise for the reader why e.g. such a "perfect" fs as btrfs does let corrupted data to be used without noticing or why the "battle tested" par2 can't recover data in some cases

In which cases? Can you be more specific or link to those cases? If you have generated the par2 files, it basically guarantees that if the number of corrupted blocks is lower than the number of recovery blocks, it will recover all of the data. Are you talking about cases where the contents of the files are shifted or deleted - does it have problems in that case?

Are you saying that rsbep 0.0.5 is somehow different in this regard? What functionality or advantage does it provide that par2 doesn't? Does rsbep protect against shifted contents? They both seem to use Reed-Solomon and both work in a similar way. Why use the more experimental one? Its documentation just describes the basic Reed-Solomon code and doesn't even mention par2, so it is unclear whether the author even knew of its existence.

PS: btrfs is infamous for still being unstable and crashing every once in a while. There are very few people who would call it "perfect". ZFS, on the other hand, does not let one use bit-rotten data without noticing.

@hashbackup

I posted a few thoughts about this in the restic forum recently: restic/restic#804

While it doesn't currently have ECC wrapping, HashBackup does support multiple destinations and can correct bad blocks in one destination using good blocks from another.

I've so far stayed away from doing ECC for reasons I mention in that post. But what did surprise me a bit is the probability of errors in a multi-TB situation; it was a lot higher than I thought, with numbers like 4% chance of a read error in a 5TB backup repo. I'm still not sure that the numbers are all correct, but still - 4% makes me a little uncomfortable when I know there are lots of people who write their HB backup to a USB drive and it's their only copy.
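
For context on that 4% figure, a back-of-the-envelope check (my numbers, not HashBackup's): at an unrecoverable-read-error rate of one per 10^15 bits, a common enterprise drive spec (consumer drives are often specced ten times worse), reading a 5 TB repo once gives roughly that probability.

```python
# Hedged estimate: chance of at least one unrecoverable read error when reading
# a 5 TB repo once, assuming a spec of 1 error per 1e15 bits (assumption).
import math

bits = 5e12 * 8          # 5 TB, in bits
ber = 1e-15              # unrecoverable bit error rate per the assumed spec
p = -math.expm1(bits * math.log1p(-ber))   # 1 - (1 - ber) ** bits, computed stably
print(f"{p:.1%}")        # ~3.9%
```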

Copies are important, however you have to make them. IMO, making copies is a simpler strategy than relying on sophisticated and potentially unreliable ECC tech using a single copy. Adding ECC to a multiple-copy strategy might also make sense, as long as a problem in the ECC tech doesn't somehow propagate to all copies, making them useless.

@VolatileCable

VolatileCable commented Aug 10, 2020

Just a heads-up for anyone considering using .par2 files: please don't use the outdated, awful par2cmdline client that ships with many distros, but use parpar, which is significantly faster.

500MiB test file
multipar 19.76s 85-90% cpu
par2cmd  52.49s 92% cpu
par2cmd  49.42s (with 128MiB instead of default 16MiB mem)
parpar   11.22s

par2cmdline also had various other bugs and shortcomings in my testing. The only downside for parpar is that it only creates parity files, but can't verify or repair them.

@tarruda

tarruda commented Aug 13, 2020

No, assuming the corruption of 1.A did not affect meta-data in a way rendering it unusable, the approach as I planned it out, should I face that situation, would be:

It should be easy to write a wrapper script that does the repairing automatically using this solution. I will integrate it into the wrapper I already use for borg, thanks @elho.

@dumblob

dumblob commented Aug 13, 2020

@tarruda would you have a link to your wrapper? Thanks!

@tarruda

tarruda commented Aug 14, 2020

@dumblob My wrapper has quite a bit more functionality than data recovery (which I still haven't implemented BTW). This is a summary of what my wrapper (~= 350 LOC python script) does:

  • Mostly configuration driven (I use python's own configparser module).
  • Allows maintaining multiple repositories and archives.
  • Integrates with rclone. Personally I use this to store repositories in dropbox/onedrive/gdrive.
  • Manages repository keys, ssh keys and rclone configuration encryption. I do this because I don't like using secret-tool (gnome keyring) to cache the passphrase, as it is easy to read from any process run as the current desktop user. Instead I use a combination of systemd-ask-password and keyctl to cache the passphrase in the root user's keyring. I also use this passphrase to encrypt the ssh key (one of my repositories is stored remotely via ssh) and the rclone config (it contains credentials for my dropbox account). This setup allows me to type the passphrase once every 24h (the expiry time I set with keyctl) and reuse the same passphrase for ssh/rclone/borg.

I haven't published it because I designed it with my own use of borg in mind, which is probably more complex than what most users need. Still, if you think it would be useful (as I said, I haven't implemented @elho's suggestion yet), I can create a project on GitHub and publish it to PyPI.

@dumblob

dumblob commented Aug 14, 2020

Most of the points apply to my use case as well. So, if you don't mind, feel free to create a repo (a PyPI record would then be the icing on the cake). I'll consider adjusting (or rewriting) it to work on Windows as well.

@tarruda

tarruda commented Aug 14, 2020

I should be able to do this later today, I will post a link here when it is ready.

@tarruda

tarruda commented Aug 14, 2020

@dumblob https://github.com/tarruda/syborg

I might implement the recovery wrapper suggested by @elho next week.

@dumblob

dumblob commented Jul 12, 2021

@tarruda did you manage to write some FEC code for Borg? I couldn't find it (did you forget to push some downstream branches?).

@G2G2G2G

G2G2G2G commented Jan 23, 2022

https://github.com/rfjakob/cshatag & https://packages.debian.org/stretch/shatag etc
While bit rot isn't really an issue, especially since today's HDDs pretty much prevent it, it's extremely easy to check/watch for and always has been.

@dumblob

dumblob commented Jan 23, 2022

while bitrot isn't really an issue, especially since todays hdds pretty much prevent it

Even if I strongly disagree with this bold & broad (and, going by the evidence, incorrect) statement, could we at least agree on the fact that the severity of bit rot (i.e. the real cost if it happens) is extremely high even if its incidence rate is low?

If so, could we at least push very hard to limit such damage to as little data as possible (considering the worst case, i.e. when bit rot appears in the most important data, e.g. some central metadata about the structure of our backups)?

@enkore
Contributor

enkore commented Jan 24, 2022

https://github.com/rfjakob/cshatag & https://packages.debian.org/stretch/shatag etc

That's only checksumming - Borg has multiple layers of those already.

@G2G2G2G

G2G2G2G commented Jan 25, 2022

I'm aware, and it still doesn't detect bitrot, which is the problem. I was showing how insanely easy it has been for decades.

@ThomasWaldmann
Member Author

#6584 is much easier / simpler, but might often solve the same problem.

@Sepero

Sepero commented Nov 25, 2023

I'm currently in the process of finding a new backup solution. Lack of parity is the one thing preventing me from setting up Borg immediately. Bup appears to be the only backup program that supports parity.

Recently I had a backup drive fail, leaving me with only 1 backup drive. I'm still in the process of getting the second backup re-established, but in the meantime I feel uneasy: I have to place full trust and confidence in the resilience of one backup drive. 1 backup is good, but 1 backup with a little parity is way, way better. As the terabytes of backup data increase, the probability of a bad bit or corrupt sector increases.

A file with 65536 corrupted bytes can be restored with 1) a little parity OR 2) another full copy.

I'm inclined to think having a little parity AND another full copy is a pretty good option.

@DavidOliver

@Sepero, in case it's of interest, Duplicacy offers erasure coding.

@ThomasWaldmann
Member Author

ThomasWaldmann commented Nov 26, 2023

It's not just about the backup tool having feature X (like parity, ECC, erasure coding, etc.), but also about whether it really helps in practice.

If you use USB disk(s), you really should have multiple ones and rotate them - then you don't need stuff like ECC because you have N-times redundancy anyway. If you back up to some remote server, just have multiple independent remote servers. Or combine local and remote.

That will help you if one backup medium goes away (dies of age, gets dropped, you lose it / it gets stolen, a crypto trojan encrypts it, server issues, provider gone, ...) - ECC would not help you at all with that.

ECC also does not help you in any case that goes beyond what it was designed for - e.g. if there is just a bit more corruption than it can deal with. This can be a real problem if there is no control over data distribution on the media (e.g. you can't know where a flash controller will put your data in a flash chip, what's close to what and what's not).

BTW, HDD and SSD controllers usually already use ECC codes internally, for whatever they are useful for.

@Sepero

Sepero commented Nov 26, 2023

An additional resource that may be useful for devs

Backblaze Open-sources Reed-Solomon Erasure Coding Source Code
