
Destination much bigger than source with btrfs and sxbackup 0.6.10 #37

Closed

dan-damian opened this issue Sep 20, 2016 · 10 comments

@dan-damian

Hello,

I've just set up a backup using sxbackup 0.6.10 over btrfs on two Ubuntu 14.04 machines (source and destination).

My source has a size of 769 GB as reported by df -h, and the destination is 1.1 TB, also reported by df -h. These two machines hold nothing more than the OS and the data I want to back up, so even after subtracting 5-6 GB for the OS on each machine, there is still a huge difference of about 340 GB. One peculiarity of the data: we are storing millions of small XML files (several KB each).
Is this difference normal, or did something go wrong during the first backup?
Also, the second and third incremental backups went well.

Kind regards,

Dan

@masc3d (Owner) commented Sep 20, 2016

btrfs metadata takes its toll, but this seems a bit excessive.
df is not very informative for btrfs.
Please post the output of btrfs filesystem usage and btrfs qgroup show for the source and destination volumes.

You need to enable quota for the latter, in case you haven't done so already.
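For reference, a minimal sketch of how quota could be enabled so that qgroup numbers become available; the path is taken from this thread and would need adjusting to the actual mount point:

```shell
# Enable quota tracking on the filesystem (once per filesystem):
sudo btrfs quota enable /mnt/btrfs-root

# Wait for the initial qgroup rescan to complete, so the numbers are accurate:
sudo btrfs quota rescan -w /mnt/btrfs-root

# List per-subvolume referenced/exclusive sizes:
sudo btrfs qgroup show /mnt/btrfs-root
```

The "No such file or directory" errors seen later in this thread are the typical symptom of running btrfs qgroup show before quota has been enabled.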

@dan-damian (Author) commented Sep 21, 2016

Hi masc3d! Thank you for your fast response!
I've done what you asked, but btrfs filesystem usage fails with "unknown token 'usage'" (the usage subcommand is apparently not available in my btrfs-progs version), so I've used btrfs filesystem df instead.
I also noticed that there are different versions of the sxbackup script on the source (0.5.9) and the destination (0.6.10).

#Destination
btrfs-sxbackup==0.6.10

#--------------

sysadmin@destination:/mnt/btrfs-root$ btrfs filesystem usage
: unknown token 'usage'

sysadmin@destination:/mnt/btrfs-root$ sudo btrfs filesystem df /mnt/btrfs-root/
Data, single: total=921.01GiB, used=890.54GiB
System, single: total=4.00MiB, used=144.00KiB
Metadata, single: total=212.01GiB, used=208.64GiB
unknown, single: total=512.00MiB, used=0.00

#--------------
drwxr-xr-x 1 root root 218 Sep 21 00:04 ./
drwxr-xr-x 1 root root 20 Sep 8 16:34 ../
-rw-r--r-- 1 root root 168 Sep 8 16:54 .btrfs-sxbackup
drwxr-xr-x 1 root root 166 Sep 8 16:19 @/
drwxr-xr-x 1 root root 16 Sep 8 16:23 @home/
drwxr-xr-x 1 root root 208 Sep 16 03:33 sx-20160912-151513-utc/
drwxr-xr-x 1 root root 208 Sep 16 22:14 sx-20160916-061504-utc/
drwxr-xr-x 1 root root 208 Sep 20 01:42 sx-20160919-200003-utc/
drwxr-xr-x 1 root root 208 Sep 21 00:04 sx-20160920-200005-utc/
sysadmin@destination:/mnt/btrfs-root$ sudo btrfs qgroup show -pcreFf sx-20160912-151513-utc/
ERROR: can't perform the search - No such file or directory
ERROR: can't list qgroups: No such file or directory
sysadmin@destination:/mnt/btrfs-root$

#Source

btrfs-sxbackup==0.5.9

#-----------------
sysadmin@source:/appdata/.sxbackup$ btrfs filesystem usage
: unknown token 'usage'

sysadmin@source:/appdata$ sudo btrfs filesystem df /appdata
Data, single: total=662.01GiB, used=556.89GiB
System, single: total=4.00MiB, used=112.00KiB
Metadata, single: total=221.01GiB, used=216.95GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

#-----------------
drwxr-xr-x 1 root root 206 Sep 21 00:00 ./
drwxr-xr-x 1 root root 226 Sep 20 22:56 ../
-rw-r--r-- 1 root root 68 Sep 8 16:51 .btrfs-sxbackup
drwxr-xr-x 1 root root 226 Sep 12 18:11 sx-20160912-151513-utc/
drwxr-xr-x 1 root root 226 Sep 16 09:11 sx-20160916-061504-utc/
drwxr-xr-x 1 root root 226 Sep 19 22:56 sx-20160919-200003-utc/
drwxr-xr-x 1 root root 226 Sep 20 22:56 sx-20160920-200005-utc/
sysadmin@source:/appdata/.sxbackup$ sudo btrfs qgroup show -pcreFf sx-20160912-151513-utc/
ERROR: can't perform the search - No such file or directory
ERROR: can't list qgroups: No such file or directory

@masc3d (Owner) commented Sep 22, 2016

That's not much of an issue; btrfs-sxbackup is not necessarily required or invoked on the remote side.
0.6.8 had issue #32, but in that case you'd probably see much larger snapshots.
You should be fine with both 0.6.9 and 0.6.10.

That's a really large amount of metadata. As a comparison, here are my stats for a volume hosting backups for 2 Linux systems:

Data, single: total=2.43TiB, used=2.41TiB
System, DUP: total=32.00MiB, used=336.00KiB
Metadata, DUP: total=11.00GiB, used=8.66GiB

You could use btrfs qgroup show (with quota enabled, as mentioned above) to list the size of individual destination snapshots, and post the output here.

@masc3d (Owner) commented Sep 22, 2016

I would also recommend mounting with compress=zlib on the destination to save some space.
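As a sketch, compression can also be enabled on an already-mounted btrfs filesystem without a reboot; note that it only affects data written from that point on (the mount point is the one from this thread):

```shell
# Remount the destination with zlib compression; existing data stays
# uncompressed until it is rewritten (e.g. by a defragment with -czlib):
sudo mount -o remount,compress=zlib /mnt/btrfs-root
```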

@dan-damian (Author) commented Sep 22, 2016

On the source, in /etc/fstab, I have:
UUID=7d3ec665-29ed-4052-8884-5bc1312eb6c8 /appdata btrfs defaults,noatime,autodefrag,space_cache,compress-force=lzo,subvol=@appdata 0 3
so it uses compress-force=lzo, but I didn't think that I should also specify a compression mechanism on the destination.

On the destination, in /etc/fstab, I have:
/dev/xvda1 /mnt/btrfs-root btrfs defaults 0 2

and the volume @appdata is copied to /mnt/btrfs-root. Might this be the reason for the difference? The fact that /mnt/btrfs-root was mounted without compression...
Would it be OK if I remount it with compression activated now, after I've already copied some volumes from the source?

@masc3d (Owner) commented Sep 22, 2016

Yes, this is most certainly the cause.
You can use zlib on the backup/destination server for better compression (than lzo).

You can activate it now, that's OK.
In order to compress the existing data you will have to run btrfs fi defragment -r -czlib <subvol_path>

As a side note, I would rather use compress instead of compress-force on the source for efficiency.
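Putting that advice together, the destination's fstab entry might look like the following. This is only a sketch based on the line posted earlier in the thread, with compress=zlib added:

```shell
# /etc/fstab on the destination (sketch):
/dev/xvda1  /mnt/btrfs-root  btrfs  defaults,compress=zlib  0  2
```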

@dan-damian (Author) commented Sep 23, 2016

Noted; I'll take the compress vs. compress-force advice into account.
Ok, now I have several incremental snapshots already:
drwxr-xr-x 1 root root 208 Sep 16 03:33 sx-20160912-151513-utc/
drwxr-xr-x 1 root root 208 Sep 16 22:14 sx-20160916-061504-utc/
drwxr-xr-x 1 root root 208 Sep 20 01:42 sx-20160919-200003-utc/
drwxr-xr-x 1 root root 208 Sep 21 00:04 sx-20160920-200005-utc/
drwxr-xr-x 1 root root 208 Sep 21 23:41 sx-20160921-200003-utc/

I've started the compression on the latest one:
sudo btrfs fi defragment -r -czlib sx-20160922-160004-utc &

After the compression is finished (I think it will take a while...), I suppose it's OK to delete the older subvolumes, correct?
I also suppose that, by compressing the latest snapshot, the older snapshots will be compressed as well, indirectly. Is this assumption correct?

@masc3d (Owner) commented Sep 23, 2016

I believe you could just run btrfs fi defragment -r -czlib /mnt/btrfs-root/ to make sure everything is compressed; no need to delete anything.

Old snapshots will be removed on every run according to the retention settings of the backup job, but you can also clean up manually using purge, which makes sure the latest snapshot stays in place, in order to prevent a full snapshot from being transmitted again.
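As a hedged example of the manual cleanup mentioned above (the exact invocation may vary between btrfs-sxbackup versions; the path is the one from this thread):

```shell
# Apply the job's configured retention rules without creating a new backup;
# the latest snapshot is kept so the next run can stay incremental:
sudo btrfs-sxbackup purge /mnt/btrfs-root
```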

@dan-damian (Author)

I've deleted older snapshots in /mnt/btrfs-root and now I only have:
drwxr-xr-x 1 root root 86 Sep 26 10:27 ./
drwxr-xr-x 1 root root 20 Sep 8 16:34 ../
-rw-r--r-- 1 root root 168 Sep 8 16:54 .btrfs-sxbackup
drwxr-xr-x 1 root root 166 Sep 8 16:19 @/
drwxr-xr-x 1 root root 16 Sep 8 16:23 @home/
drwxr-xr-x 1 root root 208 Sep 22 19:40 sx-20160922-160004-utc/

After I've run:
sudo btrfs fi defragment -r -czlib /mnt/btrfs-root/sx-20160922-160004-utc
which took several hours, my df -h output looks the same:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 1.6T 1.1T 449G 72% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 9.8G 4.0K 9.8G 1% /dev
tmpfs 2.0G 428K 2.0G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 9.9G 0 9.9G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvda1 1.6T 1.1T 449G 72% /home
/dev/xvda1 1.6T 1.1T 449G 72% /mnt/btrfs-root

Should I mount the partition with the compress=zlib option in /etc/fstab and restart the machine?

@masc3d (Owner) commented Sep 27, 2016

Yes, that makes sense. Maybe you also need a re-balance.
Free space accounting is quite complicated with btrfs; have a read here, e.g.
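A re-balance sketch with usage filters, which limits the work to partially-filled chunks and can return over-reserved metadata space to the allocator (the filter values and path are just examples):

```shell
# Rebalance data and metadata chunks that are at most 50% full:
sudo btrfs balance start -dusage=50 -musage=50 /mnt/btrfs-root

# Check progress from another shell:
sudo btrfs balance status /mnt/btrfs-root
```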

masc3d closed this as completed Oct 17, 2016