
Commit b29dca4

adam900710 authored and kdave committed
btrfs: scrub: support subpage data scrub
Btrfs scrub is more flexible than the buffered data write path, as we can read unaligned subpage data into page offset 0.

This ability makes subpage support much easier: we only need to check each scrub_page::page_len and make sure we calculate the hash for [0, page_len) of the page.

There is one small thing to note: for the subpage case we still scrub sector by sector, which means we submit a read bio for each sector, resulting in the same number of read bios as on 4K page systems.

This can be considered a good thing if we want behaviour identical to 4K page systems, but it also means we waste the opportunity to submit larger bios using the 64K page size. That is a problem to consider in the future.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
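Editor's illustration (not part of the commit): a minimal userspace C sketch of the idea above, assuming a 64K page size and a 4K sectorsize. Each scrub page holds exactly one sector at page offset 0, so the checksum covers only the first sectorsize bytes instead of the whole page. The names fake_csum() and scrub_page_sketch, and the toy hash itself, are hypothetical stand-ins for crc32c via crypto_shash_digest(); only the [0, sectorsize) hashing range mirrors scrub_checksum_data().

/*
 * Userspace sketch only: demonstrates hashing [0, sectorsize) of a
 * page-sized buffer, as scrub_checksum_data() does after this commit.
 * fake_csum() is a toy stand-in for the real csum (e.g. crc32c).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SKETCH_PAGE_SIZE  65536u   /* e.g. 64K pages on arm64/ppc64 */
#define SKETCH_SECTORSIZE  4096u   /* btrfs sectorsize */

struct scrub_page_sketch {
	uint8_t data[SKETCH_PAGE_SIZE]; /* one sector stored at offset 0 */
	uint32_t expected_csum;         /* csum loaded from the csum tree */
};

/* Toy checksum standing in for crypto_shash_digest() over crc32c. */
static uint32_t fake_csum(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;

	for (size_t i = 0; i < len; i++)
		sum = sum * 31 + buf[i];
	return sum;
}

/* Mirrors the new behaviour: hash only one sector, then compare. */
static int check_one_sector(const struct scrub_page_sketch *spage)
{
	uint32_t csum = fake_csum(spage->data, SKETCH_SECTORSIZE);

	return csum != spage->expected_csum; /* 1 => checksum_error */
}

int main(void)
{
	static struct scrub_page_sketch spage;

	memcpy(spage.data, "subpage scrub", 13);
	spage.expected_csum = fake_csum(spage.data, SKETCH_SECTORSIZE);

	/* Bytes past the first sector must not influence the result. */
	memset(spage.data + SKETCH_SECTORSIZE, 0xaa,
	       SKETCH_PAGE_SIZE - SKETCH_SECTORSIZE);

	printf("checksum_error = %d\n", check_one_sector(&spage));
	return 0;
}

Built with e.g. cc -std=c99, this prints checksum_error = 0: the garbage written past the first sector does not affect the result, whereas hashing the full page (the pre-commit PAGE_SIZE behaviour) would flag an otherwise valid sector as bad.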
1 parent 53f3251 commit b29dca4

File tree

1 file changed, +7 -3 lines changed

fs/btrfs/scrub.c

Lines changed: 7 additions & 3 deletions
@@ -1795,11 +1795,15 @@ static int scrub_checksum_data(struct scrub_block *sblock)
 
 	shash->tfm = fs_info->csum_shash;
 	crypto_shash_init(shash);
-	crypto_shash_digest(shash, kaddr, PAGE_SIZE, csum);
 
-	if (memcmp(csum, spage->csum, sctx->fs_info->csum_size))
-		sblock->checksum_error = 1;
+	/*
+	 * In scrub_pages() and scrub_pages_for_parity() we ensure each spage
+	 * only contains one sector of data.
+	 */
+	crypto_shash_digest(shash, kaddr, fs_info->sectorsize, csum);
 
+	if (memcmp(csum, spage->csum, fs_info->csum_size))
+		sblock->checksum_error = 1;
 	return sblock->checksum_error;
 }
 