
storage: update BlobPrefetchRequest fields type from u32 to u64 #349

Merged · 3 commits · Mar 25, 2022

Conversation

@MercyMM commented Mar 24, 2022

Blobfs relies on the BlobPrefetchRequest structure, whose "offset" and "len" fields are both u32, so the maximum supported range is 4 GiB. But a blob file can exceed 4 GiB, so change the definition of "offset" and "len" to u64.

```rust
if size > 0 {
    return Err(einval!(format!(
        "load_chunks_on_demand: blob_id {:?}, size: {:?} is less than 0",
        blob_id, size
    )));
}
```
Collaborator:

Is it supposed to be if size < 0?

Author:

It is a mistake. I will fix it.

```rust
        (size - offset as i64)
    ))
})?;
if size < offset {
```
Collaborator:

What if size == offset?

Author:

The BlobPrefetchRequest.len will be 0; the storage mod should handle it correctly. Let me check to make sure.

```rust
})?;
if size < offset {
    return Err(einval!(format!(
        "blobfs: blob {:?} offset {:?} is larger than size {:?}",
```
Collaborator:

Can you please add "load_chunks_on_demand" to the error message, to make it easier to locate where things went wrong?

Author:

ok

```rust
    std::cmp::min(size as u32, u32::MAX - offset as u32),
);
} else {
    let _ = cache.reader().prefetch_blob_data_range(offset, size);
```
Collaborator:

By design, the prefetch table is supposed to have (offset, size) < u32::MAX.
Can we leave a warn! or info! here when (offset, size) > u32::MAX?

@MercyMM (Author) commented Mar 25, 2022:

After this patch, u64 is the design. This does not affect the default behavior, and if it does not cause potential bugs, I feel that a warn! is unnecessary.

@MercyMM MercyMM force-pushed the master branch 2 times, most recently from f1abc4a to 7cf52c0 Compare March 25, 2022 03:52
@eryugey (Contributor) commented Mar 25, 2022:
LGTM!

@eryugey (Contributor) commented Mar 25, 2022:

Besides, there are clippy warnings that should be fixed.

gexuyang added 3 commits March 25, 2022 14:07
In BlobMetaState.get_chunk_index_nocheck, if addr == self.chunks[last].compressed_offset, it will return an error. This patch adds an error log for it.

Signed-off-by: gexuyang <gexuyang@linux.alibaba.com>
Blobfs relies on the BlobPrefetchRequest structure, whose "offset" and "len" fields are both u32, so the maximum supported range is 4 GiB. But a blob file can exceed 4 GiB, so change the definition of "offset" and "len" to u64.

Signed-off-by: gexuyang <gexuyang@linux.alibaba.com>
Delete the unused debug_assert and the unused BlobFs.cfg.

Signed-off-by: gexuyang <gexuyang@linux.alibaba.com>
@liubogithub liubogithub merged commit ea0234f into dragonflyoss:master Mar 25, 2022