Introduce async framework and clean up prefetch related logic #467

Merged: 9 commits merged into dragonflyoss:master from the prefetch branch on Jun 10, 2022

Conversation

@jiangliu (Collaborator) commented on Jun 5, 2022

The main change is to build an async framework for storage/worker by using tokio Notify and a current-thread runtime; a rough sketch follows.
It also cleans up data prefetch logic left over from legacy code.
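
For illustration only, here is a minimal sketch (not this PR's actual code) of the runtime setup described above: each worker drives its tasks on a dedicated current-thread tokio runtime. The function name is hypothetical.

```rust
// Minimal sketch of a current-thread tokio runtime; the function name
// is illustrative, not taken from this PR.
use tokio::runtime::{Builder, Runtime};

fn create_worker_runtime() -> std::io::Result<Runtime> {
    // Unlike the default multi-threaded scheduler, `new_current_thread`
    // runs all tasks on the thread that calls `block_on`.
    Builder::new_current_thread().enable_all().build()
}

fn main() -> std::io::Result<()> {
    let rt = create_worker_runtime()?;
    rt.block_on(async {
        println!("running on a current-thread tokio runtime");
    });
    Ok(())
}
```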

@jiangliu requested review from hsiangkao, imeoer, changweige, and luodw, and removed the request for hsiangkao, on June 5, 2022 15:50
@jiangliu changed the title from "Create async framework and clean up prefetch related logic" to "Introduce async framework and clean up prefetch related logic" on Jun 5, 2022
@jiangliu force-pushed the prefetch branch 3 times, most recently from af281a1 to 443baae, on June 6, 2022 08:52
jiangliu added 6 commits June 7, 2022 17:23
Currently AsyncWorkerMgr is implemented with synchronous multi-threading,
which has some limitations. So let's build an async framework for
AsyncWorkerMgr by:
1) Implementing an unbounded multi-producer multi-consumer (mpmc) channel
on top of tokio::sync::Notify (sketched below).
2) Using a tokio current-thread Runtime instead of the default
multi-threaded Runtime, which will help support io-uring in the future.
3) Using work-stealing mode to dispatch tasks.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
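
To make item 1) concrete, here is a hedged sketch of an unbounded mpmc channel built from tokio::sync::Notify plus a locked queue. Names and details are illustrative, not this PR's actual implementation.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use tokio::sync::Notify;

/// Illustrative unbounded mpmc channel: any thread may send, and any
/// number of async tasks may call `recv` concurrently.
struct Channel<T> {
    queue: Mutex<VecDeque<T>>,
    notify: Notify,
}

impl<T> Channel<T> {
    fn new() -> Arc<Self> {
        Arc::new(Self {
            queue: Mutex::new(VecDeque::new()),
            notify: Notify::new(),
        })
    }

    /// Senders never block or await: the queue is unbounded.
    fn send(&self, msg: T) {
        self.queue.lock().unwrap().push_back(msg);
        // Wake one waiting receiver; if none is waiting, a permit is
        // stored so the next `notified().await` returns immediately.
        self.notify.notify_one();
    }

    /// Receivers re-check the queue after every wakeup, which keeps the
    /// loop correct with multiple concurrent consumers.
    async fn recv(&self) -> T {
        loop {
            if let Some(msg) = self.queue.lock().unwrap().pop_front() {
                return msg;
            }
            self.notify.notified().await;
        }
    }
}
```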
Rename AsyncRequestState to AsyncPrefetchState.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
Move the network bandwidth rate limiter from AsyncWorkerMgr::send() to
AsyncWorkerMgr::run().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
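
The point of this move is that send() should never block the caller; throttling belongs in the worker loop. A simplified sketch of the idea, using a hypothetical token bucket and a plain tokio mpsc channel in place of the real AsyncWorkerMgr internals:

```rust
use std::time::Duration;
use tokio::sync::mpsc::UnboundedReceiver;

/// Hypothetical request type; `len` is the number of bytes to fetch.
struct PrefetchRequest {
    len: u64,
}

/// Very simplified token bucket; tokens stand for bytes of bandwidth.
struct TokenBucket {
    tokens: u64,
    capacity: u64,
}

impl TokenBucket {
    /// Wait until enough tokens are available, refilling periodically.
    async fn acquire(&mut self, amount: u64) {
        let amount = amount.min(self.capacity); // never wait forever
        while self.tokens < amount {
            tokio::time::sleep(Duration::from_millis(100)).await;
            self.tokens = (self.tokens + self.capacity / 10).min(self.capacity);
        }
        self.tokens -= amount;
    }
}

/// The consumer loop throttles; producers enqueue without blocking.
async fn run(mut rx: UnboundedReceiver<PrefetchRequest>, mut bucket: TokenBucket) {
    while let Some(req) = rx.recv().await {
        bucket.acquire(req.len).await; // rate limit in run(), not send()
        // ... perform the actual prefetch I/O here ...
    }
}
```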
Add a test case for the network bandwidth rate limiter.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
1) Group the current functions as prefetch-related.
2) Centralize rate limiting for prefetch requests.
3) Prepare to support other, non-prefetch request messages (illustrated below).

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
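
As a rough illustration of where this is heading (the type and variant names are hypothetical, not this PR's actual definitions): prefetch messages are grouped so the rate limiter applies to them in one place, while future message kinds can bypass it.

```rust
/// Hypothetical message type for the worker channel.
enum AsyncMessage {
    /// Prefetch requests all pass through the centralized rate limiter.
    Prefetch { blob_id: String, offset: u64, len: u64 },
    /// Placeholder for future non-prefetch messages that skip the limiter.
    Shutdown,
}

/// Only prefetch traffic consumes bandwidth tokens.
fn needs_rate_limit(msg: &AsyncMessage) -> bool {
    matches!(msg, AsyncMessage::Prefetch { .. })
}
```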
Syntax-only refinements to fscache; there should be no functional
changes.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
@liubogithub (Collaborator) left a comment


Can you please elaborate on why mpmc is needed here?

I see that mpsc could be used to serve two async workers, but why do we need two workers if it's already async?

@jiangliu (Collaborator, Author) commented on Jun 9, 2022

Can you please elaborate on why mpmc is needed here?

I see that mpsc could be used to serve two async workers, but why do we need two workers if it's already async?

Currently we have enabled the async runtime framework, but most of the internal implementation is still synchronous.
So we still need multi-threading here until we convert the whole stack to async mode.
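
A sketch of the point being made, reusing the Notify-based `Channel` sketch from the commit message above (all names illustrative): several OS threads each run a current-thread runtime and consume from one shared queue, so a still-synchronous handler blocking one thread does not stall the others. That is why the channel needs multiple consumers (mpmc).

```rust
use std::sync::Arc;
use std::thread;
use tokio::runtime::Builder;

/// Stand-in for the still-synchronous blob I/O path.
fn handle_sync(worker: usize, req: u32) {
    println!("worker {worker} handling request {req}");
}

/// Spawn `count` worker threads, each with its own current-thread
/// runtime, all pulling from one shared channel -- hence mpmc.
/// `Channel` is the Notify-based sketch shown earlier.
fn spawn_workers(channel: Arc<Channel<u32>>, count: usize) {
    for id in 0..count {
        let ch = channel.clone();
        thread::spawn(move || {
            let rt = Builder::new_current_thread().enable_all().build().unwrap();
            rt.block_on(async move {
                loop {
                    let req = ch.recv().await;
                    // A blocking, synchronous handler here only stalls
                    // this worker thread, not its siblings.
                    handle_sync(id, req);
                }
            });
        });
    }
}
```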

jiangliu added 3 commits June 9, 2022 17:47
The data prefetch related code has been refactored several times,
and it has become a little complex. So go over the code again to
remove some legacy code.

The most important change is to use a reference count to track prefetch
status instead of an on/off flag; the on/off flag gets messy when
dealing with blob sharing/reuse.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
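
The refcount idea, as a hedged sketch (field and method names are hypothetical): every image sharing a blob bumps a counter when it starts prefetching and drops it when done, so prefetch stays active exactly as long as someone still needs it.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Hypothetical per-blob prefetch state tracked by reference count.
struct BlobPrefetchState {
    active_users: AtomicU32,
}

impl BlobPrefetchState {
    /// Another image/user starts prefetching this (possibly shared) blob.
    fn start(&self) {
        self.active_users.fetch_add(1, Ordering::AcqRel);
    }

    /// Returns true only when the *last* user stops, so a shared blob
    /// keeps prefetching until everyone is done -- the case an on/off
    /// flag handles badly.
    fn stop(&self) -> bool {
        self.active_users.fetch_sub(1, Ordering::AcqRel) == 1
    }

    fn is_active(&self) -> bool {
        self.active_users.load(Ordering::Acquire) > 0
    }
}
```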
Add a picture of the data prefetch architecture.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
Flush pending prefetch requests when closing the fscache fd; otherwise
blob GC will be blocked.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
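
Sketching the intent (all names hypothetical, not this PR's actual code): when an fscache fd is closed, any prefetch requests still queued for that blob are dropped so they cannot pin the blob and block garbage collection.

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

/// Hypothetical queued prefetch request.
struct PrefetchReq {
    blob_id: String,
}

/// Hypothetical pending-request queue inside the worker manager.
struct WorkerQueue {
    pending: Mutex<VecDeque<PrefetchReq>>,
}

impl WorkerQueue {
    /// Drop every queued request for `blob_id`; in the real code path
    /// each dropped request would also release its reference on the
    /// blob, which is what unblocks blob GC.
    fn flush_blob(&self, blob_id: &str) {
        self.pending.lock().unwrap().retain(|r| r.blob_id != blob_id);
    }
}
```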
@liubogithub liubogithub merged commit eca93b5 into dragonflyoss:master Jun 10, 2022