Merge locks into one #3904
Conversation
Force-pushed from c764a72 to 1a1450b
You could run some tests on the latency impact of this change.
@@ -32,7 +32,7 @@ void HBProcessor::process(const cpp2::HBReq& req) {
  auto role = req.get_role();
  LOG(INFO) << "Receive heartbeat from " << host
            << ", role = " << apache::thrift::util::enumNameSafe(role);

  folly::SharedMutex::WriteHolder holder(LockUtils::lock());
Since heartbeats are far more frequent than any other meta processor's requests, do we really have to take a write lock here? What we write here only relates to a single host.
Mainly to prevent inconsistencies when reading data; taking the write lock is the stricter approach.
The structure was segmented locks before and is now one large lock, so the added latency comes mainly from lock contention. What actually needs to be evaluated is the impact of multiple requests competing for the same lock.
Force-pushed from 1a1450b to 6c47886
good job, LGTM
Force-pushed from e2f525c to 6c47886
@@ -31,7 +31,7 @@ void CreateSnapshotProcessor::process(const cpp2::CreateSnapshotReq&) {
  }

  auto snapshot = folly::sformat("SNAPSHOT_{}", MetaKeyUtils::genTimestampStr());
- folly::SharedMutex::WriteHolder wHolder(LockUtils::snapshotLock());
+ folly::SharedMutex::WriteHolder holder(LockUtils::lock());
A question:
Create snapshot and create backup are time-consuming operations, so this lock will be held for a long time.
During that period, may other heartbeats fail?
Good point. I don't know how time-consuming it is in the case of a large amount of data.
Force-pushed from fc701d3 to 8ef4b4b
Force-pushed from 8ef4b4b to 392ce8a
Good job
Force-pushed from 392ce8a to 2041892
Good job
What type of PR is this?
What problem(s) does this PR solve?
Issue(s) number:
Description:
Use a single larger lock to control concurrent operations instead of several per-subsystem locks.
How do you solve it?
Special notes for your reviewer, e.g. impact of this fix, design document, etc.:
Checklist:
Tests:
Affects:
Release notes:
Please confirm whether to be reflected in release notes and how to describe: