
Merge locks into one #3904

Merged 2 commits on Feb 25, 2022

Conversation

darionyaphet
Contributor

@darionyaphet darionyaphet commented Feb 16, 2022

What type of PR is this?

  • bug
  • feature
  • enhancement

What problem(s) does this PR solve?

Issue(s) number:

Description:

Use a single, coarser-grained lock to control concurrent operations.

How do you solve it?

Special notes for your reviewer, ex. impact of this fix, design document, etc:

Checklist:

Tests:

  • Unit test(positive and negative cases)
  • Function test
  • Performance test
  • N/A

Affects:

  • Documentation affected (Please add the label if documentation needs to be modified.)
  • Incompatibility (If it breaks the compatibility, please describe it and add the label.)
  • If it's needed to cherry-pick (If cherry-pick to some branches is required, please label the destination version(s).)
  • Performance impacted: Consumes more CPU/Memory

Release notes:

Please confirm whether to be reflected in release notes and how to describe:

ex. Fixed the bug .....

@darionyaphet darionyaphet added the ready-for-testing PR: ready for the CI test label Feb 16, 2022
@darionyaphet darionyaphet force-pushed the replace-lock branch 4 times, most recently from c764a72 to 1a1450b Compare February 21, 2022 09:31
@darionyaphet darionyaphet requested review from liuyu85cn and a team February 21, 2022 09:35
Contributor

@critical27 critical27 left a comment


Could you run some tests on the impact of this change, especially on latency?

@@ -32,7 +32,7 @@ void HBProcessor::process(const cpp2::HBReq& req) {
auto role = req.get_role();
LOG(INFO) << "Receive heartbeat from " << host
<< ", role = " << apache::thrift::util::enumNameSafe(role);

folly::SharedMutex::WriteHolder holder(LockUtils::lock());
Contributor

Since heartbeats are far more frequent than any other meta processor, do we really have to take the write lock here? What we write here relates only to a single host.

Contributor Author

Mainly to prevent inconsistencies when reading the data; taking the write lock is the stricter approach.

@darionyaphet
Contributor Author

Could you run some tests on the impact of this change, especially on latency?

The previous structure used segmented locks, and this PR replaces them with one large lock, so any added latency comes mainly from contention for that single lock. What actually needs to be evaluated, then, is the impact of multiple requests competing for the same lock.
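One way to probe the contention point above (a hypothetical micro-check, not code from this PR, with `std::shared_mutex` as an analogue of `folly::SharedMutex`): spawn several writer threads that all compete for one lock and confirm their critical sections fully serialize, since an exclusive holder admits only one thread at a time.

```cpp
#include <algorithm>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

// Hypothetical contention probe: N writers compete for one shared_mutex.
// Because write holders are exclusive, at most one thread is ever inside
// the critical section; a result of 1 demonstrates the serialization
// that replaces the old segmented locks.
int maxConcurrentWriters(int numThreads) {
  std::shared_mutex bigLock;
  std::mutex statsLock;  // protects the counters below
  int inside = 0;
  int maxInside = 0;

  std::vector<std::thread> threads;
  for (int i = 0; i < numThreads; ++i) {
    threads.emplace_back([&] {
      std::unique_lock<std::shared_mutex> holder(bigLock);  // exclusive
      {
        std::lock_guard<std::mutex> g(statsLock);
        ++inside;
        maxInside = std::max(maxInside, inside);
      }
      // ... the "write" would happen here ...
      {
        std::lock_guard<std::mutex> g(statsLock);
        --inside;
      }
    });
  }
  for (auto& t : threads) t.join();
  return maxInside;
}
```

A real latency evaluation would additionally time how long each thread waits to acquire the lock under a realistic request mix.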

liuyu85cn
liuyu85cn previously approved these changes Feb 24, 2022
Contributor

@liuyu85cn liuyu85cn left a comment


good job, LGTM

liuyu85cn
liuyu85cn previously approved these changes Feb 24, 2022
@@ -31,7 +31,7 @@ void CreateSnapshotProcessor::process(const cpp2::CreateSnapshotReq&) {
}

auto snapshot = folly::sformat("SNAPSHOT_{}", MetaKeyUtils::genTimestampStr());
- folly::SharedMutex::WriteHolder wHolder(LockUtils::snapshotLock());
+ folly::SharedMutex::WriteHolder holder(LockUtils::lock());
Contributor

A question: creating a snapshot or a backup is a time-consuming operation, so this lock will be held here for a long time. During that period, might other heartbeats fail?

Contributor Author

Good point. I don't know how long it takes in the case of a large amount of data.
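To make the reviewer's concern concrete (a hypothetical sketch, not code from this PR): while a long-running "snapshot" holds the exclusive lock, a "heartbeat" that needs the same lock within a short deadline cannot make progress. `std::shared_timed_mutex` stands in for `folly::SharedMutex` here so the timeout is directly observable.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <shared_mutex>
#include <thread>

// Hypothetical illustration: a slow "snapshot" holds the write lock,
// and a "heartbeat" that tries to take the same lock within 50 ms
// times out, because the snapshot holds it for ~200 ms.
bool heartbeatAcquiredDuringSnapshot() {
  std::shared_timed_mutex lock;
  std::atomic<bool> snapshotHoldsLock{false};

  std::thread snapshot([&] {
    std::unique_lock<std::shared_timed_mutex> holder(lock);
    snapshotHoldsLock = true;
    // Simulate a time-consuming snapshot while holding the write lock.
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
  });

  while (!snapshotHoldsLock) std::this_thread::yield();

  // The "heartbeat" gives up after 50 ms, well before the snapshot ends.
  bool ok = lock.try_lock_for(std::chrono::milliseconds(50));
  if (ok) lock.unlock();

  snapshot.join();
  return ok;
}
```

With the real (untimed) write holder, the heartbeat would simply block for the full duration instead of failing fast, which is exactly the latency risk being discussed.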

@Sophie-Xie Sophie-Xie linked an issue Feb 25, 2022 that may be closed by this pull request
panda-sheep
panda-sheep previously approved these changes Feb 25, 2022
Contributor

@panda-sheep panda-sheep left a comment


Good job

liuyu85cn
liuyu85cn previously approved these changes Feb 25, 2022
Contributor

@critical27 critical27 left a comment


Good job

@critical27 critical27 merged commit 43f2131 into vesoft-inc:master Feb 25, 2022
@darionyaphet darionyaphet deleted the replace-lock branch February 25, 2022 06:36
Successfully merging this pull request may close these issues.

Replace all locks in meta with a single lock