
Fix bug that mpp_task is not moved to cancel thread due to llvm SOO #5637

Merged
merged 6 commits into pingcap:master from soo_fix
Aug 17, 2022

Conversation

@windtalker (Contributor) commented Aug 17, 2022

What problem does this PR solve?

Issue Number: ref #5095, close #5638

Problem Summary:

What is changed and how it works?

After #5361, in MPPTaskManager::cancelMPPQuery we try to move each MPP task to a cancel thread, so the tasks can be destructed in parallel instead of one by one in the cancelMPPQuery thread.
However, due to SOO in llvm (llvm/llvm-project#32472), this move actually fails, and there is a chance that an MPP task is destructed inside the following cancel loop:

for (auto it = task_set->task_map.begin(); it != task_set->task_map.end();)
{
    fmt_buf.fmtAppend("{} ", it->first.toString());
    auto current_task = it->second;
    it = task_set->task_map.erase(it);
    thread_manager->schedule(false, "CancelMPPTask", [task = std::move(current_task), &reason] { task->cancel(reason); });
}

Destructing an MPP task inside the cancel loop means the task is destructed before all MPP tasks are cancelled, which may cause deadlock issues since there are dependencies between MPP tasks (especially in the case of local tunnels).
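
To make the failure mode concrete: libc++'s std::function stores small callables inline (SOO), and per the linked LLVM issue, moving such a std::function copies the inline target instead of moving it. A minimal standalone probe, not from this PR and with illustrative names, that shows the symptom on an affected libc++:

#include <cstdio>
#include <functional>
#include <memory>
#include <utility>

// Probe plays the role of the capturing lambda: it holds a shared_ptr, is
// small enough for the SOO buffer, and reports how many owners remain.
struct Probe
{
    std::shared_ptr<int> payload;
    void operator()() const { std::printf("use_count=%ld\n", payload.use_count()); }
};

int main()
{
    std::function<void()> f = Probe{std::make_shared<int>(42)};
    std::function<void()> g = std::move(f); // affected libc++: copies Probe instead of moving it
    g(); // prints use_count=2 on affected versions (the moved-from f still
         // owns a reference); a conforming move would leave use_count=1
    return 0;
}

In cancelMPPQuery, the extra reference held by the moved-from std::function is only dropped at the end of the loop iteration, so if it happens to be the last reference, the MPPTask destructor runs right there.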

This PR fixes the issue by:

  • Using a functor instead of a lambda as a workaround for the SOO issue
  • Saving the moved functor in a vector, to guarantee the MPP task is never destructed inside the cancel loop even if the functor move fails for some other unknown reason (a condensed sketch of the approach follows this list)
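
For reference, a self-contained sketch of the shape of the workaround, assembled from the fragments quoted later in this thread; MPPTask, MPPTaskPtr, and String are stand-ins here, and the real definition in dbms/src/Flash/Mpp/MPPTaskManager.cpp may differ in detail:

#include <memory>
#include <string>
#include <utility>

// Stand-ins so the sketch compiles on its own (assumptions, not TiFlash code).
struct MPPTask { void cancel(const std::string & /*reason*/) {} };
using MPPTaskPtr = std::shared_ptr<MPPTask>;
using String = std::string;

struct MPPTaskCancelFunctor
{
    MPPTaskPtr task;
    String reason;

    MPPTaskCancelFunctor(MPPTaskPtr && task_, const String & reason_)
        : task(std::move(task_))
        , reason(reason_)
    {}
    // std::function requires a CopyConstructible target, so the copy
    // constructor cannot simply be deleted (see the review question below).
    MPPTaskCancelFunctor(const MPPTaskCancelFunctor & other) = default;
    // The move constructor of this sketch actually transfers the shared_ptr,
    // so the reference created inside the cancel loop is released once the
    // functor reaches the cancel thread.
    MPPTaskCancelFunctor(MPPTaskCancelFunctor && other) noexcept = default;

    void operator()() const { task->cancel(reason); }
};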

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
    run some random failpoint tests
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

None

Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
@ti-chi-bot (Member) commented Aug 17, 2022

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • gengliqi
  • yibin87

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.

@ti-chi-bot ti-chi-bot added release-note-none Denotes a PR that doesn't merit a release note. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Aug 17, 2022
Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
dbms/src/Flash/Mpp/MPPTaskManager.cpp (outdated)
@ti-chi-bot ti-chi-bot added the status/LGT1 Indicates that a PR has LGTM 1. label Aug 17, 2022
Co-authored-by: Liqi Geng <gengliqiii@gmail.com>
// can be moved to the cancel thread. Meanwhile, also save the moved wrap in a vector to guarantee that even if cancel functors
// fail to move due to some other issues, they are still not destructed inside the loop
cancel_functors.push_back(MPPTaskCancelFunctor(std::move(current_task), reason));
thread_manager->schedule(false, "CancelMPPTask", std::move(cancel_functors[cancel_functors.size() - 1]));
Contributor commented:

Maybe take into consideration whether any exception could be thrown during the for loop.

@yibin87 (Contributor) left a comment:

Others LGTM.

@ti-chi-bot ti-chi-bot added status/LGT2 Indicates that a PR has LGTM 2. and removed status/LGT1 Indicates that a PR has LGTM 1. labels Aug 17, 2022
Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
@windtalker (Contributor, Author) commented:

/merge

@ti-chi-bot (Member) commented:

@windtalker: It seems you want to merge this PR, I will help you trigger all the tests:

/run-all-tests

You only need to trigger /merge once, and if the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes.

If you have any questions about the PR merge process, please refer to pr process.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot (Member) commented:

This pull request has been accepted and is ready to merge.

Commit hash: 98fcda0

@ti-chi-bot ti-chi-bot added the status/can-merge Indicates a PR has been approved by a committer. label Aug 17, 2022
@windtalker (Contributor, Author) commented:

/run-integration-test

@sre-bot (Collaborator) commented Aug 17, 2022

Coverage for changed files

Filename                                         Regions    Missed Regions     Cover   Functions  Missed Functions  Executed       Lines      Missed Lines     Cover    Branches   Missed Branches     Cover
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
DataStreams/CreatingSetsBlockInputStream.cpp         211                76    63.98%          11                 2    81.82%         226                68    69.91%         110                49    55.45%
Flash/Mpp/MPPTaskManager.cpp                         176               135    23.30%          18                10    44.44%         206               124    39.81%          78                57    26.92%
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TOTAL                                                387               211    45.48%          29                12    58.62%         432               192    55.56%         188               106    43.62%

Coverage summary

Functions  MissedFunctions  Executed  Lines   MissedLines  Cover
18378      8366             54.48%    211602  86465        59.14%

full coverage report (for internal network access only)

@ti-chi-bot ti-chi-bot merged commit 0933d34 into pingcap:master Aug 17, 2022
@ti-chi-bot (Member) commented:

In response to a cherrypick label: new pull request created: #5641.

public:
    MPPTaskPtr task;
    String reason;
    MPPTaskCancelFunctor(const MPPTaskCancelFunctor & other)
Contributor commented:

Why not declare it as `= delete`?

@@ -97,14 +125,32 @@ void MPPTaskManager::cancelMPPQuery(UInt64 query_id, const String & reason)
    LOG_WARNING(log, fmt::format("Begin cancel query: {}", query_id));
    FmtBuffer fmt_buf;
    fmt_buf.fmtAppend("Remaining task in query {} are: ", query_id);
    std::vector<std::function<void()>> cancel_functors;
Contributor commented:

guess we don't actually need this vector?

// remaining in the current scope, as a workaround we add a wrap (MPPTaskCancelFunctor) here to make sure `current_task`
// can be moved to the cancel thread. Meanwhile, also save the moved wrap in a vector to guarantee that even if cancel functors
// fail to move due to some other issues, they are still not destructed inside the loop
cancel_functors.push_back(MPPTaskCancelFunctor(std::move(current_task), reason));
Contributor commented:

Depending on the behavior of std::function's move is risky and hard to understand.

how about this?

thread_manager->schedule(false, "CancelMPPTask",
    [functor = new MPPTaskCancelFunctor(std::move(current_task), reason)] {
        std::unique_ptr<MPPTaskCancelFunctor>(functor)->run(); // do not need `operator()`, just `run()`
    });

IMO it's more stable than depending on some specific implementation of std::function.
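
A note on why this suggestion is immune to the SOO behavior: the lambda now captures only a raw pointer, which is trivially copyable, so even if std::function copies the closure instead of moving it, the copies merely alias the same heap-allocated functor. Ownership of the MPPTask is released only when the scheduled closure runs and the unique_ptr deletes the functor on the cancel thread. The trade-off is that the functor leaks if the closure is never invoked.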

Contributor (Author) commented:

Good idea. Since this PR is already merged, I will refine it in another PR.

Comment on lines +147 to 153
catch (...)
{
    thread_manager->wait();
    throw;
}

thread_manager->wait();
Contributor commented:

It's actually a finally:

finally {
    thread_manager->wait();
}
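
C++ has no finally, but the same guarantee can be written as a small RAII guard. A minimal sketch, not part of this PR, assuming thread_manager->wait() does not throw:

#include <utility>

// Runs the stored callable when the scope exits, whether the scope is left
// normally or via an exception.
template <typename F>
class ScopeExit
{
public:
    explicit ScopeExit(F fn_)
        : fn(std::move(fn_))
    {}
    ~ScopeExit() { fn(); }
    ScopeExit(const ScopeExit &) = delete;
    ScopeExit & operator=(const ScopeExit &) = delete;

private:
    F fn;
};

// In cancelMPPQuery the try/catch pair would then collapse to:
//     ScopeExit guard([&] { thread_manager->wait(); });
//     ... schedule the cancel functors ...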

@windtalker windtalker deleted the soo_fix branch January 30, 2023 12:50
Labels
needs-cherry-pick-release-6.2 release-note-none Denotes a PR that doesn't merit a release note. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. status/can-merge Indicates a PR has been approved by a committer. status/LGT2 Indicates that a PR has LGTM 2.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

mpptask is not moved to cancel thread as expected in MPPTaskManager::cancelMPPQuery
7 participants