
[fix](group commit) Pick make group commit cancel in time (#36249) #37404

Merged 2 commits into apache:branch-2.1 on Jul 9, 2024

Conversation

@mymeiyi (Contributor) commented Jul 7, 2024

pick #36249

## Proposed changes

If the group commit time interval is larger than the load timeout, and there
is no new client load to reuse the internal group commit load, the group
commit cannot be cancelled in time because it is stuck in a wait:
```
#0  0x00007f33937a47aa in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00005651105dbd05 in __gthread_cond_timedwait(pthread_cond_t*, pthread_mutex_t*, timespec const*) ()
#2  0x000056511063f385 in std::__condvar::wait_until(std::mutex&, timespec&) ()
#3  0x000056511063dc2e in std::cv_status std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > >(std::unique_lock<std::mutex>&, std::chrono::time_point<std::chrono::_V2::system_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > > const&) ()
#4  0x000056511063cedf in std::cv_status std::condition_variable::wait_until<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >(std::unique_lock<std::mutex>&, std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > > const&) ()
#5  0x0000565110824f48 in std::cv_status std::condition_variable::wait_for<long, std::ratio<1l, 1000l> >(std::unique_lock<std::mutex>&, std::chrono::duration<long, std::ratio<1l, 1000l> > const&) ()
#6  0x0000565113b5612a in doris::LoadBlockQueue::get_block(doris::RuntimeState*, doris::vectorized::Block*, bool*, bool*) ()
#7  0x000056513f900941 in doris::pipeline::GroupCommitOperatorX::get_block(doris::RuntimeState*, doris::vectorized::Block*, bool*) ()
#8  0x000056513c69c0b6 in doris::pipeline::ScanOperatorX<doris::pipeline::GroupCommitLocalState>::get_block_after_projects(doris::RuntimeState*, doris::vectorized::Block*, bool*) ()
#9  0x000056514009d5f1 in doris::pipeline::PipelineTask::execute(bool*) ()
#10 0x00005651400fb24a in doris::pipeline::TaskScheduler::_do_work(unsigned long) ()
```
@mymeiyi (Contributor, Author) commented Jul 7, 2024

run buildall

github-actions bot (Contributor) commented Jul 7, 2024

clang-tidy review says "All clean, LGTM! 👍"

@doris-robot

TeamCity be ut coverage result:
Function Coverage: 36.33% (9151/25191)
Line Coverage: 27.87% (74709/268082)
Region Coverage: 26.75% (38523/144019)
Branch Coverage: 23.45% (19519/83244)
Coverage Report: http://coverage.selectdb-in.cc/coverage/33bfbf06c616e27c60eab708f43da4068b5e818b_33bfbf06c616e27c60eab708f43da4068b5e818b/report/index.html

@mymeiyi (Contributor, Author) commented Jul 8, 2024

run buildall

github-actions bot (Contributor) commented Jul 8, 2024

clang-tidy review says "All clean, LGTM! 👍"

@doris-robot

TeamCity be ut coverage result:
Function Coverage: 36.32% (9149/25191)
Line Coverage: 27.86% (74681/268084)
Region Coverage: 26.73% (38503/144018)
Branch Coverage: 23.43% (19508/83244)
Coverage Report: http://coverage.selectdb-in.cc/coverage/7fd22c5faba6230c788f5b80f5637a7fc3324149_7fd22c5faba6230c788f5b80f5637a7fc3324149/report/index.html

@mymeiyi (Contributor, Author) commented Jul 8, 2024

run p0

@dataroaring merged commit 1e3ab0f into apache:branch-2.1 on Jul 9, 2024
19 of 21 checks passed
3 participants