Disconnect dangling pollers on membership lost #6272
Conversation
When we Stop() a TaskListManager, we currently don't do anything with its pollers. Long pollers therefore keep waiting for tasks, which can cause a significant delay (up to 1m) on schedule-to-start.
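A minimal sketch of the idea behind the fix, assuming the matcher holds a cancelable context that Stop() cancels (the cancelCtx field appears in the diff below; cancelFunc and the constructor shape are illustrative, not the PR's exact code):

```go
package tasklist

import "context"

// TaskMatcher keeps a context that is cancelled on Stop(), so any poller
// blocked in a select on cancelCtx.Done() returns immediately instead of
// waiting out its long-poll timeout.
type TaskMatcher struct {
	cancelCtx  context.Context
	cancelFunc context.CancelFunc
}

func NewTaskMatcher() *TaskMatcher {
	ctx, cancel := context.WithCancel(context.Background())
	return &TaskMatcher{cancelCtx: ctx, cancelFunc: cancel}
}

// Stop unblocks every waiting poller by cancelling the shared context.
func (tm *TaskMatcher) Stop() {
	tm.cancelFunc()
}
```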
I'm going to add tests for the 2 other pollers; they'll be very similar, I just need to catch when we're forwarding the request.
service/matching/tasklist/matcher.go
@@ -501,6 +528,9 @@ func (tm *TaskMatcher) pollOrForward(
		EventName: "Poll Timeout",
	})
	return nil, ErrNoTasks
case <-tm.cancelCtx.Done():
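The hunk is truncated here; presumably the new case mirrors the timeout path above. A hedged guess at the continuation (the actual body may differ):

```go
case <-tm.cancelCtx.Done():
	// The matcher was stopped (e.g. tasklist ownership moved away);
	// unblock the poller right away rather than waiting out the long poll.
	return nil, ErrNoTasks
```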
We can chain the contexts, including cancelCtx, before forwarding polls at the tm.fwdr.ForwardPoll(ctx) call.
Did the chaining trick with ctxutil.WithPropagatedContextCancel. I think the we-don't-care-if-matcher-closes-or-client-disconnects semantics are now more explicit.
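For readers following along, a minimal sketch of what such cancel propagation can look like using only the standard library; the real helper is ctxutil.WithPropagatedContextCancel in the Cadence repo, and its exact name and signature may differ:

```go
package ctxutil

import "context"

// withPropagatedCancel returns a child of ctx that is additionally cancelled
// whenever cancelCtx is cancelled. A small goroutine bridges the two contexts.
func withPropagatedCancel(ctx, cancelCtx context.Context) (context.Context, context.CancelFunc) {
	chained, cancel := context.WithCancel(ctx)
	go func() {
		select {
		case <-cancelCtx.Done():
			cancel() // matcher is shutting down: unblock the forwarded poll
		case <-chained.Done():
			// caller finished or client disconnected: let the goroutine exit
		}
	}()
	return chained, cancel
}
```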
@@ -579,6 +563,18 @@ func (t *MatcherTestSuite) TestIsolationMustOfferRemoteMatch() {
	t.Equal(t.taskList.Parent(20), req.GetTaskList().GetName())
}

func (t *MatcherTestSuite) TestPollersDisconnectedAfterDisconnectBlockedPollers() {
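The new test body is elided above. A hypothetical shape such a test could take; method names like DisconnectBlockedPollers and the Poll signature are inferred from the test title and the surrounding suite, not copied from the PR:

```go
func (t *MatcherTestSuite) TestPollersDisconnectedAfterDisconnectBlockedPollers() {
	// Poll with a generous deadline so that only an explicit disconnect,
	// not the timeout, can unblock us quickly.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Disconnect blocked pollers shortly after the poll starts.
	go func() {
		time.Sleep(10 * time.Millisecond)
		t.matcher.DisconnectBlockedPollers() // hypothetical method, inferred from the test name
	}()

	start := time.Now()
	_, err := t.matcher.Poll(ctx) // hypothetical signature
	t.ErrorIs(err, ErrNoTasks)
	// The poll must return almost immediately, not after the 1m deadline.
	t.Less(time.Since(start), time.Second)
}
```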
It would be great to also simulate the scenario where tasklist ownership changes, and show that this change reduces task latencies by preventing hanging polls. The simulation framework currently doesn't support such an ownership change, but it should be straightforward to introduce.
Maybe. But it's also super easy to reproduce locally, and this is the case production hits all the time, because polling clients aren't immediately disconnected from an exited instance.
Manual local testing is good if you know what you're doing, but having it defined as another simulation scenario would be preferred. We will run those simulation scenarios as part of CI and ensure features/improvements like this aren't broken going forward.
Not a blocker for this PR; let's add it when we have cycles.
As a draft, I can't fault anything, lgtm
After all, we want to get no-tasks (ErrNoTasks) from the matcher here.
var wg sync.WaitGroup
wg.Add(1)

go func() {
I'd avoid this extra goroutine and put this logic inside the returned func callback.
I'm not sure I understand you: how would the chaining work then? We need to make sure we cancel the dependent (parent) context when cancelCtx is cancelled.
After looking closely, I couldn't see a way to achieve this without an extra goroutine to help propagate cancelCtx.Done().
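For what it's worth, if the codebase can target Go 1.21+, the standard library's context.AfterFunc expresses the same propagation without a hand-written goroutine (it manages the watcher internally). A sketch with illustrative variable names, as an alternative to the PR's approach rather than what it actually does:

```go
// chained is cancelled when either ctx or cancelCtx is done.
chained, cancel := context.WithCancel(ctx)
stop := context.AfterFunc(cancelCtx, cancel) // run cancel once cancelCtx is done
defer stop()   // detach the watcher if we return first
defer cancel() // release resources
```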
What changed?
When the TaskListManager shuts down, we now cancel long-polling pollers' requests.

Why?
To avoid a 1m spike on schedule-to-start when cadence-matching is restarted.

How did you test it?
Unit tests.

Potential risks
cadence-client will observe more empty tasks.

Release notes
If you previously saw a 1m schedule-to-start delay every time you restarted cadence-matching, this should be fixed now.

Documentation Changes