Only start SQL thread temporarily to WaitForPosition if needed #10104
Merged
Conversation
mattlord force-pushed the wait_for_source_fixes branch from c1005d0 to ecddb7c on April 16, 2022 at 05:11
It will then be stopped again after we've reached the desired position. This way the tablet repair will function normally and start replication again if it's a replica tablet and the IO and SQL threads are stopped. Signed-off-by: Matt Lord <mattalord@gmail.com>
mattlord force-pushed the wait_for_source_fixes branch from ecddb7c to cfb5aee on April 17, 2022 at 04:57
Signed-off-by: Matt Lord <mattalord@gmail.com>
mattlord force-pushed the wait_for_source_fixes branch 4 times, most recently from bf2d61b to d030da5 on April 17, 2022 at 05:29
Signed-off-by: Matt Lord <mattalord@gmail.com>
mattlord force-pushed the wait_for_source_fixes branch from d030da5 to 80b18c3 on April 17, 2022 at 16:11
…on exit. There is a race between hitting the position and stopping the SQL thread(s) again in MySQL, and the Vitess operations going on concurrently. We could end up with another case of only one of the IO or SQL threads being stopped, and the tablet repair never happening because of it. Signed-off-by: Matt Lord <mattalord@gmail.com>
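The race described above is closed by holding the TabletManager's mutex across the whole wait. As a minimal Go sketch (hypothetical names; not the actual Vitess types), the deferred restore runs before the lock is released, so no concurrent call can observe the temporarily-started SQL thread:

```go
package main

import (
	"fmt"
	"sync"
)

// replManager is a hypothetical stand-in for the TabletManager's
// replication state; the real code drives MySQL's IO/SQL threads.
type replManager struct {
	mu         sync.Mutex
	ioRunning  bool
	sqlRunning bool
}

func (m *replManager) waitForPosition() {
	m.mu.Lock()
	defer m.mu.Unlock()
	if !m.sqlRunning {
		m.sqlRunning = true // temporarily start the SQL thread
		// Deferred functions run LIFO, so this restore executes
		// before the mutex is released: no other caller can see
		// the mutated replication state.
		defer func() { m.sqlRunning = false }()
	}
	// ... block here until the replica reaches the position ...
}

func main() {
	m := &replManager{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			m.waitForPosition()
		}()
	}
	wg.Wait()
	// The SQL thread ends up back in its original (stopped) state.
	fmt.Println("io:", m.ioRunning, "sql:", m.sqlRunning)
}
```

Serializing through the one mutex is what makes the start-then-restore sequence safe; without it, two interleaved calls could leave only one of the two threads stopped.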
mattlord force-pushed the wait_for_source_fixes branch from b6d2347 to b3a7fb5 on April 17, 2022 at 17:44
mattlord requested review from deepthi, harshit-gangal and systay as code owners on April 17, 2022 at 18:37
Signed-off-by: Matt Lord <mattalord@gmail.com>
GuptaManan100 approved these changes on Apr 18, 2022
This is an amazing catch and fix! I messed up the first time around. At the time we thought we would only want to wait for a position if there was some way to reach it, but I see now that we mutated the state of replication, which this PR fixes! Thank you for this. 💯💕
mattlord added a commit to planetscale/vitess that referenced this pull request on Apr 21, 2022
…sio#10104)

After vitessio#9512 we always attempted to start the replication SQL_Thread(s) when waiting for a given position. The problem with this, however, is that if the SQL_Thread is running but the IO_Thread is not, then the tablet repair does not try and start replication on a replica tablet. So in certain states, such as when initializing a shard, replication may end up in a non-healthy state and never be repaired.

This changes the behavior so that:

1. We only attempt to start the SQL_Thread(s) if it's not already running
2. If we explicitly start the SQL_Thread(s) then we also explicitly reset it to what it was (stopped) as we exit the call

Because the caller should be/have a TabletManager, which has a mutex, this should ensure that the replication manager calls are serialized; and because we are resetting the replication state after mutating it, everything should work as it did before vitessio#9512, with the exception that when waiting we ensure the replica at least has the possibility of catching up.

Signed-off-by: Matt Lord <mattalord@gmail.com>
mattlord added a commit that referenced this pull request on Apr 24, 2022
…ded (#10123)

* Only start SQL thread temporarily to WaitForPosition if needed (#10104)

After #9512 we always attempted to start the replication SQL_Thread(s) when waiting for a given position. The problem with this, however, is that if the SQL_Thread is running but the IO_Thread is not, then the tablet repair does not try and start replication on a replica tablet. So in certain states, such as when initializing a shard, replication may end up in a non-healthy state and never be repaired.

This changes the behavior so that:

1. We only attempt to start the SQL_Thread(s) if it's not already running
2. If we explicitly start the SQL_Thread(s) then we also explicitly reset it to what it was (stopped) as we exit the call

Because the caller should be/have a TabletManager, which has a mutex, this should ensure that the replication manager calls are serialized; and because we are resetting the replication state after mutating it, everything should work as it did before #9512, with the exception that when waiting we ensure the replica at least has the possibility of catching up.

Signed-off-by: Matt Lord <mattalord@gmail.com>

* Use older replication status interface

As release-13.0 does not have this: #9853

Signed-off-by: Matt Lord <mattalord@gmail.com>
notfelineit pushed a commit to planetscale/vitess that referenced this pull request on May 3, 2022
…sio#561)

* Only start SQL thread temporarily to WaitForPosition if needed (vitessio#10104)

After vitessio#9512 we always attempted to start the replication SQL_Thread(s) when waiting for a given position. The problem with this, however, is that if the SQL_Thread is running but the IO_Thread is not, then the tablet repair does not try and start replication on a replica tablet. So in certain states, such as when initializing a shard, replication may end up in a non-healthy state and never be repaired.

This changes the behavior so that:

1. We only attempt to start the SQL_Thread(s) if it's not already running
2. If we explicitly start the SQL_Thread(s) then we also explicitly reset it to what it was (stopped) as we exit the call

Because the caller should be/have a TabletManager, which has a mutex, this should ensure that the replication manager calls are serialized; and because we are resetting the replication state after mutating it, everything should work as it did before vitessio#9512, with the exception that when waiting we ensure the replica at least has the possibility of catching up.

Signed-off-by: Matt Lord <mattalord@gmail.com>

* Use older replication status interface

As vitess-private does not have this: vitessio#9853

Signed-off-by: Matt Lord <mattalord@gmail.com>
Description
After #9512 we always attempted to start the replication SQL_Thread(s) when waiting for a given position. The problem with this, however, is that if the SQL_Thread is running but the IO_Thread is not, then the tablet repair does not try and start replication on a replica tablet. So in certain states, such as when initializing a shard, replication may end up in a non-healthy state and never be repaired.

This changes the behavior so that:

1. We only attempt to start the SQL_Thread(s) if it's not already running
2. If we explicitly start the SQL_Thread(s) then we also explicitly reset it to what it was (stopped) as we exit the call

Because the caller should be/have a TabletManager, which has a mutex, this should ensure that the replication manager calls are serialized; and because we are resetting the replication state after mutating it, everything should work as it did before #9512, with the exception that when waiting we ensure the replica at least has the possibility of catching up.
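The two behavior changes can be sketched in Go. This is a minimal illustration with hypothetical names, not the actual Vitess implementation: only start the SQL thread if it isn't running, and if we did start it, restore it to stopped on exit via `defer`:

```go
package main

import "fmt"

// mysqlReplica is a hypothetical stand-in for a MySQL replica's
// thread state; the real code drives MySQL's replication threads.
type mysqlReplica struct {
	sqlThreadRunning bool
}

func (r *mysqlReplica) startSQLThread() { r.sqlThreadRunning = true }
func (r *mysqlReplica) stopSQLThread()  { r.sqlThreadRunning = false }

// waitForPosition sketches the fixed behavior: (1) only start the
// SQL_Thread if it is not already running, and (2) if we started it,
// reset it to its prior (stopped) state as we exit the call.
func waitForPosition(r *mysqlReplica, pos string) {
	if !r.sqlThreadRunning {
		r.startSQLThread()
		defer r.stopSQLThread() // undo our mutation on exit
	}
	// ... block until the replica reaches pos ...
	fmt.Printf("reached %s; sql thread running: %v\n", pos, r.sqlThreadRunning)
}

func main() {
	r := &mysqlReplica{sqlThreadRunning: false}
	waitForPosition(r, "MySQL56/aaaa-bbbb:1-100")
	// The thread is stopped again, so tablet repair sees the replica
	// exactly as it was and can restart replication itself.
	fmt.Println("after wait, sql thread running:", r.sqlThreadRunning)
}
```

If the SQL thread was already running when the call began, the `if` branch is skipped entirely and the replica's state is never touched, which is what lets the tablet repair logic behave as it did before the wait.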
Related Issue(s)
Checklist