
Manage MySQL Replication Status States Properly #9853

Merged
9 commits merged into vitessio:main from the repl_status branch on Mar 11, 2022

Conversation

mattlord (Contributor) commented Mar 9, 2022

Description

We have treated the Slave_IO_Running state of Connecting as equivalent to Running in the replication status results we get from MySQL. I assume this was done to avoid flapping on low-traffic systems due to -slave_net_timeout reconnects, and to avoid doing a tablet repair when replication was actually healthy (with this PR we can distinguish the running and healthy states and take different actions depending on what we want).

After #9308 we properly estimate the replica lag when MySQL tells us it does not know, i.e. when it returns a NULL value for the Seconds_Behind_Master field. In some cases, however — it appears to happen when attempting the first connection to the source — MySQL reports a Seconds_Behind_Master value of 0 (meaning fully caught up, no lag) even though it is not connected to its replication source and has failed to reconnect. In other words, this is not a simple reconnect for some benign reason (e.g. -slave_net_timeout), but a reconnect with one or more failures/errors. This PR handles that case within Vitess by continuing to treat Connecting as equivalent to Running via ReplicationStatus.Healthy() — to prevent the noted flapping and errant/unnecessary tablet repairs — unless we hit an IO error the last time we tried to reconnect to the replication source.

Note: I think this is also what, in effect, caused the bug seen in #9788: the new replica tablet was considered up with replication running when in fact it had never been able to connect to its source (it was attempting its very first connection).
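
For illustration only, here is a minimal sketch of the rule described above. The type and field names (Status, IOState, SQLState, LastIOError, the State* constants) are stand-ins, not necessarily what the PR actually uses:

```go
package replication

// ReplicationState is an illustrative enum for the IO/SQL thread states
// reported by SHOW SLAVE STATUS (Running, Connecting, Stopped, ...).
type ReplicationState int

const (
	StateUnknown ReplicationState = iota
	StateStopped
	StateConnecting
	StateRunning
)

// Status is a simplified stand-in for a replication status struct; field
// names here are illustrative, not the actual Vitess ones.
type Status struct {
	IOState     ReplicationState
	SQLState    ReplicationState
	LastIOError string // last error recorded while (re)connecting to the source, if any
}

// Healthy treats a Connecting IO thread as equivalent to Running — to avoid
// flapping on -slave_net_timeout reconnects — unless the last reconnect
// attempt recorded an IO error, in which case the replica is not considered
// healthy even if MySQL reports Seconds_Behind_Master = 0.
func (s Status) Healthy() bool {
	ioHealthy := s.IOState == StateRunning ||
		(s.IOState == StateConnecting && s.LastIOError == "")
	return ioHealthy && s.SQLState == StateRunning
}
```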

Related Issue(s)

Checklist

  • Should this PR be backported? NO
  • Tests are not required (I don't think)
  • Documentation is not required

…failed

Signed-off-by: Matt Lord <mattalord@gmail.com>
…ationStatus

Signed-off-by: Matt Lord <mattalord@gmail.com>
Signed-off-by: Matt Lord <mattalord@gmail.com>
Signed-off-by: Matt Lord <mattalord@gmail.com>
Signed-off-by: Matt Lord <mattalord@gmail.com>
And make function names nicer (removing double context)

And add some helper functions

Signed-off-by: Matt Lord <mattalord@gmail.com>
Signed-off-by: Matt Lord <mattalord@gmail.com>
Signed-off-by: Matt Lord <mattalord@gmail.com>
Update vtadmin web protobufs

Signed-off-by: Matt Lord <mattalord@gmail.com>
mattlord (Contributor, Author) commented Mar 9, 2022

@deepthi, @GuptaManan100, and @shlomi-noach: I apologize for the review hassle but unfortunately creating a new PR was the only way forward (see here for details). In the process, I consolidated the changes since the last round of reviews — focused on safely modifying the protobuf to fit the new model — into a single commit: 65226ad

Thanks in advance! 🙇‍♂️

GuptaManan100 (Member) left a comment


I feel sad that you had to jump through so many hoops to get this in 😞

@deepthi deepthi merged commit a8c9636 into vitessio:main Mar 11, 2022
@deepthi deepthi deleted the repl_status branch March 11, 2022 17:56
mattlord added a commit to planetscale/vitess that referenced this pull request Apr 23, 2022
As release-13.0 does not have this:
  vitessio#9853

Signed-off-by: Matt Lord <mattalord@gmail.com>
mattlord added a commit that referenced this pull request Apr 24, 2022
…ded (#10123)

* Only start SQL thread temporarily to WaitForPosition if needed (#10104)

After #9512 we always attempted to start the replication SQL_Thread(s) when waiting for a given position. The problem is that if the SQL_Thread is running but the IO_Thread is not, the tablet repair does not try to start replication on a replica tablet. So in certain states, such as when initializing a shard, replication may end up in an unhealthy state and never be repaired.

This changes the behavior so that:
  1. We only attempt to start the SQL_Thread(s) if it's not already running
  2. If we explicitly start the SQL_Thread(s) then we also explicitly reset it to what it was (stopped) as we exit the call

Because the caller should be (or have) a TabletManager, which holds a mutex, the replication manager calls should be serialized; and because we reset the replication state after mutating it, everything should work as it did before #9512, with the exception that while waiting we ensure the replica at least has the possibility of catching up.

Signed-off-by: Matt Lord <mattalord@gmail.com>
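
To make the start/reset behavior described above concrete, here is a minimal sketch of the pattern under illustrative names — MysqlDaemon, SQLThreadRunning, StartSQLThread, StopSQLThread, and WaitSourcePos are stand-ins, not the actual Vitess API:

```go
package tabletmanager

import "context"

// MysqlDaemon is a pared-down, illustrative interface; the real Vitess
// interface has different names and signatures.
type MysqlDaemon interface {
	SQLThreadRunning(ctx context.Context) (bool, error)
	StartSQLThread(ctx context.Context) error
	StopSQLThread(ctx context.Context) error
	WaitSourcePos(ctx context.Context, pos string) error
}

// waitForPosition only starts the SQL thread if it is not already running,
// and, if it did start the thread, stops it again on exit so replication is
// left in exactly the state it was found in.
func waitForPosition(ctx context.Context, mysqld MysqlDaemon, pos string) error {
	running, err := mysqld.SQLThreadRunning(ctx)
	if err != nil {
		return err
	}
	if !running {
		if err := mysqld.StartSQLThread(ctx); err != nil {
			return err
		}
		// Restore the original (stopped) state on the way out; the caller
		// (a TabletManager holding a mutex) serializes these calls.
		defer func() { _ = mysqld.StopSQLThread(ctx) }()
	}
	return mysqld.WaitSourcePos(ctx, pos)
}
```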

* Use older replication status interface

As release-13.0 does not have this:
  #9853

Signed-off-by: Matt Lord <mattalord@gmail.com>
Development

Successfully merging this pull request may close these issues.

Bug Report: vttablet replica is considered healthy (serving) even when not connected to primary
3 participants