Remove unnecessary usage of FLUSH TABLES WITH READ LOCK #4849
Conversation
// If the master is still alive, then we need to demote it gracefully
// make it read-only, flush the writes and get the position
// By the time this method executes, the server should already have @@global.read_only=1
// Thus, we should be able to get a consistent master position.
func (mysqld *Mysqld) DemoteMaster() (rp mysql.Position, err error) {
if you approve of this idea, we could even get rid of the DemoteMaster method and associated RPC calls, since this becomes just a wrapper around MasterPosition. We can use the MasterPosition RPC definitions instead. I am not sure how to update the protobuf definitions everywhere though...
Let's change the call sites to use MasterPosition, but leave DemoteMaster as an empty shell for backward compatibility: if people upgrade Vitess, the call still needs to respond when an older vtgate calls into a newer vttablet.
We should also change the comment to say that this function is deprecated.
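A minimal sketch of what that backward-compatible shell could look like, assuming the Mysqld receiver, the mysql.Position return type, and a MasterPosition method matching the excerpt above (illustrative only, not the actual change in this PR):

// DemoteMaster is deprecated: new call sites should use MasterPosition directly.
// It is kept as a thin wrapper so that an older client calling into a newer
// vttablet still gets a valid response after an upgrade.
func (mysqld *Mysqld) DemoteMaster() (rp mysql.Position, err error) {
	// By the time this runs, the server should already have @@global.read_only=1,
	// so the current position is consistent and FTWRL is not needed.
	return mysqld.MasterPosition()
}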
@sougou updated; is this what you had in mind? Upon closer inspection, I believe the RPC call only exists for …
Slack discussion: https://vitess.slack.com/archives/C0PQY0PTK/p1556646127022300
During PlannedReparentShard, the steps vitess takes to get replication position are basically (on the old master):
1. set @@global.read_only=true
2. FLUSH TABLES WITH READ LOCK
3. UNLOCK TABLES
4. SELECT @@GLOBAL.gtid_executed
Steps 2 and 3 should not be necessary. After setting read_only=true, we should be able to get a consistent gtid_executed position. And FTWRL needs to wait for long-running SELECTs to finish, so it seems unnecessarily expensive. By eliminating steps 2 and 3 we could reduce the amount of time that vitess is unable to serve queries.
Signed-off-by: dleibovic <dleibovic@etsy.com>
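For illustration, here is a self-contained sketch of the simplified sequence (steps 1 and 4, with 2 and 3 dropped), written against database/sql with the go-sql-driver/mysql driver; the DSN is a placeholder and this is not the Vitess code path itself:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN; point this at the old master being demoted.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Step 1: stop accepting writes on the old master.
	if _, err := db.Exec("SET GLOBAL read_only = 1"); err != nil {
		log.Fatal(err)
	}

	// Steps 2 and 3 (FLUSH TABLES WITH READ LOCK / UNLOCK TABLES) are skipped:
	// once read_only is set, client writes can no longer advance gtid_executed.

	// Step 4: read the position that the new master must catch up to.
	var gtidExecuted string
	if err := db.QueryRow("SELECT @@GLOBAL.gtid_executed").Scan(&gtidExecuted); err != nil {
		log.Fatal(err)
	}
	fmt.Println("gtid_executed:", gtidExecuted)
}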