VTGate Healthcheck Cache Inconsistencies #9238

Closed
mattlord opened this issue Nov 15, 2021 · 1 comment · Fixed by #9237
Overview of the Issue

VTGate has a topology watcher that periodically reads tablet records from the topo server. Differences between the records in the topo server and the watcher's in-memory copy drive the addition and removal of healthcheck cache entries, which VTGate uses for query serving and exposes in commands like SHOW VITESS_TABLETS.

The synchronization of the watcher's tablet record cache and the various healthcheck-related cache maps had some edge cases that resulted in incorrect SHOW VITESS_TABLETS output, where "zombie" tablet records would never go away. This was reported in #8465. Unfortunately, the fix for that issue in #9106 introduced another potential inconsistency: a tablet that is deleted and re-added with the same alias and host:port ends up missing from the healthcheck cache. This could then lead the vtgate to believe that there is no serving primary for a shard and break query serving.

In both cases, once these internal data structures get out of sync, the only way to correct them is to bounce the vtgate.

Reproduction Steps

For the zombie tablet record, you can see the old test case here.

For the missing primary serving tablet record, I was able to reproduce it using the PlanetScale Operator with these steps:

  1. Delete the primary vttablet pod
  2. Compare vtctlclient ListAllTablets output with vtgate's SHOW VITESS_TABLETS

If the new primary vttablet pod happens to start with the same host:port, the new serving primary tablet is never seen and queries fail with this error:

mysql> select * from t1;
ERROR 1105 (HY000): target: mlordtest.-.primary: no healthy tablet available for 'keyspace:"mlordtest" shard:"-" tablet_type:PRIMARY'

When implementing #9106, I expected the healthcheck record to be re-added the next time the topology watcher checked the topology. However, if the topology watcher already has a tablet record with the same alias and host:port, it doesn't tell the healthcheck cache to do anything (no add, no replace), so the record ends up missing from the healthcheck cache.

deepthi (Member) commented Nov 15, 2021

There is another failure mode with #9106: when a tablet is restarted and comes back on a different host:port, the deletion from the healthcheck code races with the addition from the topology watcher, which can leave tablets missing from the healthcheck.
