lance6716 changed the title from "scheduler failed to keep consistent relay status if master/worker restart in a order" to "scheduler failed to keep consistent relay status if master/worker restart in particular order" on Nov 16, 2021.
What did you do?
https://github.com/pingcap/ticdc/blob/bc6029e22fbf38612184765ee802431663e4fa10/dm/tests/new_relay/run.sh#L86
prerequisite
start-relay -s source1 worker1
Now the DM master has lost the relay status of that worker in memory; in other words, it will treat that worker as a free worker, but the worker is in fact still pulling relay logs.
The inconsistency becomes a problem when the master tries to bind another source to that worker. At that point the worker will report the same error as in pingcap/dm#2204.
What did you expect to see?
The integration test above will pass, which means that after the master and worker restart, the two workers of source2 have status bound + relay.
What did you see instead?
The two workers of source2 have status bound + free.
Versions of the cluster
DM version (run `dmctl -V` or `dm-worker -V` or `dm-master -V`):

master (5.3.0)
current status of DM cluster (execute `query-status <task-name>` in dmctl):

(paste current status of DM cluster here)