
Drainer should not directly retry long-running DDLs #1169

Open
lichunzhu opened this issue Jun 21, 2022 · 1 comment
Labels
feature-request This issue is a feature request

Comments

@lichunzhu (Contributor)

What did you do?

Used Drainer to replicate long-running DDLs, such as ADD INDEX, to a downstream TiDB.

What did you expect to see?

Drainer replicates these DDLs successfully.

What did you see instead?

Drainer fails with an i/o timeout and keeps retrying these DDLs.

Versions of the cluster

master(b0214a2)

@lichunzhu lichunzhu added the feature-request This issue is a feature request label Jun 21, 2022
@lichunzhu (Contributor, Author)

Root Cause

When Drainer executes a time-consuming DDL, especially ADD INDEX, the downstream connection returns no result until the DDL finishes. If the DDL takes longer than the syncer's read-timeout, Drainer fails with a timeout and executes the DDL again, which makes the situation even worse.

Workaround

For these long-running DDLs, we can execute them on a dedicated connection and watch the result asynchronously through `ADMIN SHOW DDL JOBS`.
