
TiCDC sink initialization should check topic or broker's max message bytes configuration for all kinds of protocols. #4041

Closed
3AceShowHand opened this issue Dec 23, 2021 · 0 comments · Fixed by #4036
Assignees
Labels
area/ticdc Issues or PRs related to TiCDC. type/bug The issue is confirmed as a bug.

Comments

3AceShowHand (Contributor) commented Dec 23, 2021

What did you do?

In v4.0.16, if a protocol other than open-protocol, such as canal-json, canal, or avro, is used to initialize the Kafka sink, the changefeed may hit a "message too large" error from the Sarama producer.

Furthermore, when a TiCDC cluster is upgraded from v4.0.14 to v4.0.16, upgrade compatibility must also be considered: pre-existing changefeeds should start and run normally after the upgrade.
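The fix implied by the title is to validate message sizes against the broker's or topic's max message bytes setting during sink initialization, for every protocol rather than only open-protocol. A minimal sketch in Go of that kind of guard (all names and the overhead constant here are hypothetical, not TiCDC's actual code): the effective limit would be fetched from the broker's `message.max.bytes` or the topic's `max.message.bytes` override when the sink starts, and each encoded message is checked before it is handed to the producer.

```go
package main

import (
	"fmt"
)

// checkMessageSize rejects a message the broker would refuse anyway.
// maxMessageBytes represents the effective limit discovered at sink
// initialization (broker message.max.bytes or topic max.message.bytes).
func checkMessageSize(key, value []byte, maxMessageBytes int) error {
	// Kafka counts the full record, not just key+value; the exact
	// per-record overhead depends on the record format version, so a
	// conservative margin is assumed here (hypothetical value).
	const recordOverhead = 100
	if len(key)+len(value)+recordOverhead > maxMessageBytes {
		return fmt.Errorf("message of %d bytes exceeds max message bytes %d",
			len(key)+len(value)+recordOverhead, maxMessageBytes)
	}
	return nil
}

func main() {
	// A 512-byte value fits comfortably under a 1024-byte limit.
	fmt.Println(checkMessageSize([]byte("k"), make([]byte, 512), 1024))
	// A 2048-byte value does not.
	fmt.Println(checkMessageSize([]byte("k"), make([]byte, 2048), 1024))
}
```

Performing this check once per message at encode time, for all protocols, surfaces the error inside TiCDC with a clear message instead of an opaque broker-side rejection.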

What did you expect to see?

The changefeed works normally, regardless of which protocol is used.

What did you see instead?

[CDC:ErrKafkaAsyncSendMessage]kafka: Failed to produce message to topic cdc-eps_rt_ods-ods_market_flw_lx_app_log_prod: kafka server: Message was too large, server rejected it to avoid allocation error."]

Versions of the cluster

Upstream TiDB cluster version (execute SELECT tidb_version(); in a MySQL client):

(paste TiDB cluster version here)

TiCDC version (execute cdc version):

v4.0.16