sink(ticdc): set max-message-bytes default to 10m #4036
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:

To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
/run-all-tests
rest lgtm
/run-all-tests
Codecov Report

Flags with carried forward coverage won't be shown. Click here to find out more.

@@             Coverage Diff              @@
##             master      #4036        +/-   ##
================================================
- Coverage   57.0741%   55.2870%   -1.7872%
================================================
  Files           478        486         +8
  Lines         56551      60043      +3492
================================================
+ Hits          32276      33196       +920
- Misses       20978      23498      +2520
- Partials      3297       3349        +52
@@ -345,7 +345,7 @@ func kafkaClientID(role, captureAddr, changefeedID, configuredClientID string) (
 	return
 }

-func validateMaxMessageBytesAndCreateTopic(admin kafka.ClusterAdminClient, topic string, config *Config) error {
+func validateMaxMessageBytesAndCreateTopic(admin kafka.ClusterAdminClient, topic string, config *Config, saramaConfig *sarama.Config) error {
This func seems strange at the moment. Can we split it up?
- getMaxMessageBytes is responsible for getting the right value
- createTopic is used to create the topic

This way we won't have to pass two configurations and modify them at the same time. The function has become very complicated: originally we didn't modify Sarama's configuration in this method, but now it is responsible not only for creating the topic but also for setting the Sarama configuration correctly, and that configuration is not directly related to creating the topic (it matters when syncing, but only as a pre-condition, so it can be separated). It does too many things, and I would prefer to keep the two concerns separate.
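A minimal sketch of the split being proposed, assuming stand-in types for the ones in this repository. The names getMaxMessageBytes and createTopic come from the list above; the ClusterAdminClient method and the rest of the details are illustrative only, not the code in this PR:

```go
package kafka

// Stand-in types so this sketch is self-contained; the real ClusterAdminClient
// and Config in this repository have different definitions.
type ClusterAdminClient interface {
	// BrokerMessageMaxBytes is a hypothetical lookup of the broker-level
	// message.max.bytes setting.
	BrokerMessageMaxBytes() (int, error)
}

type Config struct {
	// MaxMessageBytes is the sink-uri value, defaulting to 10M in this PR.
	MaxMessageBytes int
}

// getMaxMessageBytes is responsible only for getting the right value:
// the smaller of the user-configured limit and the broker's message.max.bytes.
func getMaxMessageBytes(admin ClusterAdminClient, config *Config) (int, error) {
	brokerLimit, err := admin.BrokerMessageMaxBytes()
	if err != nil {
		return 0, err
	}
	if config.MaxMessageBytes < brokerLimit {
		return config.MaxMessageBytes, nil
	}
	return brokerLimit, nil
}

// createTopic is responsible only for creating the topic; it no longer
// touches the Sarama configuration at all.
func createTopic(admin ClusterAdminClient, topic string, config *Config) error {
	// ...existing topic-creation logic, minus the saramaConfig mutation...
	return nil
}
```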
If there is a better way to get the value without mutating the configuration at the same time, that would work too; splitting the function is the only approach I can think of at the moment.
The Sarama configuration has already been patched once above from the Kafka configuration, yet we modify it again inside this function.
It is currently confusing from both the caller's and the test code's point of view.
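For illustration only, a hedged sketch of how the caller might look after such a split, continuing the stand-in types from the sketch above and assuming the usual github.com/Shopify/sarama import; with it, the Sarama config would be patched in exactly one visible place:

```go
// configureAndCreateTopic is a hypothetical caller: it patches the Sarama
// config once, then creates the topic.
func configureAndCreateTopic(
	admin ClusterAdminClient, topic string, config *Config, saramaConfig *sarama.Config,
) error {
	maxMessageBytes, err := getMaxMessageBytes(admin, config)
	if err != nil {
		return err
	}
	// The only place saramaConfig is mutated.
	saramaConfig.Producer.MaxMessageBytes = maxMessageBytes
	return createTopic(admin, topic, config)
}
```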
This is worth doing in another PR.
I think it would be better to do it in this PR; a simple split of the method should be enough.
I still think it would be better to do this after the release, to avoid introducing potential new problems.
/merge
This pull request has been accepted and is ready to merge. Commit hash: a1c0a4c
@3AceShowHand: Your PR was out of date; I have automatically updated it for you. At the same time, I will also trigger all tests for you: /run-all-tests

If the CI test fails, just re-trigger the failed test and the bot will merge the PR for you after the CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
In response to a cherrypick label: new pull request created: #4059.
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
In response to a cherrypick label: new pull request created: #4060.
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
In response to a cherrypick label: new pull request created: #4061.
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
In response to a cherrypick label: new pull request created: #4062.
In response to a cherrypick label: new pull request created: #4063.
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
* This is an automated cherry-pick of #4036
  Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
* remove craft.
* try to fix.
* try to fix.
* remove config.
* sink.
* add mq_sink_protocol.
* add new error.
* fix json.
* fix json test.
* fix kafka.
* fix kafka tets.
* remove kafka config.
* fix.
* tiny fix.
* fix.
* fix kafka test.
* fix kafka test.
* update toml

Co-authored-by: Ling Jin <7138436+3AceShowHand@users.noreply.github.com>
Co-authored-by: 3AceShowHand <jinl1037@hotmail.com>
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
* This is an automated cherry-pick of #4036
  Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
* remove config.
* fix kafka.
* This is an automated cherry-pick of #4074
  Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
* Resolve mq.
* remove unncessary mock impl.
* try fix .
* resolve conflict.

Co-authored-by: Ling Jin <7138436+3AceShowHand@users.noreply.github.com>
Co-authored-by: 3AceShowHand <jinl1037@hotmail.com>
What problem does this PR solve?

close #4041

Set the Kafka producer's default `max-message-bytes` to 10M, and use the minimum value among the sink-uri's `max-message-bytes`, the broker's `message.max.bytes`, and the topic's `max.message.bytes` to initialize the producer.

What is changed and how it works?

Set the default `max-message-bytes` to 10M and initialize the producer with the smallest of the three limits above (a sketch of this selection logic follows the check list below).

Check List
Tests
Code changes
Side effects
Related changes
Release note
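For reference, a self-contained sketch of the minimum-of-three selection described in this PR. The function name, the example numbers, and the assumption that 10M means 10*1024*1024 bytes are illustrative only; the actual logic lives in the Kafka producer code touched by this PR.

```go
package main

import "fmt"

// effectiveMaxMessageBytes picks the value the producer is initialized with:
// the smallest of the sink-uri max-message-bytes, the broker's
// message.max.bytes, and the topic's max.message.bytes.
func effectiveMaxMessageBytes(sinkURIMax, brokerMax, topicMax int) int {
	min := sinkURIMax
	if brokerMax < min {
		min = brokerMax
	}
	if topicMax < min {
		min = topicMax
	}
	return min
}

func main() {
	const defaultMaxMessageBytes = 10 * 1024 * 1024 // the new 10M default
	// Example: the broker allows 1M and the topic allows 8M, so the
	// producer must be initialized with 1M.
	fmt.Println(effectiveMaxMessageBytes(defaultMaxMessageBytes, 1*1024*1024, 8*1024*1024))
}
```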