
Cache the cluster info to reduce the cost of converting kafka cluster… #471

Merged
merged 3 commits from optimize_strict into opensource4you:main on Jul 10, 2022

Conversation

chia7712 (Contributor) commented Jul 9, 2022

Key points:

  1. The producer does not keep refreshing the Cluster, so we can cache the converted result to keep repeated conversions from eating up bandwidth (a sketch follows this list).
  2. Make the RR refresh rate configurable via a parameter.
  3. Change the default RR refresh rate to match the interval used for updating the beans.

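A minimal sketch of the caching idea, under stated assumptions: the generic type T, the config key round.robin.refresh.interval.ms, and the conversion function are placeholders for illustration only, not the PR's actual identifiers.

```java
import java.time.Duration;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.common.Cluster;

// Sketch only: T stands in for whatever type the project converts the Kafka
// Cluster into; the real type and conversion live in the PR itself.
final class CachedClusterInfo<T> {
  private final Function<Cluster, T> convert;
  private Cluster lastCluster;
  private T cached;

  CachedClusterInfo(Function<Cluster, T> convert) {
    this.convert = convert;
  }

  // The producer swaps in a new Cluster object only when its metadata is
  // refreshed, so a reference check is enough to decide whether to convert again.
  synchronized T get(Cluster cluster) {
    if (cluster != lastCluster) {
      lastCluster = cluster;
      cached = convert.apply(cluster);
    }
    return cached;
  }

  // Hypothetical config key for the RR refresh interval; when unset it falls
  // back to the beans update period, matching point 3 above.
  static Duration rrRefreshInterval(Map<String, String> configs, Duration beansUpdatePeriod) {
    var raw = configs.get("round.robin.refresh.interval.ms");
    return raw == null ? beansUpdatePeriod : Duration.ofMillis(Long.parseLong(raw));
  }
}
```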
chia7712 requested a review from chinghongfang July 9, 2022 18:41
.map(ReplicaInfo::of)
.flatMap(Collection::stream)
.collect(Collectors.toUnmodifiableList());
if (info.isEmpty())
chia7712 (Contributor Author) commented on this diff:
@qoo332001 @garyparrot I removed this check because ClusterBean does not do such checks on its side; it leaves the judgment to whoever receives the data. This side now follows the same approach, which also simplifies the code.

chia7712 (Contributor Author) commented Jul 9, 2022

@chinghongfang After this PR is merged, could you please test it in the lab environment? I ran a simple test here: I put one unhealthy machine into the cluster, and StrictCostDispatcher performed better than the default partitioner. I'd still like you to verify it again with the lab equipment.

BTW, the default StrictCostDispatcher works well without any extra parameters, because it now relies on Producer metrics.
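For context, a minimal producer setup of the kind such a test would use; the broker address and the fully qualified class name of StrictCostDispatcher are assumptions here and should be taken from the project, not from this sketch.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class StrictCostDispatcherExample {
  public static void main(String[] args) {
    var props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-0:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // Assumed package path: take the real StrictCostDispatcher class name from the project.
    props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, "org.astraea.app.partitioner.StrictCostDispatcher");
    // No extra dispatcher parameters are set: the default relies on producer metrics.
    try (var producer = new KafkaProducer<String, String>(props)) {
      // send records as usual; the dispatcher picks partitions based on cost
    }
  }
}
```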

@chia7712 chia7712 merged commit a64ebe5 into opensource4you:main Jul 10, 2022
@chia7712 chia7712 deleted the optimize_strict branch November 6, 2022 09:07