
unified solution for TCP memory limitation #365

Closed
wants to merge 4 commits into main from pr_sockmem

Conversation

lubinszARM
Contributor

What type of PR is this?

Features

What this PR does / why we need it:

The feature provides a unified solution for TCP memory limitation at both the cgroup and host levels.

Which issue(s) this PR fixes:

Special notes for your reviewer:

The feature includes 3 parts:
1. Set the limit value for the host net.ipv4.tcp_mem. The default value is 20% of the host's total memory (a sketch of this part follows below).
2. Do nothing for cgroupv2.
3. Set pod tcp_mem accounting for cgroupv1. The default value is the same as memory.limit_in_bytes.
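
To make part 1 concrete, here is a minimal sketch, assuming a standard Linux host, of computing a cap of 20% of total memory and writing it to net.ipv4.tcp_mem. This is not the code in this PR: the helper names and the min/pressure spacing are illustrative assumptions; only the procfs paths and the 20% default come from the description above.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// hostTCPMemFile is the standard procfs path for net.ipv4.tcp_mem.
const hostTCPMemFile = "/proc/sys/net/ipv4/tcp_mem"

// hostMemTotalKB reads MemTotal (in kB) from /proc/meminfo.
func hostMemTotalKB() (uint64, error) {
	data, err := os.ReadFile("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "MemTotal:") {
			fields := strings.Fields(line)
			if len(fields) < 2 {
				break
			}
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("MemTotal not found in /proc/meminfo")
}

// applyHostTCPMemLimit caps the tcp_mem "max" field at ratio of total
// memory. tcp_mem is expressed in pages; a 4 KiB page size is assumed.
func applyHostTCPMemLimit(ratio float64) error {
	totalKB, err := hostMemTotalKB()
	if err != nil {
		return err
	}
	maxPages := uint64(float64(totalKB)*ratio) * 1024 / 4096
	// min and pressure are derived from max purely for illustration.
	value := fmt.Sprintf("%d %d %d\n", maxPages/2, maxPages*3/4, maxPages)
	return os.WriteFile(hostTCPMemFile, []byte(value), 0o644)
}

func main() {
	// 20% of host total memory, per the PR's stated default.
	if err := applyHostTCPMemLimit(0.20); err != nil {
		fmt.Fprintln(os.Stderr, "apply tcp_mem limit:", err)
	}
}
```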

Signed-off-by: Robin Lu <robin.lu@bytedance.com>

codecov bot commented Nov 15, 2023

Codecov Report

Attention: 102 lines in your changes are missing coverage. Please review.

Comparison is base (ad438bc) 53.35% compared to head (a20579b) 53.43%.
Report is 15 commits behind head on main.

Files Patch % Lines
...ugins/memory/dynamicpolicy/policy_async_handler.go 0.00% 52 Missing ⚠️
...gins/memory/dynamicpolicy/sockmem/sockmem_linux.go 61.36% 30 Missing and 4 partials ⚠️
...md/katalyst-agent/app/options/qrm/memory_plugin.go 57.14% 6 Missing ⚠️
pkg/util/cgroup/manager/v1/fs_linux.go 0.00% 5 Missing and 1 partial ⚠️
...g/agent/qrm-plugins/memory/dynamicpolicy/policy.go 50.00% 3 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #365      +/-   ##
==========================================
+ Coverage   53.35%   53.43%   +0.07%     
==========================================
  Files         436      438       +2     
  Lines       47963    48310     +347     
==========================================
+ Hits        25592    25813     +221     
- Misses      19490    19579      +89     
- Partials     2881     2918      +37     
Flag Coverage Δ
unittest 53.43% <40.00%> (+0.07%) ⬆️

Flags with carried forward coverage won't be shown.


@lubinszARM lubinszARM force-pushed the pr_sockmem branch 2 times, most recently from 8e89c3f to cad3cb5 Compare November 15, 2023 09:57
@@ -245,6 +252,11 @@ func (p *DynamicPolicy) Start() (err error) {
go wait.Until(p.setMemoryMigrate, setMemoryMigratePeriod, p.stopCh)
}

if p.enableSettingSockMem {
Collaborator

Registering this via RegisterPeriodicalHandler would be better.
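
For reference, the pattern in the hunk above is k8s.io/apimachinery's wait.Until, which re-invokes a function on a fixed period until the stop channel closes. A minimal standalone illustration (the period and callback below are made up for the example, not taken from the patch):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})

	// Run the callback every 30 seconds until stopCh is closed,
	// mirroring how the policy starts its periodic workers.
	go wait.Until(func() {
		fmt.Println("periodic sockmem pass (illustrative)")
	}, 30*time.Second, stopCh)

	time.Sleep(65 * time.Second) // let a couple of iterations run
	close(stopCh)
}
```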

* 2, do nothing for cgroupv2.
* 3, set pod tcp_mem accounting for cgroupv1.
*/
func (p *DynamicPolicy) setSockMemLimit() {
Collaborator

Would it be better if we added a new handlers directory for RegisterPeriodicalHandler? @csfldf
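
For part 3 of the description, a rough sketch, again not the actual patch, of enabling per-pod TCP memory accounting under cgroupv1 by mirroring memory.limit_in_bytes into memory.kmem.tcp.limit_in_bytes; the cgroup path in main is a hypothetical example:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// setPodTCPMemAccounting copies the pod's memory.limit_in_bytes into
// memory.kmem.tcp.limit_in_bytes for a cgroup v1 memory cgroup, so TCP
// buffer memory is accounted against the same limit.
func setPodTCPMemAccounting(memCgroupPath string) error {
	limitFile := filepath.Join(memCgroupPath, "memory.limit_in_bytes")
	tcpLimitFile := filepath.Join(memCgroupPath, "memory.kmem.tcp.limit_in_bytes")

	limit, err := os.ReadFile(limitFile)
	if err != nil {
		return fmt.Errorf("read memory limit: %w", err)
	}
	value := strings.TrimSpace(string(limit))
	return os.WriteFile(tcpLimitFile, []byte(value), 0o644)
}

func main() {
	// Hypothetical pod-level memory cgroup path, for illustration only.
	path := "/sys/fs/cgroup/memory/kubepods/pod-example"
	if err := setPodTCPMemAccounting(path); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```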

@waynepeking348 waynepeking348 added enhancement New feature or request workflow/merge-ready merge-ready: code is ready and can be merged workflow/need-review review: test succeeded, need to review labels Nov 15, 2023
@lubinszARM lubinszARM force-pushed the pr_sockmem branch 3 times, most recently from 2d539c5 to 7da0e19 Compare November 15, 2023 13:25
Signed-off-by: Robin Lu <robin.lu@bytedance.com>