
I/O throttles content-init and maybe backups #9276

Closed

kylos101 opened this issue Apr 13, 2022 · 6 comments
Labels
team: workspace Issue belongs to the Workspace team

Comments

@kylos101
Contributor

Bug description

Workspace start is slow when using iolimits; we think this is because content-init is being I/O limited.

We're unsure if backups are being limited.

Steps to reproduce

Try starting a workspace in a cluster where iolimit is defined in the configmap for ws-daemon.

Workspace affected

n/a

Expected behavior

Ideally we would not throttle workspaces on content-init or backup.

Example repository

n/a

Anything else?

I/O limiting was introduced here.

@utam0k
Contributor

utam0k commented Apr 13, 2022

First of all, the custom cgroup limits depend on the life cycle of the pod, which makes the timing difficult to control. But we have several cgroup levels: container/workspace/user.

<container-cgroup>  drwxr-xr-x 3 root      root
 └── workspace      drwxr-xr-x 5 gitpodUid gitpodGid
   └── user         drwxr-xr-x 5 gitpodUid gitpodGid

We are currently applying the restrictions to the container cgroup, which means all processes in the workspace are limited.

I know the timing is difficult to control, but it might be smarter to separate by process. It should be sufficient to apply the restrictions only to user-run processes: content-init and the like should belong to the workspace cgroup, while these custom cgroup restrictions apply to the user cgroup. That should solve these problems cleanly.
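For illustration, here is a minimal sketch (not ws-daemon's actual implementation) of what applying an io.max limit only to the nested user cgroup could look like under cgroup v2; the paths, function name, and device numbers are assumptions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// applyUserIOLimit writes a cgroup v2 io.max entry to the "user" leaf so that
// only user-run processes are throttled, leaving the parent workspace cgroup
// (content-init, backups, etc.) unrestricted.
func applyUserIOLimit(containerCgroup, device string, rbps, wbps int64) error {
	userCgroup := filepath.Join(containerCgroup, "workspace", "user")
	limit := fmt.Sprintf("%s rbps=%d wbps=%d", device, rbps, wbps)
	return os.WriteFile(filepath.Join(userCgroup, "io.max"), []byte(limit), 0644)
}

func main() {
	// Hypothetical values: throttle device 8:0 to 100 MiB/s read and write.
	if err := applyUserIOLimit("/sys/fs/cgroup/<container-cgroup>", "8:0", 100<<20, 100<<20); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}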

@utam0k
Contributor

utam0k commented Apr 13, 2022

Related code.

// Move every PID listed in the old group's cgroup.procs into the new cgroup.
for _, pidStr := range strings.Split(string(cgroupProcsBytes), "\n") {
	if pidStr == "" || pidStr == "0" {
		continue
	}
	if err := os.WriteFile(filepath.Join(newPath, "cgroup.procs"), []byte(pidStr), 0644); err != nil {
		log.WithError(err).Warnf("failed to move process %s to cgroup %q", pidStr, newGroup)
	}
}
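For context, a self-contained sketch of how such a loop might be driven: read the old group's cgroup.procs, then migrate each PID into the new group. This uses the standard library log package instead of ws-daemon's logger, and the paths and names are illustrative only:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// moveProcs migrates every process listed in oldPath/cgroup.procs into the
// cgroup at newPath by writing each PID to newPath/cgroup.procs.
func moveProcs(oldPath, newPath string) {
	cgroupProcsBytes, err := os.ReadFile(filepath.Join(oldPath, "cgroup.procs"))
	if err != nil {
		log.Fatalf("cannot read cgroup.procs: %v", err)
	}
	for _, pidStr := range strings.Split(string(cgroupProcsBytes), "\n") {
		if pidStr == "" || pidStr == "0" {
			continue
		}
		if err := os.WriteFile(filepath.Join(newPath, "cgroup.procs"), []byte(pidStr), 0644); err != nil {
			log.Printf("failed to move process %s to cgroup %q: %v", pidStr, newPath, err)
		}
	}
}

func main() {
	// Hypothetical paths under a cgroup v2 hierarchy.
	moveProcs("/sys/fs/cgroup/<container-cgroup>/workspace", "/sys/fs/cgroup/<container-cgroup>/workspace/user")
}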

@kylos101 kylos101 removed the status in 🌌 Workspace Team Apr 13, 2022
@kylos101 kylos101 changed the title I/O throttling for content-init and backups I/O throttles content-init and maybe backups Apr 13, 2022
@kylos101 kylos101 removed the priority: highest (user impact) Directly user impacting label Apr 13, 2022
@kylos101
Contributor Author

@aledbf are we still throttling content-init when using iolimits?

For example, I'm unsure if it was resolved by:
#9309 or #9309

I ask to see if it is something @utam0k should focus on tomorrow or not. Otherwise his next priority is the .NET issue.

@csweichel
Contributor

I still don't see how this should be the case. io.max is written to the cgroup of the workspace pod. Content init happens in the process tree of ws-daemon, which cannot be subject to those limits.
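One way to verify that claim (a hedged sketch, not existing tooling) is to read /proc/<pid>/cgroup for the content-init process and check whether its cgroup path sits under the workspace pod's cgroup at all; the prefix used below is an assumption:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cgroupOf returns the cgroup v2 path of a process as listed in /proc/<pid>/cgroup.
func cgroupOf(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		// cgroup v2 entries look like "0::/<path>".
		if strings.HasPrefix(line, "0::") {
			return strings.TrimPrefix(line, "0::"), nil
		}
	}
	return "", fmt.Errorf("no cgroup v2 entry for pid %d", pid)
}

func main() {
	path, err := cgroupOf(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// If the path is not under the workspace pod's cgroup, the pod's io.max cannot throttle it.
	fmt.Println("cgroup:", path)
	fmt.Println("under workspace pod cgroup:", strings.HasPrefix(path, "/kubepods"))
}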

@atduarte
Contributor

Since the IO limits were not working, do we still believe content-init and backup I/O were being throttled?

@kylos101
Contributor Author

Closing this. [1]

Plus, we're actively working on IO limiting here.
