Add cpu pinning to nodes configuration #3138
Wow! Can you elaborate more on the use case? While kind technically supports multi-node, it is aimed at https://kind.sigs.k8s.io/docs/contributing/project-scope/#p0-support-testing-kubernetes; it's not intended as a reasonable solution for performance testing.
Mmm, is there a comparable flag in podman, etc.? We've tried to minimize coupling to docker going forward so we can attempt to meet demands to support other tools (e.g. also nerdctl, ignite...). What's the expected user experience here? Users need to select CPUs for each node?
Is this correctly transitively applied to the "nested" containers? Under what environment have you tested this? (It's been a while since looking at this part of the system, and we have podman/docker, runc/crun, rootless/rootful, ...)
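One quick way to answer the "is it transitively applied" question is to inspect the effective CPU affinity from inside a kind node (or a pod running on it). This is a hedged sketch, not anything from the patch under discussion: `os.sched_getaffinity` reflects the cpuset the kernel actually enforces, so if docker's `--cpuset-cpus` propagates to the nested containers, the set should be restricted accordingly.

```python
import os

# CPUs the current process is actually allowed to run on.
# Inside a container pinned with --cpuset-cpus 2-3, this
# should report {2, 3} rather than every CPU on the host.
allowed = sorted(os.sched_getaffinity(0))
print(f"allowed CPUs: {allowed}")
```

Running this inside the node container vs. inside a pod on that node would show whether the restriction survives the extra nesting layer.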
Kubernetes usually scale-tests with multiple VMs or physical machines; this is a new one 😅
xref: #3131
Hi @BenTheElder, trying to give a bit of context on why we thought about doing this:
We are working on a CNI integration (Calico/VPP) and wanted a way to do functional testing of reasonably large clusters. The idea is not to do scale testing in the sense of measuring performance, but rather to make sure things keep working when we push cluster sizes beyond the usual 4 nodes. This also allows a variety of test scenarios (e.g. how does the CNI react to many nodes going down at the same time, etc.).
I guess so; I don't have a strong opinion there. We went that way just because it gave the most flexibility.
iirc in our testing this was transitively applied (@hedibouattour keep us honest if it wasn't the case 😄)
It was a pretty standard env:
I guess it's the appeal of container-ception 😂
I have done this in the past; it is an easy and cheap way to find "scale" bugs, and containers let you avoid developing mocks — you just kind of "miniaturize" everything. That said, I can't see how this can be generalized. In my experience, if you are building a scale-testing framework, it is going to evolve to support different options, so you'll be better off using kind as an API to create the containers and building your framework on top of it, so you can customize it.
Right, I agree that we probably don't have a full picture of the consequences this would have. Would a more generic option be preferable, e.g. one that lets users pass arbitrary runtime arguments?
No. It's an implementation detail that we exec docker, and it would be a nightmare to support this in the future. Consider if kind needs to start setting an argument that a user previously set, or if they conflict, or if we opt to switch to a client library.
We can support pinning nodes, but we need more investigation and some thought about the abstraction before committing. Typically we try to borrow from CRI if possible. We've been bitten many times by thin abstractions and compat nightmares. See e.g. #2875, where we're in a bind because we didn't shim this like kOps did.
What would you like to be added:
The possibility to pin nodes to cpus in kind clusters.
Why is this needed:
For large clusters (scale-testing purposes), the nodes overload the CPU, so we need to pin kind nodes to specific sets of CPUs during their creation.
As an example, on a 96-thread machine, kind clusters with Calico start to show instability above 30 nodes; this patch enables stability with 60 kind workers by pinning every node to a CPU.
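The assignment described above (60 nodes on a 96-thread machine, one CPU each) can be sketched as a small helper that divides the available CPUs among the nodes round-robin and emits strings in the format docker's `--cpuset-cpus` accepts. This is a hypothetical illustration, not code from the patch:

```python
def cpuset_for_nodes(num_nodes: int, total_cpus: int) -> list[str]:
    """Hypothetical helper: one --cpuset-cpus string per kind node."""
    if num_nodes > total_cpus:
        raise ValueError("more nodes than CPUs to pin them to")
    per_node = total_cpus // num_nodes  # CPUs allotted to each node
    sets = []
    for i in range(num_nodes):
        start = i * per_node
        end = start + per_node - 1
        # a single CPU is written "7"; a range is written "8-9"
        sets.append(str(start) if per_node == 1 else f"{start}-{end}")
    return sets

# 60 workers on a 96-thread machine: each node gets one dedicated CPU
print(cpuset_for_nodes(60, 96)[:3])  # → ['0', '1', '2']
```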
Docker allows this using the `--cpuset-cpus` flag. The easiest way is to add that to the node configuration, as in #3131.
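A node-level configuration along the lines of what #3131 proposes might look like the sketch below. The `cpuSet` field name is purely illustrative and is not an accepted kind API:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  cpuSet: "0-1"   # hypothetical field, would map to docker's --cpuset-cpus
- role: worker
  cpuSet: "2"
- role: worker
  cpuSet: "3"
```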