Kurtosis schedules pods to the same node, even if multiple nodes are available #953
Oh, that's super weird; we don't touch the scheduling algorithm at all — we just throw Pods at Kubernetes and let it do its thing. I suspect it's related to your discussion on #952: because the resource limits aren't getting set, Kubernetes thinks "oh, these are very light Pods" and throws them all on the same node, when in reality they're very heavy. If you were to hack in a …
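To illustrate the hypothesis in the comment above: when `resources.requests` are set on a container, the Kubernetes scheduler accounts for that footprint when placing the Pod and will spill Pods onto other nodes once one fills up; without requests, every Pod looks "free" and can be packed onto a single node. The sketch below is a hypothetical manifest, not the spec Kurtosis actually generates — the names, image, and sizes are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ethereum-node        # hypothetical name
spec:
  containers:
    - name: client           # hypothetical container
      image: ethereum/client-go:stable
      resources:
        requests:            # what the scheduler reserves on a node
          cpu: "1"
          memory: 2Gi
        limits:              # hard cap; exceeding memory gets the Pod OOM-killed
          cpu: "2"
          memory: 4Gi
```

With requests like these set on each Pod, a node that can't fit another 1 CPU / 2Gi reservation is skipped and the next Pod lands elsewhere.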
And re: coming in the next 1-2 months ;)
What's your CLI version?
0.80.12
Description & steps to reproduce
I use DigitalOcean as my Kubernetes provider. The cluster has 2 nodes, with the maximum node count set to 6.
I used the config below to deploy a workload on this cluster. It's expected to spin up 8 pairs of Ethereum nodes.
However, most of these pods get killed because they run out of resources.
When inspecting the cluster, I can see that all the pods were scheduled onto the same node, which is not sufficient to run all of these containers:
I have a feeling that Kurtosis is somehow trying to handle pod scheduling itself instead of letting the Kubernetes scheduler do it.
Desired behavior
Inspect how many nodes are available and, based on that, distribute the node pairs round-robin across the different machines.
Working some magic with an autoscaler would be icing on top.
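For what it's worth, Kubernetes already offers a declarative way to get roughly round-robin spreading without implementing custom scheduling logic: Pod topology spread constraints. A minimal sketch, assuming the pods carry a common label (the `app: ethereum-node` selector here is a hypothetical placeholder):

```yaml
spec:
  topologySpreadConstraints:
    - maxSkew: 1                         # node pod-counts may differ by at most 1
      topologyKey: kubernetes.io/hostname  # spread across individual nodes
      whenUnsatisfiable: ScheduleAnyway    # prefer spreading, but don't block scheduling
      labelSelector:
        matchLabels:
          app: ethereum-node             # hypothetical label on the workload's pods
```

Combined with proper resource requests, this would let the standard scheduler balance the pairs across nodes, and a cluster autoscaler would then add nodes when the reserved capacity runs out.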
What is the severity of this bug?
Painful; this is causing significant friction in my workflow.