Resource limits do not work #392
Comments
When booting up k3s with the default settings, it logs:
I ran into this while working through https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/. The pod is created, but the `OOMKilled` status expected in the next section never happens:

```shell
kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example
....
kubectl get pod memory-demo-2 --namespace=mem-example
```
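For context, the linked manifest defines a `stress` container that deliberately allocates more memory than its limit allows. A sketch of what it looks like (values recalled from the upstream example, so treat them as approximate rather than authoritative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"   # the kernel should OOM-kill the container past this
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
```

Because the container asks for 250M against a 100Mi limit, the expected outcome is a repeating `OOMKilled` status on the pod.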
@mengyangGIT I am not able to reproduce the issue with the latest k3s version, here are my steps:
Result: I can see that the limit is honored correctly:
I also tried the example with exceeding the cpu limit:
which requests 100 cpu, and the pod didn't start as expected:
@joaovitor I was able to reproduce this case; the OOM killer doesn't seem to be invoked. However, I noticed that the container running the stress command is not exceeding the memory limit configured for the pod:
I was able to see the OOM killer being invoked in an RKE cluster with the same yaml file. cc @erikwilson
@joaovitor The issue is happening because swap is enabled on the system. If swap is enabled, the OOM killer will not be triggered until there is no memory left in swap.
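A quick way to check whether swap is active on the node (standard Linux tools; nothing here is k3s-specific):

```shell
# Show active swap devices; prints nothing if swap is off.
swapon --show

# The "Swap:" row of free(1) also reports swap totals and usage.
free -h

# Persistent swap entries (re-enabled on reboot) live in /etc/fstab.
grep -v '^#' /etc/fstab | grep swap || echo "no persistent swap entries"
```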
Closing, as this is expected behavior when swap is enabled.
CPU limits do not work because of the flag … I found that in kernel 3.10.0-x, the cpu subsystems in … I think it's a bug of kernel 3.10; to solve this problem, you can create a link from …
The fix is not working for me:
logs:
OS: Ubuntu 20.04
@pkoltermann can you confirm that it does not work when run outside of docker? I suspect docker may not be presenting all the correct cgroups to enable nested resource limits. |
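To see which cgroup hierarchies a containerized k3s actually has access to, generic Linux introspection commands can be run inside the container (a sketch; output varies between cgroup v1 and v2 hosts):

```shell
# Which cgroup hierarchies is this process attached to?
# On cgroup v1 this lists one line per controller (cpu, memory, ...);
# on cgroup v2 it is a single "0::/..." line.
cat /proc/self/cgroup

# Which controllers/mounts are visible under the cgroup filesystem?
ls /sys/fs/cgroup
```

If controllers such as `memory` or `cpu` are missing from this view inside the Docker container, nested resource limits cannot be enforced there.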
@brandond You are right, if I run it on the host machine it works. The question is how to make it work in docker? |
I would probably take this question to the k3d issue tracker. |
@galal-hussein I can only confirm that after disabling swap (it gets re-enabled after reboot in my case) and restarting the k3s service, the memory limits started working as expected. On Ubuntu I did the following:

1. Turn off all swaps
2. Restart k3s

After that, the memory-hungry pod was restarted each time it reached its memory limit.
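A sketch of those steps on Ubuntu (assuming k3s runs as the systemd service the install script creates; the `sed` line edits a system file, so review it before running):

```shell
# Turn off all active swap devices immediately.
sudo swapoff -a

# Keep swap off across reboots by commenting out swap entries in
# /etc/fstab (a backup is written to /etc/fstab.bak).
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab

# Restart k3s so memory limits are enforced with swap disabled.
sudo systemctl restart k3s
```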
Describe the bug
Expected behavior
expected: cpu 20%
but whatever I set the limit to, the CPU percentage is always 100%.
Additional context
OS: CentOS 7
kernel ver: 3.10.0-957.10.1.el7.x86_64
k3s ver: 0.4
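For reference, a minimal pod spec that should cap a container at 20% of one CPU looks roughly like this (a hedged sketch; the names and image are illustrative, patterned on the upstream Kubernetes CPU-limit example, not taken from this report):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo            # illustrative name
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress      # image used in the upstream k8s CPU example
    resources:
      limits:
        cpu: "200m"         # 200 millicores = 20% of one CPU
      requests:
        cpu: "100m"
    args: ["-cpus", "2"]    # ask for more CPU than the limit allows
```

With a working `cpu.cfs_quota_us` cgroup, the container should be throttled to about 20% of a core regardless of how much it tries to use.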