
Resource limits does not work #392

Closed
mengyangGIT opened this issue Apr 25, 2019 · 12 comments
Assignees
Labels
kind/bug Something isn't working

Comments

@mengyangGIT

mengyangGIT commented Apr 25, 2019

Describe the bug

  1. Deploy a pod with a resource limit of cpu: 200m.
  2. Run the command "while true" in that container.
  3. Check the CPU percentage with top.

Expected behavior
Expected: CPU at 20%.
But whatever value I set for the limit, the CPU percentage is always 100%.
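The 20% expectation follows from how a Kubernetes millicore limit maps onto the kernel's CFS quota. A quick sketch of the arithmetic (illustrative only, not k3s code):

```python
# A CPU limit of 200m maps to a CFS quota of 20000us against the default
# 100000us period, i.e. 20% of one core -- the figure expected in `top`.
def cfs_quota_us(cpu_limit_millicores, period_us=100000):
    """Convert a Kubernetes millicore CPU limit to a CFS quota in microseconds."""
    return cpu_limit_millicores * period_us // 1000

quota = cfs_quota_us(200)
print(quota)                  # 20000
print(100 * quota // 100000)  # 20 -> the expected CPU percentage
```

If k3s never writes that quota (see the cpu.cfs_period_us discussion below), the container is unthrottled and top shows 100%.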

Additional context

OS: centos 7
kernel ver: 3.10.0-957.10.1.el7.x86_64
k3s ver: 0.4

@deniseschannon deniseschannon added the kind/bug Something isn't working label Apr 25, 2019
@ibuildthecloud ibuildthecloud added this to the v0.6.0 milestone Apr 25, 2019
@deniseschannon deniseschannon modified the milestones: v0.6.0, v0.7.0 May 28, 2019
@Duske

Duske commented May 29, 2019

When booting up k3s with default settings, it logs Disabling CPU quotas due to missing cpu.cfs_period_us.
Maybe that's related to this issue and helps.
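That warning comes from a startup probe for the CFS period file; a minimal sketch of that kind of check (hypothetical, not the actual k3s source):

```python
import os

def has_cfs(cpu_cgroup_dir):
    """Return True if the CFS quota file is present under the cpu cgroup
    mount (e.g. /sys/fs/cgroup/cpu on cgroup v1). When a check like this
    fails, k3s disables CPU quotas, matching the warning above."""
    return os.path.isfile(os.path.join(cpu_cgroup_dir, "cpu.cfs_period_us"))
```

On a healthy cgroup v1 host, has_cfs("/sys/fs/cgroup/cpu") would return True; if the path k3s derives for the cpu subsystem is wrong, the file is "missing" even though the kernel supports CFS bandwidth control.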

@erikwilson erikwilson modified the milestones: v0.7.0, v1.0 - Backlog Jul 1, 2019
@joaovitor

I ran into this while following https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/

The pod is created, but the OOMKilled status expected in the next section of that task never happens:

kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example

....

kubectl get pod memory-demo-2 --namespace=mem-example

@galal-hussein
Contributor

@mengyangGIT I am not able to reproduce the issue with the latest k3s version. Here are my steps:

  • start k3s server
  • install metrics server
  • run the following yaml:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"
  • run kubectl top pods -n cpu-example

Result:

I can see that the limit is honored correctly:

✗ k top pods cpu-demo -n cpu-example
NAME       CPU(cores)   MEMORY(bytes)   
cpu-demo   991m         1Mi

I also tried the example with exceeding the cpu limit:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-2
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr-2
    image: vish/stress
    resources:
      limits:
        cpu: "100"
      requests:
        cpu: "100"
    args:
    - -cpus
    - "2"

which requests 100 CPUs, and the pod does not start, as expected:

k describe pods/cpu-demo-2 -n cpu-example
.....
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  27s (x5 over 6m24s)  default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
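The FailedScheduling event is just the scheduler's resource fit check rejecting a request larger than any node's allocatable CPU. A toy illustration (the allocatable value here is made up):

```python
def fits(node_allocatable_cpu, pod_request_cpu):
    """Toy version of the scheduler's CPU fit predicate: a pod fits a node
    only if its request does not exceed the node's allocatable CPU."""
    return pod_request_cpu <= node_allocatable_cpu

print(fits(4, 100))  # False -> 0/1 nodes are available: 1 Insufficient cpu
print(fits(4, 1))    # True  -> the first cpu-demo pod schedules fine
```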

@galal-hussein
Contributor

@joaovitor I was able to reproduce this case; the OOM killer doesn't seem to be invoked. However, I noticed that the container running the stress command is not exceeding the memory limit configured for the pod:

k top pod -n mem-example
NAME            CPU(cores)   MEMORY(bytes)   
memory-demo-2   18m          99Mi 
bash-4.3# ps -o pid,user,rss,vsz,comm ax
PID   USER     RSS  VSZ  COMMAND
    1 root        0  740 stress
    6 root      83m 250m stress
    7 root      196 6212 bash
   15 root        4 1520 ps
root@pop-os:/sys/fs/cgroup/memory/kubepods/burstable# cat pod51a6febe-4d87-4c7c-beff-5ead07df2da5/cad6693e3974a359b4ea0ef193a4998bce376e45a1fdcacc671e9643bcab1096/memory.limit_in_bytes 
104857600
root@pop-os:/sys/fs/cgroup/memory/kubepods/burstable# cat pod51a6febe-4d87-4c7c-beff-5ead07df2da5/cad6693e3974a359b4ea0ef193a4998bce376e45a1fdcacc671e9643bcab1096/memory.usage_in_bytes 
103915520
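Those two cgroup values are consistent with a 100Mi limit that the container sits just under. Checking the arithmetic:

```python
LIMIT = 104857600  # memory.limit_in_bytes read from the cgroup above
USAGE = 103915520  # memory.usage_in_bytes read from the cgroup above

print(LIMIT == 100 * 1024 * 1024)  # True: the limit is exactly 100Mi
print(USAGE < LIMIT)               # True: usage stays below the limit
print(LIMIT - USAGE)               # 942080 bytes (~0.9Mi) of headroom
```

So the cgroup limit is being applied; the pages over the limit are simply going to swap instead of triggering the OOM killer, as explained below.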

I was able to see the OOM killer being invoked in an RKE cluster with the same YAML file.

cc @erikwilson

@galal-hussein
Contributor

@joaovitor The issue is happening because swap is enabled on the system. If swap is enabled, the OOM killer will not be triggered until there is no memory left in swap either.
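One way to confirm whether swap is active is to read the SwapTotal line from /proc/meminfo (a sketch; a non-zero value means swap is on):

```python
def swap_total_kb(meminfo_text):
    """Parse the SwapTotal line of /proc/meminfo; returns kB, or 0 if absent."""
    for line in meminfo_text.splitlines():
        if line.startswith("SwapTotal:"):
            return int(line.split()[1])
    return 0

# Sample /proc/meminfo content with swap enabled (values are illustrative):
sample = "MemTotal:       16384000 kB\nSwapTotal:       2097148 kB\n"
print(swap_total_kb(sample))  # 2097148 -> swap is enabled on this sample
```

In practice `swapon --show` or `free -h` gives the same answer without any parsing.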

@cjellick
Contributor

Closing, as this is expected behavior when swap is enabled.

@YaoC

YaoC commented Nov 13, 2019

CPU limits don't work because the hasCFS flag returned by checkCgroups is false.

I found that on kernel 3.10.0-x, the cpu subsystem entry in /proc/{pid}/cgroup is cpuacct,cpu, while under /sys/fs/cgroup it is cpu,cpuacct (the order differs), which makes k3s look for cpu.cfs_period_us at the wrong path.

I think this is a kernel 3.10 bug. To work around it, you can create a symlink from cpuacct,cpu to cpu,cpuacct like below:

sudo mount -o remount,rw '/sys/fs/cgroup'

sudo ln -s /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpuacct,cpu

sudo systemctl restart  k3s

@pkoltermann

pkoltermann commented Mar 27, 2021

The fix is not working for me with k3d running k3s:

k3d cluster create 1-20 --image rancher/k3s:v1.20.5-rc1-k3s1

logs:

docker ps                                                   
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                             NAMES
cad43f091333        rancher/k3s:v1.20.5-rc1-k3s1   "/bin/k3s server --t…"   28 minutes ago      Up 28 minutes                                         k3d-1-20-server-0

docker logs cad43f091333
...
time="2021-03-27T20:56:01.397056042Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
...

docker exec cad43f091333 k3s --version
k3s version v1.20.5-rc1+k3s1 (355fff30)
go version go1.15.10

OS: ubuntu 20.04

@brandond
Member

@pkoltermann can you confirm that it does not work when run outside of docker? I suspect docker may not be presenting all the correct cgroups to enable nested resource limits.

@pkoltermann

@brandond You are right; if I run it on the host machine it works. The question is how to make it work in Docker?

@brandond
Member

brandond commented Mar 28, 2021

I would probably take this question to the k3d issue tracker.

@max-mulawa

@galal-hussein I can confirm that after disabling swap (it gets re-enabled after reboot in my case) and restarting the k3s service, the memory limits started working as expected. On Ubuntu I did the following.

Turn off all swap:
swapoff -a

Restart k3s:
systemctl restart k3s.service

After that, the memory-hungry pod was restarted each time it reached its memory limit.
