
Memory usage doesn't match memory limit in container spec #1822

Closed
liwei opened this issue Jun 22, 2019 · 7 comments
Labels
bug Incorrect behaviour needs-review Needs to be assessed by the team.

Comments

@liwei
Member

liwei commented Jun 22, 2019

Description of problem

Running a container with a memory limit of 4g specified, but a much larger memory size is reported when checking memory info inside the container.

Expected result

$ docker run -ti --rm --runtime=kata -m 4g alpine
/ # free -m
             total       used       free     shared    buffers     cached
Mem:          4g        108       4g          8          2          9
-/+ buffers/cache:         97       4g

Actual result

$ docker run -ti --rm --runtime=kata -m 4g alpine
/ # free -m
             total       used       free     shared    buffers     cached
Mem:          6090        108       5982          8          2          9
-/+ buffers/cache:         97       5993
@liwei liwei added bug Incorrect behaviour needs-review Needs to be assessed by the team. labels Jun 22, 2019
@herenickname

# docker run -d -p 25565:25565 -e EULA=TRUE -m 8G --cpus 4 --ulimit nofile=122880:122880 --name mc itzg/minecraft-server

# docker exec -it mc /bin/bash

bash-4.4# free -g
             total       used       free     shared    buffers     cached
Mem:             9          1          8          0          0          0
-/+ buffers/cache:          1          8
Swap:            0          0          0
bash-4.4#

expected 8G !== actual 9G

@grahamwhaley
Contributor

Hi @ekifox - thanks for posting.
It is 'not quite trivial' for Kata here. Fundamentally, if Kata just gives the 8G you requested to the VM, then some of that gets consumed by the VM rootfs and agent (outside of your actual container workload), and you'd see that you had 8G, but some had already been used.
If we give you 8G 'free', plus some extra to cater for the VM overhead, then as you see, you get to see 9G, and not the 8G you expected. We can't win ;-)
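As a rough worked example of that second case (the numbers line up with the later comments in this thread):

~1G booted with the VM by default + 8G hot-plugged for -m 8G ≈ 9G total reported by free -g inside the guest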

I'm sure we had a document explaining this and some of the heuristics used to do the calculation - but I cannot find it... @egernst @jcvenegas - did I dream that, or did it maybe get partly cleaned up during the recent docs/arch re-org and need re-adding somewhere? We should explain this somewhere, as this is not the first time the question has been asked.

And then, all the other references about adding extra options are, I believe, more to do with how Kata reports its memory usage back to k8s, so we can get the auto-balancing and pod/node allocation correct... note, though, that this query itself was from docker.

@devimc

devimc commented Dec 18, 2019

@ekifox the VM was started with 1G [1] and 8G were hot-plugged (-m 8G); the container in the VM has a memory constraint of 8G, and the rest is used for overhead (guest kernel, kata-agent, systemd). If you want the guest to honour the amount of memory you expect, set it using [1] and don't use the -m option.

[1] - https://github.com/kata-containers/runtime/blob/master/cli/config/configuration-qemu.toml.in#L78
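For example, a sketch of what that looks like (the installed config path varies by packaging; /etc/kata-containers/configuration.toml and /usr/share/defaults/kata-containers/configuration.toml are common locations, and the 4096 value is only illustrative):

# In the [hypervisor.qemu] section of the runtime configuration, set the
# boot-time VM memory (in MiB) instead of relying on -m hotplug:
#   default_memory = 4096

$ docker run -ti --rm --runtime=kata alpine free -m
# free inside the guest should now report roughly 4G total, minus guest overhead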

@grahamwhaley
Contributor

Thanks @devimc - I meant to ask if the container workload is then actually constrained in a memory cgroup to the size of the -m option :-). It's a shame free does not take that into account. Does anybody know if there is an easy command-line way to find out what your real cgroup-constrained memory availability is?
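For what it's worth, one way (assuming cgroup v1, which is what most setups of this era use) is to read the memory controller's limit file from inside the container; e.g. for the mc container above, started with -m 8G, something like:

$ docker exec -it mc cat /sys/fs/cgroup/memory/memory.limit_in_bytes
8589934592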

@herenickname

herenickname commented Dec 18, 2019

Okay, thank you all for the answers! :)

But then I tried to lower the RAM from 8G (9G) to 2G with docker update mc -m 2G.
Nothing happened; free -g in the container still shows 9G, as before.

Is it impossible to lower RAM at runtime?

@grahamwhaley
Contributor

Ah, I think that is a wrinkle to do with the complexities of VM memory hotplug - it is easy to plug memory in, but not so easy to unplug it (especially if it is now in use, etc.). So, I think we can expand, but not shrink. Also, I'm not sure if the docker update changes (shrinks) the memory cgroup around the container workload (but I'd like to think it does). In which case, free is still going to lie to you by telling you the 'system'-level memory details, and not the memory-cgroup-level details.

Speaking of which, I think you'll find the same/similar situation with free for a non-kata container - you will get the data about the whole host system, and not your individual container memory cgroup.
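For example, with the default (non-kata) runtime, something like this (illustrative output from a host with ~16G of RAM):

$ docker run -ti --rm -m 2g alpine sh
/ # free -m
             total       used       free     shared    buffers     cached
Mem:         15936       4512      11424          2          8        512
-/+ buffers/cache:       3992      11944
/ # cat /sys/fs/cgroup/memory/memory.limit_in_bytes
2147483648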

Yes, we do need to document this better ;-)

@jcvenegas
Member

But then I tried to lower the RAM from 8G (9G) to 2G with docker update mc -m 2G.
Nothing happened; free -g in the container still shows 9G, as before.

Is it impossible to lower RAM at runtime?

It may be possible to reduce the memory at runtime via virtio-balloon or virtio-mem; there is an old PR that tries to do that, but it does not have high priority, so for now that is a limitation.

The workload will still be limited via cgroups on the guest side, but if your container initially used those 8G, those 8G stay assigned to the VM, as it does not return pages back. In that case it may be better to stop that container and create a new one with the new memory limits.
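For example (reusing the command from earlier in this thread, with the lower limit):

# docker stop mc && docker rm mc
# docker run -d -p 25565:25565 -e EULA=TRUE -m 2G --cpus 4 --ulimit nofile=122880:122880 --name mc itzg/minecraft-server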
