This repository has been archived by the owner on May 12, 2021. It is now read-only.

How to limit the memory of pod? #400

Closed
weekface opened this issue Jun 14, 2018 · 9 comments
weekface commented Jun 14, 2018

Description of problem

I followed this doc and created a pod with a 1Gi memory limit:

$ cat vm_pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "1Gi"
        cpu: "5"
      requests:
        memory: "1Gi"
        cpu: "5"

Expected result

1Gi memory allocated to this pod

Actual result

But 2048M was allocated to the VM instead:

/usr/bin/qemu-lite-system-x86_64 -name sandbox-4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27 -uuid d7f7731b-fbaa-473d-bc54-515832cde672 -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/vc/sbs/4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27/mon-d7f7731b-fbaa-473d-bc54-51,server,nowait -qmp unix:/run/vc/sbs/4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27/ctl-d7f7731b-fbaa-473d-bc54-51,server,nowait -m 2048M,slots=2,maxmem=129679M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2 -device virtio-serial-pci,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/sbs/4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/kata-containers/kata-containers-image_clearlinux_agent_a099747.img,size=536870912 -device virtio-scsi-pci,id=scsi0 -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/sbs/4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27/kata.sock,server,nowait -device virtio-9p-pci,fsdev=extra-9p-kataShared,mount_tag=kataShared -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27,security_model=none -netdev tap,id=network-0,vhost=on,vhostfds=3:4:5:6:7:8:9:10,fds=11:12:13:14:15:16:17:18 -device driver=virtio-net-pci,netdev=network-0,mac=9e:63:15:fc:60:de,mq=on,vectors=18 -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/kata-containers/vmlinuz-4.14.22.1-130.1.container -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 
iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 quiet systemd.show_status=false panic=1 initcall_debug nr_cpus=32 ip=::::::4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27::off:: init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket -smp 1,cores=1,threads=1,sockets=1,maxcpus=32
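The boot memory shows up as the value of QEMU's `-m` flag in the command line above. A minimal sketch for pulling it out (the `get_vm_mem` helper is illustrative, not part of kata-runtime; in practice the command line would come from `ps` against the running sandbox process):

```shell
# Hypothetical helper: extract the boot memory size from a QEMU command line.
# The command line would normally come from something like:
#   ps -o args= -C qemu-lite-system-x86_64
get_vm_mem() {
    # Find the "-m" flag; its value looks like "2048M,slots=2,maxmem=129679M",
    # so keep only the part before the first comma.
    printf '%s\n' "$1" |
        awk '{ for (i = 1; i < NF; i++)
                   if ($i == "-m") { split($(i+1), a, ","); print a[1] } }'
}

get_vm_mem '/usr/bin/qemu-lite-system-x86_64 -name sandbox -m 2048M,slots=2,maxmem=129679M -cpu host'
# prints: 2048M
```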


Meta details

Running kata-collect-data.sh version 1.0.0 (commit de76c138bcc03442ce80312f1fc6365461241edf) at 2018-06-14.17:28:18.008997410+0800.


Runtime is /usr/bin/kata-runtime.

kata-env

Output of "/usr/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.12"

[Runtime]
  Debug = false
  [Runtime.Version]
    Semver = "1.0.0"
    Commit = "de76c138bcc03442ce80312f1fc6365461241edf"
    OCI = "1.0.1"
  [Runtime.Config]
    Path = "/usr/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.11.0\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-lite-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  Msize9p = 8192
  Debug = false

[Image]
  Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_agent_a099747.img"

[Kernel]
  Path = "/usr/share/kata-containers/vmlinuz-4.14.22.1-130.1.container"
  Parameters = ""

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.0.0"
  Path = "/usr/libexec/kata-containers/kata-proxy"
  Debug = false

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.0.0"
  Path = "/usr/libexec/kata-containers/kata-shim"
  Debug = false

[Agent]
  Type = "kata"

[Host]
  Kernel = "3.10.0-862.3.2.el7.x86_64"
  Architecture = "amd64"
  VMContainerCapable = true
  [Host.Distro]
    Name = "CentOS Linux"
    Version = "7"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz"

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Config file /etc/kata-containers/configuration.toml not found
Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
internetworking_model="macvtap"

Image details

---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2018-06-12T03:55:46.913416409+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "22950"
  packages:
    default:
      - "iptables-bin"
      - "libudev0-shim"
      - "systemd"
    extra:

agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.0.0-a099747be287d30d7f1efcd6ba2bda88fc4a0f15"
  agent-is-init-daemon: "no"

Initrd details

No initrd


Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2018-06-14T17:23:03.439915181+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-ab0d5bba-bf13-425a-bee7-02 original-name=mon-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=164166 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:03.439955493+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-ab0d5bba-bf13-425a-bee7-02 original-name=ctl-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=164166 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.541938434+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-d7f7731b-fbaa-473d-bc54-51 original-name=mon-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164268 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.541982924+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-d7f7731b-fbaa-473d-bc54-51 original-name=ctl-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164268 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.550793659+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-d7f7731b-fbaa-473d-bc54-51 original-name=mon-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164279 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.55086789+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-d7f7731b-fbaa-473d-bc54-51 original-name=ctl-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164279 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.679012987+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-d7f7731b-fbaa-473d-bc54-51 original-name=mon-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164291 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.679070312+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-d7f7731b-fbaa-473d-bc54-51 original-name=ctl-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164291 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.680308766+08:00" level=error msg="Container 4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27 not ready or running, cannot send a signal" command=kill name=kata-runtime pid=164291 source=runtime
time="2018-06-14T17:23:29.688448711+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-d7f7731b-fbaa-473d-bc54-51 original-name=mon-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164300 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.688490726+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-d7f7731b-fbaa-473d-bc54-51 original-name=ctl-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164300 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.700010443+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-d7f7731b-fbaa-473d-bc54-51 original-name=mon-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164312 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.700070329+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-d7f7731b-fbaa-473d-bc54-51 original-name=ctl-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164312 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.701124753+08:00" level=error msg="Container 4da9da08bc50ca806898946cb19f6e5c8b35790b25d1e12e02738bc50e7a6f27 not ready or running, cannot send a signal" command=kill name=kata-runtime pid=164312 source=runtime
time="2018-06-14T17:23:29.709385112+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-d7f7731b-fbaa-473d-bc54-51 original-name=mon-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164325 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:29.709442858+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-d7f7731b-fbaa-473d-bc54-51 original-name=ctl-d7f7731b-fbaa-473d-bc54-515832cde672 pid=164325 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:43.450629073+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-ab0d5bba-bf13-425a-bee7-02 original-name=mon-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=164412 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:43.45066936+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-ab0d5bba-bf13-425a-bee7-02 original-name=ctl-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=164412 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:57.623790329+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164494 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:57.623860161+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164494 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:57.716751742+08:00" level=warning msg="unsupported address" address="fe80::fceb:fcff:fe77:8513/64" arch=amd64 name=kata-runtime pid=164494 source=virtcontainers subsystem=kata_agent unsupported-address-type=ipv6
time="2018-06-14T17:23:57.716847868+08:00" level=warning msg="unsupported route" arch=amd64 destination="fe80::/64" name=kata-runtime pid=164494 source=virtcontainers subsystem=kata_agent unsupported-route-type=ipv6
time="2018-06-14T17:23:58.764642034+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164546 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:58.764696226+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164546 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:58.773490768+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164558 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:58.773553892+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164558 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:58.784008887+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164568 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:58.784068442+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164568 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:58.861908736+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164576 source=virtcontainers subsystem=qemu
time="2018-06-14T17:23:58.861958333+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164576 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.764603622+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164596 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.764654602+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164596 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.788929655+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164612 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.788967016+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164612 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.87600466+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164654 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.876045542+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164654 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.884463865+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164662 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.884507234+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164662 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.896066639+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164672 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.896113014+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164672 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.973072456+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164680 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:06.973116706+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164680 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:52.090665995+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-ab0d5bba-bf13-425a-bee7-02 original-name=mon-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=164816 source=virtcontainers subsystem=qemu
time="2018-06-14T17:24:52.090698563+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-ab0d5bba-bf13-425a-bee7-02 original-name=ctl-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=164816 source=virtcontainers subsystem=qemu
time="2018-06-14T17:25:12.094205742+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-8f83b909-4a48-4438-90aa-e2 original-name=mon-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164816 source=virtcontainers subsystem=qemu
time="2018-06-14T17:25:12.094249903+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-8f83b909-4a48-4438-90aa-e2 original-name=ctl-8f83b909-4a48-4438-90aa-e26a3ab6aaba pid=164816 source=virtcontainers subsystem=qemu
time="2018-06-14T17:25:53.461130641+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-ab0d5bba-bf13-425a-bee7-02 original-name=mon-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=165001 source=virtcontainers subsystem=qemu
time="2018-06-14T17:25:53.461170641+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-ab0d5bba-bf13-425a-bee7-02 original-name=ctl-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=165001 source=virtcontainers subsystem=qemu
time="2018-06-14T17:28:05.438754346+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=mon-ab0d5bba-bf13-425a-bee7-02 original-name=mon-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=165754 source=virtcontainers subsystem=qemu
time="2018-06-14T17:28:05.438803001+08:00" level=warning msg="shortening QMP socket name" arch=amd64 name=kata-runtime new-name=ctl-ab0d5bba-bf13-425a-bee7-02 original-name=ctl-ab0d5bba-bf13-425a-bee7-0288531d3496 pid=165754 source=virtcontainers subsystem=qemu

Proxy logs

Recent proxy problems found in system journal:

time="2018-06-14T17:03:21.566818184+08:00" level=fatal msg="session shutdown" name=kata-proxy pid=156427 source=proxy

Shim logs

Recent shim problems found in system journal:

time="2018-06-14T17:03:21.567517368+08:00" level=warning msg="copy stdin failed" container=52237d6af4e43a11bd92ce63ef6422808b36f16de033f73a93eb3d9548f27474 error="rpc error: code = Unavailable desc = transport is closing" exec-id=a50136e5-ee5f-48da-8166-23cca8295a3a name=kata-shim pid=12 source=shim
time="2018-06-14T17:03:21.56763922+08:00" level=warning msg="close stdin failed" container=52237d6af4e43a11bd92ce63ef6422808b36f16de033f73a93eb3d9548f27474 error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure" exec-id=a50136e5-ee5f-48da-8166-23cca8295a3a name=kata-shim pid=12 source=shim
time="2018-06-14T17:03:21.56754911+08:00" level=error msg="failed waiting for process" container=52237d6af4e43a11bd92ce63ef6422808b36f16de033f73a93eb3d9548f27474 error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure" exec-id=a50136e5-ee5f-48da-8166-23cca8295a3a name=kata-shim pid=12 source=shim
time="2018-06-14T17:03:21.567609574+08:00" level=error msg="failed waiting for process" container=52237d6af4e43a11bd92ce63ef6422808b36f16de033f73a93eb3d9548f27474 error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure" exec-id=52237d6af4e43a11bd92ce63ef6422808b36f16de033f73a93eb3d9548f27474 name=kata-shim pid=1 source=shim
time="2018-06-14T17:03:21.5676665+08:00" level=error msg="failed waiting for process" container=2c730390203bc97396f7566ef54f7bf57d13850f84bb474eb39669db07dc78f1 error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure" exec-id=2c730390203bc97396f7566ef54f7bf57d13850f84bb474eb39669db07dc78f1 name=kata-shim pid=1 source=shim

Container manager details

Have docker

Docker

Output of "docker version":

Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "docker info":

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "systemctl show docker":

Type=notify
Restart=on-failure
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=1min
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestampMonotonic=0
StartLimitInterval=60000000
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Mon 2018-06-04 20:07:25 CST
ExecMainStartTimestampMonotonic=265338875150
ExecMainExitTimestamp=Thu 2018-06-14 15:06:30 CST
ExecMainExitTimestampMonotonic=1111283995741
ExecMainPID=13367
ExecMainCode=1
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd $DOCKER_OPTS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $DOCKER_DNS_OPTIONS $INSECURE_REGISTRY ; ignore_errors=no ; start_time=[Mon 2018-06-04 20:07:25 CST] ; stop_time=[Thu 2018-06-14 15:06:30 CST] ; pid=13367 ; code=exited ; status=0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=5201928192
TasksCurrent=32
Delegate=yes
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
Environment=GOTRACEBACK=crash DOCKER_DNS_OPTIONS=\x20\x20\x20\x20\x20\x20\x20\x20\x20--dns\x2010.192.0.3\x20\x20\x20\x20--dns\x20114.114.114.114\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20--dns-search\x20default.svc.cluster.local\x20\x20\x20\x20--dns-search\x20svc.cluster.local\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20--dns-opt\x20ndots:2\x20\x20\x20\x20--dns-opt\x20timeout:2\x20\x20\x20\x20--dns-opt\x20attempts:2\x20\x20 DOCKER_OPTS=-H\x20unix:///var/run/docker.sock\x20-H\x20tcp://172.16.10.6:2375\x20--storage-driver=overlay2\x20--storage-opt\x20overlay2.override_kernel_check=true\x20\x20--max-concurrent-downloads=10\x20--insecure-registry=10.192.0.0/12\x20--graph=/var/lib/docker\x20\x20--log-opt\x20max-size=20g\x20--log-opt\x20max-file=1\x20\x20--iptables=false
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=1048576
LimitAS=18446744073709551615
LimitNPROC=1048576
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=514539
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=basic.target
Wants=system.slice docker-storage-setup.service
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=multi-user.target kubelet.service shutdown.target
After=basic.target systemd-journald.socket network.target system.slice docker-storage-setup.service
Documentation=http://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=inactive
SubState=dead
FragmentPath=/etc/systemd/system/docker.service
DropInPaths=/etc/systemd/system/docker.service.d/docker-dns.conf /etc/systemd/system/docker.service.d/docker-options.conf
UnitFileState=enabled
UnitFilePreset=disabled
InactiveExitTimestamp=Mon 2018-06-04 20:07:25 CST
InactiveExitTimestampMonotonic=265338875181
ActiveEnterTimestamp=Mon 2018-06-04 20:07:26 CST
ActiveEnterTimestampMonotonic=265339999474
ActiveExitTimestamp=Thu 2018-06-14 15:06:18 CST
ActiveExitTimestampMonotonic=1111271849684
InactiveEnterTimestamp=Thu 2018-06-14 15:06:30 CST
InactiveEnterTimestampMonotonic=1111283996141
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
IgnoreOnSnapshot=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Mon 2018-06-04 20:07:25 CST
ConditionTimestampMonotonic=265338872418
AssertTimestamp=Mon 2018-06-04 20:07:25 CST
AssertTimestampMonotonic=265338872418
Transient=no

No kubectl


Packages

No dpkg
Have rpm
Output of "rpm -qa|egrep "(cc-oci-runtime|cc-runtime|runv|kata-proxy|kata-runtime|kata-shim|kata-containers-image|linux-container|qemu-)"":

qemu-vanilla-data-2.11+git.e3050471ff-41.1.x86_64
kata-shim-bin-1.0.0+git.74cbc1e-30.1.x86_64
qemu-vanilla-2.11+git.e3050471ff-41.1.x86_64
kata-runtime-1.0.0+git.086d197-41.1.x86_64
qemu-kvm-ev-2.10.0-21.el7_5.3.1.x86_64
kata-proxy-bin-1.0.0+git.a69326b-29.1.x86_64
qemu-img-ev-2.10.0-21.el7_5.3.1.x86_64
qemu-lite-2.11.0+git.6ba2bfbee9-43.1.x86_64
kata-containers-image-1.0.0-29.1.x86_64
qemu-vanilla-bin-2.11+git.e3050471ff-41.1.x86_64
kata-linux-container-4.14.22.1-130.1.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
qemu-lite-data-2.11.0+git.6ba2bfbee9-43.1.x86_64
kata-shim-1.0.0+git.74cbc1e-30.1.x86_64
qemu-kvm-common-ev-2.10.0-21.el7_5.3.1.x86_64
qemu-lite-bin-2.11.0+git.6ba2bfbee9-43.1.x86_64
kata-proxy-1.0.0+git.a69326b-29.1.x86_64

@jcvenegas jcvenegas self-assigned this Jun 14, 2018
@jcvenegas

hi @weekface, the configuration is not being updated because we don't add/remove memory after creating a pod.

The first container that is created in a pod is a pause container. That container does not have any memory limit information, so the pod is created with the default memory (default_memory = 2048) from /usr/share/defaults/kata-containers/configuration.toml.
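For reference, that default lives under the `[hypervisor.qemu]` section of the configuration file the comment mentions (section name as shipped in kata 1.x; check your installed file, as paths and keys may differ between versions):

```toml
# /usr/share/defaults/kata-containers/configuration.toml (excerpt)
[hypervisor.qemu]
# Default memory size in MiB for the sandbox VM.
# This is a VM-level default, not a per-container limit.
default_memory = 2048
```

Raising or lowering `default_memory` changes the `-m` value passed to QEMU (the `-m 2048M` seen in the command line above).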

Some work I have started will help with this:

The idea is to start a VM with a minimal amount of memory and then hot-plug memory as requested, limiting the containers with cgroups. If a container does not provide a memory limit, the runtime will hot-add default_memory.

Then when the nginx container is created memory will be hot-plugged and a memory cgroup will be set inside the VM for that container.

This is related to #158. I have work in progress on this in https://github.com/jcvenegas/runtime-1/tree/memory-hotplug and will try to send the PR by the end of the week.

There are still some things to define, but that is a summary of what I have in mind about limiting memory.
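The policy described above can be sketched as a small decision function. This is an illustrative sketch only, not the actual kata-runtime API: the function name, the MiB units, and the constant mirroring `default_memory` are all assumptions made for the example.

```go
package main

import "fmt"

// defaultMemoryMiB mirrors the default_memory setting from
// configuration.toml (hypothetical constant name for this sketch).
const defaultMemoryMiB = 2048

// memoryToHotplugMiB sketches the policy described in the comment:
// if a container declares a memory limit, hot-plug that amount;
// otherwise fall back to default_memory.
func memoryToHotplugMiB(limitMiB int64) int64 {
	if limitMiB <= 0 {
		// No limit provided (e.g. the pause container):
		// hot-add the configured default.
		return defaultMemoryMiB
	}
	return limitMiB
}

func main() {
	fmt.Println(memoryToHotplugMiB(1024)) // container with a 1Gi limit
	fmt.Println(memoryToHotplugMiB(0))    // pause container, no limit
}
```

Under this scheme the cgroup limit inside the VM, not the VM size alone, is what actually enforces the container's memory bound.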

@jamiehannaford

@sboeuf In #388, you gave the impression that it was possible to limit the memory of a Pod with resource requests. Looking at what @jcvenegas said, it doesn't seem possible to do this with CRI-O - is that correct? If so, is anybody working on adding the ability to specify resource requests?


@sboeuf

sboeuf commented Jun 18, 2018

@jamiehannaford you're correct. We can specify a certain number of CPUs, but we don't have support for memory yet.
The reason is that we don't support memory hotplug yet. There is some WIP from @bergwolf on that.
When memory hotplug is supported, and as mentioned by @jcvenegas, we will be able to start a pod with a minimal amount of RAM (128MiB) and hotplug the memory needed for every new container.
One limitation, though: we cannot remove the memory after it has been hotplugged.

@jamiehannaford

@sboeuf What is the default_memory value in the TOML conf? Is that just a static way of restricting how much RAM a VM (and all of its containers) can use in total?

@sboeuf

sboeuf commented Jun 18, 2018

Default is 2048MiB. You can definitely use this to restrict the amount of memory used by your pods. I hope we'll have a more flexible way soon.

@miaoyq

miaoyq commented Jul 31, 2018

Now that #470 and #303 have been merged, are we going to implement this feature (limiting the memory of a pod) in the next release? @sboeuf @jcvenegas
Looking forward to this feature. :-)

@miaoyq

miaoyq commented Jul 31, 2018

/cc @guangxuli

@sboeuf

sboeuf commented Aug 1, 2018

Sounds good @miaoyq !

linzichang added a commit to linzichang/runtime that referenced this issue Sep 26, 2018
When create sandbox, we setup a sandbox of 128M base memory, and
then hotplug memory that is needed for every new container. And
we change the unit of c.config.Resources.Mem from MiB to Byte in
order to prevent the 4095B < memory < 1MiB from being lost.

Fixes kata-containers#400

Signed-off-by: Clare Chen <clare.chenhui@huawei.com>
Signed-off-by: Zichang Lin <linzichang@huawei.com>
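The commit's point about tracking memory in bytes rather than MiB can be shown with a short arithmetic sketch (the helper name is illustrative, not from the kata codebase): integer division floors, so any limit strictly below 1MiB stored in MiB units silently becomes 0 and is lost, while rounding up preserves it.

```go
package main

import "fmt"

const mib = int64(1024 * 1024)

// roundUpToMiB converts a byte count to MiB, rounding up so that
// sub-MiB amounts (e.g. a 4096B limit) are not floored to zero.
func roundUpToMiB(bytes int64) int64 {
	return (bytes + mib - 1) / mib
}

func main() {
	fmt.Println(4096 / mib)         // naive floor division: 0, the limit vanishes
	fmt.Println(roundUpToMiB(4096)) // 1
}
```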
linzichang added a commit to linzichang/runtime that referenced this issue Sep 26, 2018
linzichang added a commit to linzichang/runtime that referenced this issue Oct 4, 2018
When create sandbox, we setup a sandbox of 2048M base memory, and
then hotplug memory that is needed for every new container. And
we change the unit of c.config.Resources.Mem from MiB to Byte in
order to prevent the 4095B < memory < 1MiB from being lost.

Fixes kata-containers#400

Signed-off-by: Clare Chen <clare.chenhui@huawei.com>
Signed-off-by: Zichang Lin <linzichang@huawei.com>
linzichang added a commit to linzichang/runtime that referenced this issue Oct 5, 2018
linzichang added a commit to linzichang/runtime that referenced this issue Oct 5, 2018
linzichang added a commit to linzichang/runtime that referenced this issue Oct 15, 2018
When create sandbox, we setup a sandbox of 2048M base memory, and
then hotplug memory that is needed for every new container. And
we change the unit of c.config.Resources.Mem from MiB to Byte in
order to prevent the 4095B < memory < 1MiB from being lost.

Depends-on:github.com/kata-containers/tests#813

Fixes kata-containers#400

Signed-off-by: Clare Chen <clare.chenhui@huawei.com>
Signed-off-by: Zichang Lin <linzichang@huawei.com>
zklei pushed a commit to zklei/runtime that referenced this issue Nov 22, 2018
zklei pushed a commit to zklei/runtime that referenced this issue Jun 13, 2019
proto: Split reusable structures into their own package
lifupan pushed a commit to lifupan/kata-runtime that referenced this issue Aug 5, 2020