This repository has been archived by the owner on May 12, 2021. It is now read-only.

Kata Containers and CRI (containerd plugin) with Kubernetes #373

Closed
newtonjose opened this issue Jun 5, 2018 · 17 comments

Comments

@newtonjose

Description of problem

I just followed this tutorial https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-kubelet-to-use-containerd
and everything went OK, but when I use this command: kubectl exec busybox -c busybox ls
I get: Error from server: error dialing backend: tls: oversized record received with length 20527

The same problem: kubectl run -i -t bb1 --image=busybox --restart=Never
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: error dialing backend: tls: oversized record received with length 20527

But all pods are running:

kubectl get pods
NAME READY STATUS RESTARTS AGE
bb1 1/1 Running 0 54s
busybox 1/1 Running 0 20m

System Infos

kata-runtime kata-env
[Meta]
Version = "1.0.12"

[Runtime]
Debug = false
[Runtime.Version]
Semver = "1.0.0"
Commit = ""
OCI = "1.0.1"
[Runtime.Config]
Path = "/usr/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
MachineType = "pc"
Version = "QEMU emulator version 2.11.0\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
Path = "/usr/bin/qemu-lite-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
Msize9p = 8192
Debug = false

[Image]
Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_agent_a099747.img"

[Kernel]
Path = "/usr/share/kata-containers/vmlinuz-4.14.22.1-128.container"
Parameters = ""

[Initrd]
Path = ""

[Proxy]
Type = "kataProxy"
Version = "kata-proxy version 1.0.0"
Path = "/usr/libexec/kata-containers/kata-proxy"
Debug = false

[Shim]
Type = "kataShim"
Version = "kata-shim version 1.0.0"
Path = "/usr/libexec/kata-containers/kata-shim"
Debug = false

[Agent]
Type = "kata"

[Host]
Kernel = "4.4.0-127-generic"
Architecture = "amd64"
VMContainerCapable = false
[Host.Distro]
Name = "Ubuntu"
Version = "16.04"
[Host.CPU]
Vendor = "GenuineIntel"
Model = "Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz"

kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:11:19 2017
OS/Arch: linux/amd64

Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:09:53 2017
OS/Arch: linux/amd64
Experimental: false

@jodh-intel
Contributor

Hi @n3wt0nSAN - thanks for reporting. Would you be able to paste the output of kata-collect-data.sh into a comment on this issue as requested in the template? Please review the output first but that would give us a lot more info (particularly if you also enabled full debug before running the failing command).
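(A minimal sketch of enabling full debug before re-running the failing command; the helper name is hypothetical, and the config path is the one reported by kata-env above. Run it with sudo against the real file.)

```shell
# Hypothetical helper: force every enable_debug setting in a Kata
# configuration.toml to true so the collect script gathers full logs.
# Matches both commented ("#enable_debug = ...") and uncommented lines.
enable_kata_debug() {
    sed -i -e 's/^#\{0,1\}enable_debug.*/enable_debug = true/' "$1"
}

# On a real host:
#   sudo enable_kata_debug /usr/share/defaults/kata-containers/configuration.toml
```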

@jcvenegas
Member

Hey @n3wt0nSAN, thanks for opening the issue. I will try to reproduce it today.

@newtonjose
Author

Thanks for your reply @jodh-intel, this is my kata-collect-data:

./kata-collect-data.sh.in

Meta details

Running kata-collect-data.sh.in version 1.0' (commit @COMMIT@) at 2018-06-05.14:41:32.892559206-0300.


Runtime is /usr/bin/kata-runtime.

kata-env

Output of "/usr/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.12"

[Runtime]
  Debug = false
  [Runtime.Version]
    Semver = "1.0.0"
    Commit = ""
    OCI = "1.0.1"
  [Runtime.Config]
    Path = "/usr/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.11.0\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-lite-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  Msize9p = 8192
  Debug = false

[Image]
  Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_agent_a099747.img"

[Kernel]
  Path = "/usr/share/kata-containers/vmlinuz-4.14.22.1-128.container"
  Parameters = "agent.log=debug agent.log=debug"

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.0.0"
  Path = "/usr/libexec/kata-containers/kata-proxy"
  Debug = true

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.0.0"
  Path = "/usr/libexec/kata-containers/kata-shim"
  Debug = true

[Agent]
  Type = "kata"

[Host]
  Kernel = "4.4.0-127-generic"
  Architecture = "amd64"
  VMContainerCapable = false
  [Host.Distro]
    Name = "Ubuntu"
    Version = "16.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Core(TM) i7 CPU         870  @ 2.93GHz"

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Config file @DESTCONFIG@ not found
Config file @DESTSYSCONFIG@ not found
Config file /etc/kata-containers/configuration.toml not found
Config file /etc/@PROJECT_TAG@/configuration.toml not found
Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = " agent.log=debug agent.log=debug"

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
#default_memory = 2048

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or 
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
internetworking_model="macvtap"

Config file /usr/share/defaults/@PROJECT_TAG@/configuration.toml not found


Image details

No image


Initrd details

No initrd


Logfiles

Runtime logs

No recent runtime problems found in system journal.

Proxy logs

No recent proxy problems found in system journal.

Shim logs

No recent shim problems found in system journal.


Container manager details

Have docker

Docker

Output of "docker version":

Client:
 Version:	17.12.0-ce
 API version:	1.35
 Go version:	go1.9.2
 Git commit:	c97c6d6
 Built:	Wed Dec 27 20:11:19 2017
 OS/Arch:	linux/amd64

Server:
 Engine:
  Version:	17.12.0-ce
  API version:	1.35 (minimum version 1.12)
  Go version:	go1.9.2
  Git commit:	c97c6d6
  Built:	Wed Dec 27 20:09:53 2017
  OS/Arch:	linux/amd64
  Experimental:	false

Output of "docker info":

Containers: 7
 Running: 0
 Paused: 0
 Stopped: 7
Images: 14
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: kata-runtime runc
Default Runtime: kata-runtime
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: <<unknown>> (expected: b2567b37d7b75eb4cf325b77297b140ea686ce8f)
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-127-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.786GiB
Name: softway-node1
ID: THLD:RMSS:S724:FWYB:72XL:SPDR:PFFM:4RGX:ZALD:3JQY:PFLZ:IWWS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 21
 Goroutines: 33
 System Time: 2018-06-05T14:41:33.037410546-03:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Output of "systemctl show docker":

Type=notify
Restart=on-failure
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=infinity
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Tue 2018-06-05 09:51:07 -03
WatchdogTimestampMonotonic=27877698
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1135
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Tue 2018-06-05 09:50:55 -03
ExecMainStartTimestampMonotonic=16013241
ExecMainExitTimestampMonotonic=0
ExecMainPID=1135
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime ; ignore_errors=no ; start_time=[Tue 2018-06-05 09:50:55 -03] ; stop_time=[n/a] ; pid=1135 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=105377792
CPUUsageNSec=78431534333
TasksCurrent=50
Delegate=yes
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
Environment=HTTP_PROXY= HTTPS_PROXY= NO_PROXY=
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=18446744073709551615
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=18446744073709551615
LimitNPROCSoft=18446744073709551615
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=31734
LimitSIGPENDINGSoft=31734
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=docker.socket system.slice sysinit.target
Wants=network-online.target
WantedBy=multi-user.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=docker.socket systemd-journald.socket firewalld.service sysinit.target network-online.target system.slice basic.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/docker.service
DropInPaths=/etc/systemd/system/docker.service.d/kata-containers.conf /etc/systemd/system/docker.service.d/proxy.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Tue 2018-06-05 09:51:07 -03
StateChangeTimestampMonotonic=27877699
InactiveExitTimestamp=Tue 2018-06-05 09:50:55 -03
InactiveExitTimestampMonotonic=16013267
ActiveEnterTimestamp=Tue 2018-06-05 09:51:07 -03
ActiveEnterTimestampMonotonic=27877699
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Tue 2018-06-05 09:50:55 -03
ConditionTimestampMonotonic=16012785
AssertTimestamp=Tue 2018-06-05 09:50:55 -03
AssertTimestampMonotonic=16012785
Transient=no
StartLimitInterval=60000000
StartLimitBurst=3
StartLimitAction=none

Have kubectl

Kubernetes

Output of "kubectl version":

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Output of "kubectl config view":

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://*.*.*.*:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Output of "systemctl show kubelet":

Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Tue 2018-06-05 14:18:53 -03
WatchdogTimestampMonotonic=16093608412
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=6409
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Tue 2018-06-05 14:18:53 -03
ExecMainStartTimestampMonotonic=16093608302
ExecMainExitTimestampMonotonic=0
ExecMainPID=6409
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS ; ignore_errors=no ; start_time=[Tue 2018-06-05 14:18:53 -03] ; stop_time=[n/a] ; pid=6409 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/kubelet.service
MemoryCurrent=40972288
CPUUsageNSec=74478953677
TasksCurrent=23
Delegate=no
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
Environment=KUBELET_EXTRA_ARGS=--container-runtime=remote\x20--runtime-request-timeout=15m\x20--container-runtime-endpoint=unix:///run/containerd/containerd.sock KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf\x20--kubeconfig=/etc/kubernetes/kubelet.conf KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests\x20--allow-privileged=true KUBELET_NETWORK_ARGS=--network-plugin=cni\x20--cni-conf-dir=/etc/cni/net.d\x20--cni-bin-dir=/opt/cni/bin KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10\x20--cluster-domain=cluster.local KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook\x20--client-ca-file=/etc/kubernetes/pki/ca.crt KUBELET_CADVISOR_ARGS=--cadvisor-port=0 KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true\x20--cert-dir=/var/lib/kubelet/pki HTTP_PROXY= HTTPS_PROXY= NO_PROXY=
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=0
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=4096
LimitNOFILESoft=1024
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=31734
LimitNPROCSoft=31734
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=31734
LimitSIGPENDINGSoft=31734
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=kubelet.service
Names=kubelet.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=basic.target system.slice sysinit.target systemd-journald.socket
Documentation=http://kubernetes.io/docs/
Description=kubelet: The Kubernetes Node Agent
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/0-cri-containerd.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/proxy.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Tue 2018-06-05 14:18:53 -03
StateChangeTimestampMonotonic=16093608414
InactiveExitTimestamp=Tue 2018-06-05 14:18:53 -03
InactiveExitTimestampMonotonic=16093608414
ActiveEnterTimestamp=Tue 2018-06-05 14:18:53 -03
ActiveEnterTimestampMonotonic=16093608414
ActiveExitTimestamp=Tue 2018-06-05 14:18:43 -03
ActiveExitTimestampMonotonic=16083508752
InactiveEnterTimestamp=Tue 2018-06-05 14:18:53 -03
InactiveEnterTimestampMonotonic=16093591676
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Tue 2018-06-05 14:18:53 -03
ConditionTimestampMonotonic=16093591768
AssertTimestamp=Tue 2018-06-05 14:18:53 -03
AssertTimestampMonotonic=16093591769
Transient=no
StartLimitInterval=0
StartLimitBurst=5
StartLimitAction=none

Have crio
Output of "crio --version":

crio version 1.11.0-dev
commit: "1c0c3b0778f805b82970cdbc93529306f8f75e61-dirty"

Output of "systemctl show crio":

Type=simple
Restart=on-failure
NotifyAccess=none
RestartUSec=5s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Tue 2018-06-05 09:50:51 -03
WatchdogTimestampMonotonic=11917209
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1000
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Tue 2018-06-05 09:50:51 -03
ExecMainStartTimestampMonotonic=11917172
ExecMainExitTimestampMonotonic=0
ExecMainPID=1000
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/local/bin/crio ; argv[]=/usr/local/bin/crio --log-level debug ; ignore_errors=no ; start_time=[Tue 2018-06-05 09:50:51 -03] ; stop_time=[n/a] ; pid=1000 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/crio.service
MemoryCurrent=33329152
CPUUsageNSec=1399742460
TasksCurrent=20
Delegate=no
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=0
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=4096
LimitNOFILESoft=1024
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=31734
LimitNPROCSoft=31734
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=31734
LimitSIGPENDINGSoft=31734
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=crio.service
Names=crio.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=basic.target system.slice sysinit.target systemd-journald.socket
Documentation=https://github.com/kubernetes-incubator/cri-o
Description=OCI-based implementation of Kubernetes Container Runtime Interface
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/etc/systemd/system/crio.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Tue 2018-06-05 09:50:51 -03
StateChangeTimestampMonotonic=11917210
InactiveExitTimestamp=Tue 2018-06-05 09:50:51 -03
InactiveExitTimestampMonotonic=11917210
ActiveEnterTimestamp=Tue 2018-06-05 09:50:51 -03
ActiveEnterTimestampMonotonic=11917210
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Tue 2018-06-05 09:50:51 -03
ConditionTimestampMonotonic=11916627
AssertTimestamp=Tue 2018-06-05 09:50:51 -03
AssertTimestampMonotonic=11916627
Transient=no
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none

Packages

Have dpkg
Output of "dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|@PROJECT_TYPE@-proxy|@PROJECT_TYPE@-runtime|@PROJECT_TYPE@-shim|@PROJECT_TYPE@-containers-image|linux-container|qemu-)"":

ii  kata-linux-container                4.14.22.1-128                              amd64        linux kernel optimised for container-like workloads.
ii  qemu-lite                           2.11.0+git.6ba2bfbee9-42                   amd64        linux kernel optimised for container-like workloads.
ii  qemu-vanilla                        2.11+git.e3050471ff-40                     amd64        linux kernel optimised for container-like workloads.

No rpm

---

@newtonjose
Author

Hi, I found the problem: my hardware isn't capable.

kata-runtime kata-check
INFO[0000] CPU property found description="Intel Architecture CPU" name=GenuineIntel pid=1695 source=runtime type=attribute
INFO[0000] CPU property found description="Virtualization support" name=vmx pid=1695 source=runtime type=flag
INFO[0000] CPU property found description="64Bit CPU" name=lm pid=1695 source=runtime type=flag
INFO[0000] CPU property found description=SSE4.1 name=sse4_1 pid=1695 source=runtime type=flag
INFO[0000] kernel property found description="Kernel-based Virtual Machine" name=kvm pid=1695 source=runtime type=module
INFO[0000] kernel property found description="Host kernel accelerator for virtio" name=vhost pid=1695 source=runtime type=module
INFO[0000] kernel property found description="Host kernel accelerator for virtio network" name=vhost_net pid=1695 source=runtime type=module
INFO[0000] kernel property found description="Intel KVM" name=kvm_intel pid=1695 source=runtime type=module
WARN[0000] kernel module parameter has unexpected value description="Intel KVM" expected=Y name=kvm_intel parameter=nested pid=1695 source=runtime type=module value=N
ERRO[0000] kernel module parameter has unexpected value description="Intel KVM" expected=Y name=kvm_intel parameter=unrestricted_guest pid=1695 source=runtime type=module value=N
INFO[0000] Kernel property value correct description="Intel KVM" expected=Y name=kvm_intel parameter=unrestricted_guest pid=1695 source=runtime type=module value=N
ERRO[0000] ERROR: System is not capable of running Kata Containers name=kata-runtime pid=1695 source=runtime
ERROR: System is not capable of running Kata Containers

Can anyone give me an explanation? Thanks in advance.

@grahamwhaley
Contributor

So, the key bit would look like:

WARN[0000] kernel module parameter has unexpected value description="Intel KVM" expected=Y name=kvm_intel parameter=nested pid=1695 source=runtime type=module value=N
ERRO[0000] kernel module parameter has unexpected value description="Intel KVM" expected=Y name=kvm_intel parameter=unrestricted_guest pid=1695 source=runtime type=module value=N
INFO[0000] Kernel property value correct description="Intel KVM" expected=Y name=kvm_intel parameter=unrestricted_guest pid=1695 source=runtime type=module value=N

And above in the kata-env I see:

[Host]
Kernel = "4.4.0-127-generic"
Architecture = "amd64"
VMContainerCapable = false

It looks like you don't have VM support enabled. You're on Ubuntu bare metal, yes?
I suspect you need an extra package - but, off the top of my head I don't know which - any thoughts @jcvenegas @chavafg before @jodh-intel gets to this in the (GMT) morning.

@jodh-intel
Contributor

I'd check your bios settings to ensure you've got the Intel Virtualisation extensions enabled (VT-x).

You could also try the following:

$ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel nested=1
$ [ $(cat /sys/module/kvm_intel/parameters/nested) = "Y" ] && echo ok
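If that works, the setting can be persisted across reboots with a modprobe options file (a sketch; the path assumes a standard `/etc/modprobe.d` layout and the filename is illustrative):

```
# /etc/modprobe.d/kvm_intel-nested.conf  (illustrative filename)
options kvm_intel nested=1
```

After a reboot or module reload, `/sys/module/kvm_intel/parameters/nested` should then report `Y`.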

@bergwolf
Member

bergwolf commented Jun 6, 2018

@jodh-intel Any reason why nested VT is required to run kata containers?

@jodh-intel
Contributor

It isn't essential, but enables running a hypervisor inside a Kata Container. Since we're aiming to make a Kata Container "transparent", having nesting enabled will avoid potential surprises if a user were to try that.

@bergwolf
Member

bergwolf commented Jun 6, 2018

@jodh-intel Thanks, I see.

@newtonjose
Author

newtonjose commented Jun 8, 2018

Hello again @jodh-intel, I followed your recommendations. In my BIOS I enabled VT-x and ran the commands:

sudo modprobe -r kvm_intel && sudo modprobe kvm_intel nested=1
[ $(cat /sys/module/kvm_intel/parameters/nested) = "Y" ] && echo ok
ok

but I get the same error:

kata-runtime kata-check
INFO[0000] CPU property found description="Intel Architecture CPU" name=GenuineIntel pid=1388 source=runtime type=attribute
INFO[0000] CPU property found description="Virtualization support" name=vmx pid=1388 source=runtime type=flag
INFO[0000] CPU property found description="64Bit CPU" name=lm pid=1388 source=runtime type=flag
INFO[0000] CPU property found description=SSE4.1 name=sse4_1 pid=1388 source=runtime type=flag
INFO[0000] kernel property found description="Kernel-based Virtual Machine" name=kvm pid=1388 source=runtime type=module
INFO[0000] kernel property found description="Host kernel accelerator for virtio" name=vhost pid=1388 source=runtime type=module
INFO[0000] kernel property found description="Host kernel accelerator for virtio network" name=vhost_net pid=1388 source=runtime type=module
INFO[0000] kernel property found description="Intel KVM" name=kvm_intel pid=1388 source=runtime type=module
ERRO[0000] kernel module parameter has unexpected value description="Intel KVM" expected=Y name=kvm_intel parameter=unrestricted_guest pid=1388 source=runtime type=module value=N
INFO[0000] Kernel property value correct description="Intel KVM" expected=Y name=kvm_intel parameter=unrestricted_guest pid=1388 source=runtime type=module value=N
INFO[0000] Kernel property value correct description="Intel KVM" expected=Y name=kvm_intel parameter=nested pid=1388 source=runtime type=module value=Y
ERRO[0000] ERROR: System is not capable of running Kata Containers name=kata-runtime pid=1388 source=runtime
ERROR: System is not capable of running Kata Containers

Is that a problem with my hardware?

@newtonjose newtonjose reopened this Jun 8, 2018
@jcvenegas
Member

jcvenegas commented Jun 8, 2018

It seems that kata-runtime did not detect that kvm is loaded.
Could you check manually whether the modules are loaded?

lsmod | grep kvm

@newtonjose
Author

lsmod | grep kvm

kvm_intel 172032 0
kvm 548864 1 kvm_intel
irqbypass 16384 1 kvm

@jodh-intel
Contributor

Looking at the logs again, I see that although your system is VT-x capable...

... it lacks unrestricted_guest support:

We check for this to ensure the CPU is "new enough" to run a Kata Container. See:

So, I'm afraid in summary that it appears that your system is too old to work with Kata.
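For reference, the two `kvm_intel` module parameters that matter here can be read directly from sysfs (a small diagnostic sketch; on hardware capable of running Kata, both parameters should report `Y`):

```shell
#!/bin/sh
# Print the kvm_intel parameters relevant to `kata-runtime kata-check`.
# Standard Linux sysfs paths; prints a note if kvm_intel is not loaded.
check_kvm_params() {
  for p in nested unrestricted_guest; do
    f=/sys/module/kvm_intel/parameters/$p
    if [ -r "$f" ]; then
      echo "$p=$(cat "$f")"
    else
      echo "$p=unavailable (kvm_intel not loaded?)"
    fi
  done
}

check_kvm_params
```

Note that `nested` can be toggled via a modprobe parameter, but `unrestricted_guest` reflects a CPU hardware capability, so `N` there cannot be fixed in software.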

@newtonjose
Author

That was my concern. I tried on a newer processor and it worked fine.

Thanks a lot!

@jodh-intel
Contributor

Hi @n3wt0nSAN - great - glad to hear you have access to another system and it's working for you 😄

@newtonjose
Author

Hey everyone, a few days ago my boss asked about the viability of nested virtualization with KVM. I believe it is possible, with limitations. But in my solution I want to use Kata Containers with QEMU/KVM.

Is there any way to use Kata Containers in this scenario, i.e. running inside a nested VM?

TIA.

@sboeuf

sboeuf commented Jun 21, 2018

@n3wt0nSAN you can definitely run Kata inside a VM. As you mentioned, the performance is not as good as bare metal, but we use it all the time for our CI, so it should work out of the box for you.
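One caveat worth checking first: the outer hypervisor has to expose VT-x to the guest (with libvirt this usually means host CPU passthrough, e.g. `<cpu mode='host-passthrough'/>` in the domain XML). A quick sanity check from inside the VM, as a sketch:

```shell
#!/bin/sh
# From inside the guest VM, confirm the CPU advertises the vmx flag
# before installing Kata. This is a quick check, not a full kata-check.
check_guest_vmx() {
  if grep -qw vmx /proc/cpuinfo; then
    echo "vmx exposed to guest"
  else
    echo "vmx NOT exposed: enable nested virtualization or host CPU passthrough"
  fi
}

check_guest_vmx
```

If the flag is missing, enable nested virtualization on the host (`kvm_intel nested=1`) and configure the guest CPU model to pass the host features through, then re-run `kata-runtime kata-check` inside the VM.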

zklei pushed a commit to zklei/runtime that referenced this issue Jun 13, 2019
Fixes kata-containers#373

Bump runtime-spec version to "5806c35637336642129d03657419829569abc5aa"

Change logs:

    5d9aa69 config-linux: Add Intel RDT/MBA Linux support
    6f5fcd4 Support for network namespace in windows
    06cf899 config-linux: add Intel RDT CLOS name sharing support
    f3be7d2 config: clarify source mount
    65fac2b Fix camelCasing on idType to align with other Windows spec conventions
    da8adc9 incorporating edits from JTerry's feedback
    c182ebc meeting: Bump July meeting from the 4th to the 11th
    e01b694 config: Add Windows Devices to Schema
    81d81f3 docs: Added kata-runtime to implementations
    b0700ad Add gVisor to the implementations list
    9e459a6 .travis.yml: Get schema dependencies in before_install
    692abcb .travis: Bump minimum Go version to 1.9
    fd39559 config: Clarify execution environment for hooks
    cd9892d config-linux: Drop console(4) reference
    e662e5c Linux devices: uid/gid relative to container
    74b670e config: Add VM-based container configuration section
    cd39042 uidMappings: change order of fields for clarity
    2e241f7 specs-go/config: Define RDMA cgroup
    9df387e schema/Makefile: fix test
    de688f2 config: Fix Linux mount options links
    ef008dd glossary: Bump JSON spec to RFC 8259
    4e5a137 schema: Completely drop our JSON Schema 'id' properties
    70ba4e6 meeting: Bump January meeting from the 3rd to the 10th
    8558116 schema: add allowed values for defaultAction
    5d9bbad config: Dedent root paragraphs, since they aren't a list entry
    e566cf6 version: put master back to -dev
    966a58d fix the link to hook

Signed-off-by: Wei Zhang <zhangwei555@huawei.com>