
Separate container runtimes from the image #9989

Closed

afbjorklund opened this issue Dec 17, 2020 · 15 comments
Labels: area/build-release, co/runtime/containerd, co/runtime/crio, co/runtime/docker, kind/feature, lifecycle/rotten, priority/backlog

Comments

@afbjorklund (Collaborator) commented Dec 17, 2020

Currently we include all three CRI runtimes in the base image:

  • Docker (docker)
  • CRI-O (crio + podman)
  • Containerd (containerd + buildkit)

This makes the image somewhat big, compared to having just one.

ISO: [image: graph-size]

KIC: [image: installed-size]

We only start one of them, but the others still take up space on the disk.

(Actually we always start docker, as long as we are using the "docker machine" API...
but we stop it again quickly afterwards, if using the containerd or cri-o runtimes.)


One alternative to building three "base" images would be to use the "provisioner".
This is a docker-machine feature that is used to install docker on the machine.

https://github.com/docker/machine/tree/master/libmachine/provision

So the base image wouldn't have any of them (except maybe runc) from the start.
Instead they would be downloaded and installed from separate tarballs/packages.

This is how we currently handle different kubernetes versions, for instance.

  • linux/v1.20.0/kubeadm
  • linux/v1.20.0/kubectl
  • linux/v1.20.0/kubelet
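For comparison, that per-version download amounts to something like this minimal sketch (the upstream release URL and the local cache layout follow the paths above, but the exact ones minikube uses may differ):

  # Fetch version-specific Kubernetes binaries into a local cache,
  # instead of baking them into the base image.
  K8S_VERSION=v1.20.0
  ARCH=amd64
  CACHE="$HOME/.minikube/cache/linux/$K8S_VERSION"
  mkdir -p "$CACHE"
  for bin in kubeadm kubectl kubelet; do
    curl -sSL -o "$CACHE/$bin" \
      "https://storage.googleapis.com/kubernetes-release/release/$K8S_VERSION/bin/linux/$ARCH/$bin"
    chmod +x "$CACHE/$bin"
  done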

It's also how we handle the preload, instead of making three "node" images:

  • preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4
  • preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
  • preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4

As compared with kind:

  • docker.io/kindest/node:v1.20.0 (always uses containerd-overlay2)
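Per-runtime tarballs could follow the same naming scheme as the preloads; these names (and version pins) are hypothetical:

  • docker-bin-v1-20.10.1-amd64.tar.lz4
  • cri-o-bin-v1-1.19.0-amd64.tar.lz4
  • containerd-bin-v1-1.4.3-amd64.tar.lz4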
@afbjorklund afbjorklund added kind/feature Categorizes issue or PR as related to a new feature. co/runtime/docker Issues specific to a docker runtime co/runtime/crio CRIO related issues co/runtime/containerd labels Dec 17, 2020
@afbjorklund
Copy link
Collaborator Author

We would need an installation script for CRI-O, similar to get.docker.com for Docker...

if ! type docker; then curl -sSL https://get.docker.com | sh -; fi

See cri-o/cri-o#4343

We also need one for containerd (and buildkit), but maybe we can do it ourselves?
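A hand-rolled one for containerd could stay fairly small. A sketch, assuming the upstream release tarballs and a /usr/local prefix (the version pin is illustrative):

  # Illustrative get.docker.com-style installer for containerd.
  set -eu
  VERSION=1.4.3
  ARCH=amd64
  if ! type containerd >/dev/null 2>&1; then
    curl -sSL "https://github.com/containerd/containerd/releases/download/v${VERSION}/containerd-${VERSION}-linux-${ARCH}.tar.gz" \
      | tar -xz -C /usr/local   # unpacks bin/containerd, bin/ctr, ...
  fi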

@medyagh medyagh added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Dec 23, 2020
@afbjorklund (Collaborator Author)

This includes switching from github.com/docker/machine/libmachine/provision to k8s.io/minikube/pkg/provision

This always installs docker by calling the configured get.docker.com script version (InstallURL):

https://github.com/docker/machine/blob/7d42fed1b770/libmachine/provision/ubuntu_systemd.go#L99

This currently doesn't provision anything, but assumes that everything is in the "base" image:

https://github.com/kubernetes/minikube/blob/v1.16.0/pkg/provision/ubuntu.go#L164

The forked provisioner (in minikube) will need to make sure that the runtime gets installed (from a package).
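For the cri-o runtime, for instance, that install step might boil down to commands like these (Debian/Ubuntu flavored; the package names match the kicbase sections quoted further down):

  # Hypothetical provisioning commands, run instead of relying on the base image:
  sudo apt-get update
  sudo apt-get install -y cri-o cri-o-runc
  sudo systemctl enable --now crio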

@afbjorklund (Collaborator Author) commented Dec 27, 2020

Note that Buildroot doesn't have any package manager (like dpkg), by design:
https://buildroot.org/downloads/manual/manual.html#faq-no-binary-packages

The easiest workaround is probably doing something like the preloaded images.
That is, make a separate Buildroot build that creates binary archives (tarballs)?

docker-bin

crio-bin
conmon
podman

containerd-bin
buildkit-bin
# common
runc-master
crictl-bin
cni
cni-plugins

Then these tarballs need to be hosted somewhere, again like the preloaded images...
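Producing and consuming such a tarball might look roughly like this; the Buildroot output path and the hosting bucket are illustrative assumptions:

  # Pack the docker binaries from a separate Buildroot build...
  tar -C output/target -czf docker-bin.tar.gz usr/bin/docker usr/bin/dockerd
  # ...upload them next to the preloads (bucket name hypothetical)...
  gsutil cp docker-bin.tar.gz gs://minikube-runtimes/
  # ...and unpack on the machine at provision time:
  curl -sSL https://storage.googleapis.com/minikube-runtimes/docker-bin.tar.gz | sudo tar -xz -C /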

Most likely the same goes for the packages, should they "disappear" upstream: #9552

@afbjorklund (Collaborator Author) commented Dec 27, 2020

Here are the matching sections that would be removed from the kicbase:

clean-install docker-ce docker-ce-cli containerd.io
echo "Installing buildkit ..."

clean-install libglib2.0-0
clean-install containers-common catatonit conmon containernetworking-plugins cri-tools podman-plugins
clean-install cri-o cri-o-runc
clean-install podman

Note that docker and containerd share the containerd and runc components here.

The following additional packages will be installed:
  apparmor containers-golang containers-image crun docker-ce-rootless-extras file libglib2.0-data libgpgme11 libltdl7 libmagic-mgc libmagic1 libmpdec2 libpython3-stdlib libpython3.8-minimal
  libpython3.8-stdlib libyajl2 mime-support python3 python3-minimal python3.8 python3.8-minimal shared-mime-info slirp4netns uidmap varlink xdg-user-dirs xz-utils
Suggested packages:
  apparmor-profiles-extra apparmor-utils aufs-tools cgroupfs-mount | cgroup-lite python3-doc python3-tk python3-venv python3.8-venv python3.8-doc binutils binfmt-support
The following NEW packages will be installed:
  apparmor buildkit catatonit conmon containerd.io containernetworking-plugins containers-common containers-golang containers-image cri-o cri-o-runc cri-tools crun docker-ce docker-ce-cli
  docker-ce-rootless-extras file libglib2.0-0 libglib2.0-data libgpgme11 libltdl7 libmagic-mgc libmagic1 libmpdec2 libpython3-stdlib libpython3.8-minimal libpython3.8-stdlib libyajl2 mime-support
  podman podman-plugins python3 python3-minimal python3.8 python3.8-minimal shared-mime-info slirp4netns uidmap varlink xdg-user-dirs xz-utils
0 upgraded, 41 newly installed, 0 to remove and 2 not upgraded.
Need to get 179 MB of archives.
After this operation, 806 MB of additional disk space will be used.

@sharifelgamal (Collaborator)

@azhao155 are you interested in looking at this?

@azhao155 (Contributor)

/assign @azhao155

@medyagh (Member) commented Dec 13, 2021

I think we might not want to do this, as it would add a lot of CI complexity in building and pushing.

@afbjorklund (Collaborator Author)

The complexity is already there, since upstreams have issues providing consistent packaging.

Installing Docker is easy, but adding cri-o or cri-dockerd or containerd is a pain to maintain...

@afbjorklund afbjorklund modified the milestones: 1.26.0, 1.27.0-candidate Feb 22, 2022
@afbjorklund (Collaborator Author)

This feature will not make it into the ISO and KIC targets in time for the 1.26 release.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 23, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 22, 2022
@medyagh medyagh added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jun 27, 2022
@medyagh (Member) commented Jun 27, 2022

While this is still a good idea, I don't see enough bandwidth to handle the infrastructure part of this task.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to this: the triage robot's /close comment quoted above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
