Separate container runtimes from the image #9989
Comments
We would need an installation script for CRI-O, similar to get.docker.com for Docker...
See cri-o/cri-o#4343. We also need one for containerd (and buildkit), but maybe we can do it ourselves? |
For containerd, it will probably use the binaries from the github releases: But only for |
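As a rough sketch (not the actual script), downloading a pinned containerd release from GitHub and unpacking it could look roughly like this in Go; the version, URL layout and target directory are assumptions for illustration:

```go
// Illustrative only; version and paths are assumptions, not the real install script.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
)

func main() {
	version := "1.4.3" // hypothetical pinned version
	url := fmt.Sprintf(
		"https://github.com/containerd/containerd/releases/download/v%s/containerd-%s-linux-amd64.tar.gz",
		version, version)

	// Download the release tarball to a temporary file.
	tmp, err := os.CreateTemp("", "containerd-*.tar.gz")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		panic(fmt.Sprintf("unexpected status: %s", resp.Status))
	}
	if _, err := io.Copy(tmp, resp.Body); err != nil {
		panic(err)
	}
	tmp.Close()

	// The release tarball contains bin/containerd, bin/ctr, etc.,
	// so extracting under /usr/local puts them on the default PATH.
	cmd := exec.Command("tar", "-C", "/usr/local", "-xzf", tmp.Name())
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

A real script would likely also need to install runc and a containerd systemd unit, which the release tarball alone does not set up.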
This includes switching from github.com/docker/machine/libmachine/provision to k8s.io/minikube/pkg/provision.
The former always installs docker, by calling the configured get.docker.com script version (InstallURL).
The latter currently doesn't provision anything, but assumes that everything is already in the "base" image: https://github.com/kubernetes/minikube/blob/v1.16.0/pkg/provision/ubuntu.go#L164
The forked provisioner (in minikube) will need to make sure that the runtime is getting installed (from a package). |
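For illustration, a minimal sketch of what the forked provisioner's runtime installation could look like; the interface, runtime names and install commands below are assumptions, not the actual minikube code:

```go
package provision

import "fmt"

// SSHCommander is the small surface this sketch needs from the provisioner;
// the real libmachine/minikube interfaces are larger.
type SSHCommander interface {
	SSHCommand(cmd string) (string, error)
}

// installRuntime installs only the selected container runtime on the guest,
// instead of assuming all of them are baked into the base image.
func installRuntime(ssh SSHCommander, runtime, version string) error {
	var cmd string
	switch runtime {
	case "docker":
		// Docker already has an official convenience script.
		cmd = "curl -fsSL https://get.docker.com | sudo sh"
	case "containerd":
		// Hypothetical: fetch a pinned release tarball and unpack it.
		cmd = fmt.Sprintf(
			"curl -fsSL https://github.com/containerd/containerd/releases/download/v%[1]s/containerd-%[1]s-linux-amd64.tar.gz | sudo tar -C /usr/local -xz",
			version)
	case "cri-o":
		// Hypothetical placeholder: an install script comparable to
		// get.docker.com (see cri-o/cri-o#4343 for the upstream discussion).
		cmd = "curl -fsSL https://example.com/get-crio.sh | sudo sh"
	default:
		return fmt.Errorf("unknown container runtime %q", runtime)
	}
	_, err := ssh.SSHCommand(cmd)
	return err
}
```

The point is that the provisioner would pick exactly one runtime to install, instead of the base image shipping all three.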
Note that Buildroot doesn't have any package manager (like dpkg), by design. The easiest workaround is probably doing something like the preloaded images.
Then these tarballs need to be hosted somewhere, again like the preloaded images... Most likely the same goes for the packages, should they "disappear" upstream: #9552 |
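A sketch of how such hosted tarballs could be addressed, in the same spirit as the preloaded images; the bucket name and naming scheme below are made up for illustration:

```go
package download

import "fmt"

// Hypothetical bucket; the preload tarballs live in a similar GCS bucket.
const runtimeTarballBucket = "https://storage.googleapis.com/minikube-runtime-tarballs"

// runtimeTarballURL builds the download URL for a self-contained tarball of
// one container runtime (binaries plus systemd units), to be unpacked onto
// the Buildroot root filesystem, since there is no package manager to use.
func runtimeTarballURL(runtime, version, arch string) string {
	return fmt.Sprintf("%s/%s-%s-%s.tar.lz4", runtimeTarballBucket, runtime, version, arch)
}

// runtimeTarballChecksumURL points at a sidecar checksum file, so the
// provisioner can verify the download before extracting it as root.
func runtimeTarballChecksumURL(runtime, version, arch string) string {
	return runtimeTarballURL(runtime, version, arch) + ".sha256"
}
```

For example, runtimeTarballURL("cri-o", "1.20.0", "amd64") would give .../cri-o-1.20.0-amd64.tar.lz4; hosting our own copies would also help if the upstream packages disappear (#9552).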
Here are the matching sections that would be removed from the kicbase:
Note that docker and containerd share containerd and runc here.
|
@azhao155 are you interested in looking at this? |
/assign @azhao155 |
I think we might not want to do this, as it would add a lot of CI complexities in building and pushing |
The complexity is already there, since upstreams have issues providing consistent packaging. Installing Docker is easy, but adding cri-o or cri-dockerd or containerd is a pain to maintain... |
This feature will not make it to the ISO and KIC targets, in time for the 1.26 release |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
While this is still a good idea, I don't see enough bandwidth to handle the infrastructure part of this task |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close |
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Currently we include all three CRI runtimes in the base image:
This makes the image somewhat big, compared to having just one.
ISO
KIC
We only start one of them, but the others still take up space on the disk.
(Actually we always start docker, as long as we are using the "docker machine" API... but we stop it again quickly afterwards, if using the containerd or cri-o runtimes.)
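Roughly what that amounts to, as an illustrative sketch (the helper and service names are assumptions, not the exact minikube code):

```go
package cruntime

import "fmt"

// Runner executes commands inside the minikube guest (over SSH);
// a stand-in for minikube's command runner interface.
type Runner interface {
	Run(cmd string) error
}

// disableOthers stops the runtimes we did not ask for, so only the selected
// one keeps running (they still occupy disk space in the image, though).
func disableOthers(r Runner, selected string) error {
	for _, svc := range []string{"docker", "containerd", "crio"} {
		if svc == selected {
			continue
		}
		if err := r.Run(fmt.Sprintf("sudo systemctl stop %s", svc)); err != nil {
			return fmt.Errorf("stopping %s: %w", svc, err)
		}
	}
	return nil
}
```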
One alternative to doing three "base" images would be to use the "provisioner".
This is a docker-machine feature that is used to install docker on the machine.
https://github.com/docker/machine/tree/master/libmachine/provision
So the base image wouldn't have any of them (except maybe runc) from the start. Instead they would be downloaded and installed from separate tarballs/packages.
This is how we currently handle different kubernetes versions, for instance.
It's also how we handle the preload, instead of making three "node" images:
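For context, the preload tarball name already encodes both the kubernetes version and the container runtime, which is why one base image can serve all three runtimes; a sketch of that naming (the schema version and exact format are illustrative, not authoritative):

```go
package download

import "fmt"

// preloadName builds the kind of file name used for the preload tarballs,
// which bundle the container images for one kubernetes version and one
// container runtime. The schema version ("v8") is illustrative.
func preloadName(k8sVersion, runtime string) string {
	return fmt.Sprintf("preloaded-images-k8s-v8-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
}
```

For example, preloadName("v1.20.0", "containerd") gives something like preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4.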
As compared with kind: docker.io/kindest/node:v1.20.0 (always uses containerd-overlay2).