consider using minikube image build
#5739
Comments
cc: @aaron-prindle |
sounds like a good idea 👍🏼 is @aaron-prindle planning to work on this? |
See kubernetes/minikube#10330 for details |
Moving this out of current milestone. |
@tejal29 is there a workaround we can use to make skaffold work with minikube and the containerd runtime until this is fixed, given that this has now been pushed out? |
If you don't want to use "minikube image build" but you do want to support containerd instead of dockerd, then you need to integrate BuildKit yourself... But the default container runtime is still Docker, and while it is, the old method should still work. Here are the needed steps (and I guess you can see why there was a need for a simpler abstraction for it): |
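For illustration, here is a rough sketch of what "integrating BuildKit yourself" could look like. It assumes buildkitd is already running inside the minikube node and listening on its default socket, and that the build context is available on the node; none of these details come from this thread.

# Sketch only: drive BuildKit directly inside the minikube node.
# The socket path and context paths are assumptions, not verified against minikube.
minikube ssh -- sudo buildctl \
  --addr unix:///run/buildkit/buildkitd.sock \
  build --frontend dockerfile.v0 \
  --local context=/path/to/src \
  --local dockerfile=/path/to/src \
  --output type=image,name=docker.io/library/myapp:dev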
Any estimates on when this is being released? Thank you for the great work :) |
I created a PR that will autostart buildkit, so once it's merged and released |
That's great news :) Would it also work similarly for cri-o runtime in minikube? (skaffold dev command doesn't work right now with cri-o runtime - hoping this release would resolve that) |
The script mentioned is the interim measure, until we can upgrade to buildkit 0.9.0 which finally has proper support for systemd socket activation*, so that BuildKit will start automatically...
* http://0pointer.de/blog/projects/socket-activation.html
For the cri-o container runtime we are using "podman build", and the support for that should already be included (using the same socket activation, but with a different socket for podman).
docker (dockerd):
Please let us know if there is anything else needed?
$ minikube image build --help
Build a container image, using the container runtime.
Examples:
minikube image build .
Options:
--build-env=[]: Environment variables to pass to the build. (format: key=value)
--build-opt=[]: Specify arbitrary flags to pass to the build. (format: key=value)
-f, --file='': Path to the Dockerfile to use (optional)
--push=false: Push the new image (requires tag)
-t, --tag='': Tag to apply to the new image (optional)
Usage:
  minikube image build PATH | URL | - [flags] [options]
Tried to add all features from https://skaffold.dev/docs/pipeline-stages/builders/custom/
|
@afbjorklund thank you for the info! I tried the custom skaffold script but was having issues running it with the "skaffold dev" command, and I was unable to find related issues online. It would be of great help if you could describe the steps for creating a custom skaffold.yml file for image builds with the cri-o runtime that works in harmony with the "skaffold dev" command. I get this issue right now when running that command:
|
If I understand correctly, the build shell script (wrapping minikube) would look something like:
#!/bin/sh
minikube image build --tag="${IMAGE}" --push=${PUSH_IMAGE:-false} ${BUILD_CONTEXT}
Otherwise you would have to use "minikube podman-env" and teach skaffold to run podman. And then do the same for buildkit and buildctl, which was why we did the |
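For completeness, a minimal sketch of how such a wrapper could be wired into skaffold's custom builder. The file name, image name, and apiVersion below are illustrative placeholders, not taken from this thread:

# Illustrative only: save the wrapper above as build.sh, then point
# skaffold's custom builder at it (apiVersion and image name are placeholders).
cat > skaffold.yaml <<'EOF'
apiVersion: skaffold/v2beta13
kind: Config
build:
  artifacts:
  - image: skaffold-example
    custom:
      buildCommand: ./build.sh
EOF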
That makes sense :) For me, skaffold dev doesn't start the custom build; it errors out directly with the previously mentioned exception. I will try to find a way to trigger the custom build even with non-Docker runtimes; maybe that will work. Then I would be able to use the "minikube image build" functionality. Thank you for the help though! Much appreciated :) |
This is the same bug as in #5494: skaffold is calling "minikube docker-env" even if the cluster is not running Docker at all, like when using CRI-O or containerd as the CRI.
It seems that docker is hard-coded for minikube, even when using a separate custom build script. The user doesn't even need to install Docker at all, just
So I don't know why it ended up in the "docker" client at all? It should have used a "custom" builder: https://skaffold.dev/docs/tutorials/custom-builder/
@hkdeman : But it seems that Docker is hard-coded in even more places in Skaffold, so it probably needs some work... I created a ticket for minikube, to remove all dependencies on Docker: kubernetes/minikube#11059
There is no Docker client, no Docker daemon, no Docker hub. Only Kubernetes (and OCI and CRI etc). But we're also not sure it is worth it, so Docker continues to be the default runtime in minikube - for now. Most of the "alternatives" are just the same code anyway, just donated to moby or distribution or whatever. |
@afbjorklund is there a way to get the "skaffold dev" command running now, or do we have to wait for the Docker dependencies to be removed? Also, any estimate on when that might happen (if it happens at all)? This is pretty major for us, as CRI-O has become a crucial part and skaffold was already a big part of our app development cycle. If there is any way we can get the command running with different runtimes (even a tiny hack), it would be super appreciated! |
The only way (to make docker-env work) would be to start the Docker daemon:
Other than that, since you want to run CRI-O, I suppose you could replace "docker-env" with "podman-env" in your Skaffold? But these are all hacks. You can't use Docker to talk to CRI-O. So skaffold needs to stop using "docker-env".
EDIT: Never mind, I forgot that we masked it and that minikube blocks the command - even if the unit is actually running.
❌  Exiting due to MK_USAGE: The docker-env command is only compatible with the "docker" runtime, but this cluster was configured to use the "crio" runtime.
$ minikube podman-env
export CONTAINER_HOST="ssh://docker@127.0.0.1:49207/run/podman/podman.sock"
export CONTAINER_SSHKEY="/home/anders/.minikube/machines/minikube/id_rsa"
export MINIKUBE_ACTIVE_PODMAN="minikube"
# To point your shell to minikube's podman service, run:
# eval $(minikube -p minikube podman-env)
$ eval $(minikube -p minikube podman-env)
$ podman-remote images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.21.2 106ff58d4308 7 weeks ago 127 MB
k8s.gcr.io/kube-controller-manager v1.21.2 ae24db9aa2cc 7 weeks ago 121 MB
k8s.gcr.io/kube-scheduler v1.21.2 f917b8c8f55b 7 weeks ago 51.9 MB
k8s.gcr.io/kube-proxy v1.21.2 a6ebd1c1ad98 7 weeks ago 133 MB
gcr.io/k8s-minikube/storage-provisioner v5 6e38f40d628d 4 months ago 31.5 MB
docker.io/kindest/kindnetd v20210326-1e038dc5 6de166512aa2 4 months ago 120 MB
k8s.gcr.io/pause 3.4.1 0f8457a4c2ec 6 months ago 690 kB
docker.io/kubernetesui/dashboard v2.1.0 9a07b5b4bfac 7 months ago 229 MB
k8s.gcr.io/coredns/coredns v1.8.0 296a6d5035e2 9 months ago 42.6 MB
k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 11 months ago 255 MB
docker.io/kubernetesui/metrics-scraper v1.0.4 86262685d9ab 16 months ago 37 MB |
@afbjorklund I tried the podman-remote commands you sent and that makes sense, but I'm not sure I understand how this allows the "skaffold dev" command to run? I did the unmasking and docker start, but when running "skaffold dev" it still errors. I might not fully understand how this works yet; let me do some research on my side. Thank you for the help though! Also, what did you mean by "you could replace "docker-env" with "podman-env" in your Skaffold"? |
Since Skaffold always calls "docker-env", you might be able to replace the string with "podman-env":
cmd, err := cluster.GetClient().MinikubeExec("docker-env", "--shell", "none", "-p", minikubeProfile)
if err != nil {
return nil, fmt.Errorf("executing minikube command: %w", err)
}
out, err := util.RunCmdOut(cmd)
if err != nil {
return nil, fmt.Errorf("getting minikube env: %w", err)
}
But you also have to replace "docker" with "podman" in your PATH, or it will try to run things locally... That is, the docker commands that were supposed to be run in the minikube cluster would run on the host. |
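As a concrete illustration of that PATH trick, a sketch under assumptions (the shim directory and file names are mine, not from this thread):

# Rough sketch: shadow "docker" with a shim that forwards to podman --remote,
# so docker commands run by skaffold end up talking to minikube's Podman
# (requires podman-env to be loaded; directory name is a placeholder).
mkdir -p ~/bin-shim
cat > ~/bin-shim/docker <<'EOF'
#!/bin/sh
exec podman --remote "$@"
EOF
chmod +x ~/bin-shim/docker
export PATH="$HOME/bin-shim:$PATH"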
If you are using containerd, there is currently no minikube alternative to "docker-env" and "podman-env". containerd upstream says that if you wanted something user-friendly, you should have stuck with docker... |
@afbjorklund I got it partially working! I ran minikube with the "docker" driver and "cri-o" runtime. Then I did the "docker-env" replacement with "podman-env" in the skaffold source code and built it locally. "skaffold dev" now runs fine (with the local skaffold build). However, it publishes the newly built images to a different registry and is thus unable to run them; I get an error that it cannot find the image with that particular random tag. Also, I noticed that it doesn't actually run the custom build, it just finds the Dockerfile and builds that. Not sure why that is happening. I also tried to build those images in the minikube registry context (with podman-env) but skaffold doesn't find those images either. Any ideas on how I can get skaffold to find those built images? I did the alias for docker -> podman but that didn't seem to work either. |
I have no idea how Skaffold locates images, but if it only searches Docker it will not be able to find images available for CRI-O. However, the images should be readily available for use as Kubernetes pod images, which is what minikube is focusing on...
What does this mean? Did you use --push, and a registry? Minikube by default doesn't use any local registry, just what is available in the local container runtime and the external registries. It is possible to deploy a registry in the cluster and access it using |
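One possible way to run an in-cluster registry is minikube's registry addon; the port-forward and image names below are assumptions for illustration, not necessarily what was meant above:

# Illustrative only: enable the registry addon and reach it from the host
# through a port-forward, then tag and push to localhost:5000.
minikube addons enable registry
kubectl port-forward --namespace kube-system service/registry 5000:80 &
docker tag myapp:dev localhost:5000/myapp:dev
docker push localhost:5000/myapp:dev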
When the "skaffold dev" command runs, it actually does the build perfectly and tags them. (Not push it to a registry - my bad) (it just saves it the context I think). Not sure if you know why custom build doesn't work, may be that is the reason why it is unable to find right now. In custom build i use "minikube build' commands instead of "docker/podman build". I will try exploring this soon and will let you know if I find a solution. Thank you for the help so far! |
Thanks for looking into it. Anything built with You can also add |
I'm afraid it is worse than just the script: running a local builder requires Docker, and the results must be in Docker as well.
Once the custom script has been completed, it assumes that the image is saved and loaded into (the local) Docker:
if b.pushImages {
return docker.RemoteDigest(tag, b.cfg)
}
imageID, err := b.localDocker.ImageID(ctx, tag)
if err != nil {
return "", err
}
if imageID == "" {
return "", fmt.Errorf("the custom script didn't produce an image with tag [%s]", tag)
}
That is, skaffold is hard-coded to use Docker and only Docker, and must be rewritten to support containerd (or cri-o).
A workaround might be to use the
But it seems that we need a new command (or perhaps
Not sure why the documentation says "digest", those are only available after pushing - not when running locally:
But getting the "id", that can be done without using Docker. Basically by filtering in current ListImages API. We might {
"id": "sha256:2e4e39e6869bb1ad6ee6c355054294c8ef61c2ff68d881817c8f2debd69ff0cc",
"repoTags": [
"docker.io/library/skaffold-example:v1.29.0-17-g84367d9a5-dirty"
],
"repoDigests": [
],
"size": "3929089",
"uid": null,
"username": "",
"spec": null
}
EDIT: Seems that the only difference that would make is that Skaffold would fail further down:
|
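As a sketch of what "filtering in the ListImages API" could look like from the command line, here is one way to get the id via the CRI tooling. The use of crictl and jq is my assumption for illustration, not a command from this thread:

# Illustrative only: list images via the CRI (no Docker involved) and
# filter by repo tag to extract the image id.
minikube ssh -- sudo crictl images --output json \
  | jq -r '.images[] | select(.repoTags[]? == "docker.io/library/skaffold-example:v1.29.0-17-g84367d9a5-dirty") | .id'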
For hardcoding cri-o, this part would need replacing:
imageID, err := b.localDocker.ImageID(ctx, tag)
Probably with a similar function, calling podman --remote images --format '{{.Id}}' $tag
The libpod client API is somewhat unstable, so everyone uses the podman CLI instead: https://podman.io/blogs/2020/08/10/podman-go-bindings.html
Like so:
func podmanImageID(ctx context.Context, tag string) (string, error) {
cmd := exec.Command("podman", "--remote", "images", "--format", "{{.Id}}", tag)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return "", err
}
return strings.TrimSuffix(out.String(), "\n"), nil
}
To replace the docker-only variant:
$ eval $(minikube podman-env)
$ podman --remote images --format '{{.Id}}' skaffold-example:v1.29.0-17-g84367d9a5-dirty
7eabd71369b09e438d829ccca0b6ccf46b1bcb6ebb56c8c77049f94a3a97636d
But we should probably wrap it with
PS: The "localhost/" prefix on unqualified image names (ones without a registry)
Fortunately it is able to find the images anyway, even with those idiosyncrasies.
Another approach would be to just give up, and run Podman using the Docker client:
It wouldn't help with the containerd runtime, but it could be a workaround for the cri-o runtime... |
@afbjorklund the custom script works now (it was some cache issue). However, I reached the same issue you mentioned earlier with "custom script didn't produce an image with tag". I understand what you are trying to say about changing the source code to take the image id from podman instead of localDocker. Unfortunately, I don't have the time to get that done now, but whenever I do I will try to tackle it. I am not sure I understood your other approach though, with changing the symbolic links. Also, "$CONTAINER_SSHKEY" is empty for me and I'm not sure how you got it. This is really great information though, much appreciated! |
@hkdeman : podman uses two different variables, host and sshkey:
$ minikube start --container-runtime=cri-o
...
$ minikube podman-env
export CONTAINER_HOST="ssh://docker@127.0.0.1:49162/run/podman/podman.sock"
export CONTAINER_SSHKEY="/home/anders/.minikube/machines/minikube/id_rsa"
export MINIKUBE_ACTIVE_PODMAN="minikube"
# To point your shell to minikube's podman service, run:
# eval $(minikube -p minikube podman-env)
docker unfortunately doesn't have a variable for the key:
$ minikube start --container-runtime=docker
...
$ minikube docker-env --ssh-host --ssh-add
export DOCKER_HOST="ssh://docker@127.0.0.1:49167"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env --ssh-host)
Identity added: /home/anders/.minikube/machines/minikube/id_rsa (/home/anders/.minikube/machines/minikube/id_rsa)
so the regular OpenSSH infrastructure is used instead. This is when using SSH, instead of the (legacy) TCP. The Docker socket location is hardcoded, thus the workarounds.
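As an illustration of the kind of workaround referred to here (my assumption of the approach, not the author's exact steps): inside the minikube node, the hardcoded Docker socket path can be pointed at Podman's Docker-compatible API socket.

# Illustrative only: make the Docker socket path resolve to Podman's socket
# inside the minikube node, so a Docker client ends up talking to Podman.
minikube ssh -- 'sudo ln -sf /run/podman/podman.sock /var/run/docker.sock'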
But with the workarounds, Podman should answer Docker:
$ docker version
Client: Docker Engine - Community
Version: 20.10.8
API version: 1.40
Go version: go1.16.6
Git commit: 3967b7d
Built: Fri Jul 30 19:54:27 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: linux/amd64/ubuntu-20.04
Podman Engine:
Version: 3.2.2
APIVersion: 3.2.2
Arch: amd64
BuildTime: 1970-01-01T00:00:00Z
Experimental: true
GitCommit:
GoVersion: go1.15.2
KernelVersion: 5.4.0-80-generic
MinAPIVersion: 3.1.0
Os: linux
Engine:
Version: 3.2.2
API version: 1.40 (minimum version 1.24)
Go version: go1.15.2
Git commit:
Built: Thu Jan 1 00:00:00 1970
OS/Arch: linux/amd64
Experimental: true
And it should show the same images as used by CRI-O. But this workaround won't work for containerd anyway, so I guess the only way for Skaffold is to save and load the image:
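A rough sketch of what save-and-load could look like; the image name is a placeholder and these are not necessarily the exact commands meant above:

# Illustrative only (image name is a placeholder): minikube can take an image
# from the local Docker daemon and load it into the cluster's runtime in one step.
minikube image load myapp:dev
# A manual equivalent would be a docker save piped into a containerd import
# on the node (e.g. via ctr -n k8s.io images import).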
That means that there will be two copies now, one in the Kubernetes cluster and one in the local Docker daemon. So in that case it would probably be better to build with the local Docker daemon to start with, instead of in the cluster.
The recommendation is to continue with the Docker container runtime until Kubernetes 1.23. Maybe Skaffold has come up with a Kubernetes-native method by then, or maybe by using Kaniko? The image would still have to be copied around when using a registry, but "only" inside the cluster. This would be needed anyway when running a multi-node cluster - copying the image around, that is. |
See #5773 for how to get an image built in the local Docker daemon over to a minikube Kubernetes cluster using containerd.
When not using a registry that is, otherwise it would just push and pull as usual. (i.e. docker pushes and containerd pulls) |
@aaron-prindle did we discuss this in last week's triage meeting? |
No, I put it as a topic for this week's meeting (8/30/2021) |
|
minikube has a new feature called image build that works on every runtime and does not need Docker to build the image.
This is a new feature; consider using it:
https://minikube.sigs.k8s.io/docs/handbook/pushing/#8-building-images-to-in-cluster-container-runtime
@spowelljr will be generating new benchmarks which will be available here https://minikube.sigs.k8s.io/docs/benchmarks/imagebuild/