
Support k3s in --docker mode #3654

Open
leoluk opened this issue Aug 3, 2020 · 10 comments
Labels
enhancement New feature or request

Comments

@leoluk

leoluk commented Aug 3, 2020

k3s can use a local Docker daemon in --docker mode.

This is similar to Docker for Desktop mode, where built images are immediately available in the cluster.

Currently, this isn't properly detected.

@jazzdan jazzdan added the bug Something isn't working label Aug 3, 2020
@nicks
Member

nicks commented Aug 3, 2020

Hmmm....can you link to documentation on this?

This is the first I've heard of this, and we do talk to the K3d team about this sort of interop from time to time.

@nicks
Member

nicks commented Aug 3, 2020

(In general, what we would do in this case is ask the K3s team to provide some sort of protocol for determining when K3s is in this mode, like a ConfigMap created by the cluster, and then Tilt would read this ConfigMap...so I would need to read more about how they're advertising/documenting this)

@leoluk
Author

leoluk commented Aug 3, 2020

Here's the documentation: https://rancher.com/docs/k3s/latest/en/advanced/#using-docker-as-the-container-runtime

By the way, I tried to pretend that it's a docker-desktop cluster by renaming the context, and it works flawlessly!

@leoluk
Author

leoluk commented Aug 3, 2020

Reproduction instructions:

```shell
curl -sfL https://get.k3s.io | sh -s - --docker
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
sed -i 's/  name: default/  name: docker-desktop/g' $KUBECONFIG
sed -i 's/current-context: default/current-context: docker-desktop/g' $KUBECONFIG
sed -i 's/cluster: default/cluster: docker-desktop/g' $KUBECONFIG

tilt up
```
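
The renames above can also be sketched as a tiny Python helper (purely illustrative, not part of k3s or Tilt) that applies the same substitutions to the kubeconfig text:

```python
# Sketch: apply the same "pretend to be docker-desktop" renames that the
# sed commands above perform. Operates on kubeconfig text, illustrative only.

def disguise_as_docker_desktop(kubeconfig_text: str) -> str:
    """Rename the default k3s context/cluster entries to docker-desktop."""
    replacements = [
        ("  name: default", "  name: docker-desktop"),
        ("current-context: default", "current-context: docker-desktop"),
        ("cluster: default", "cluster: docker-desktop"),
    ]
    for old, new in replacements:
        kubeconfig_text = kubeconfig_text.replace(old, new)
    return kubeconfig_text
```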

@nicks
Member

nicks commented Aug 3, 2020

Hmmm....those instructions are about setting up K3s on a Node.

I'm not sure this is a safe or recommended way to run K3s for local development, but let me point some people in the K3s project at this issue and see what they say.

We usually see people using K3d to run K3s for local development, see our instructions here: https://github.com/tilt-dev/k3d-local-registry

@leoluk
Author

leoluk commented Aug 3, 2020

Perhaps the real feature request here is being able to disable image pushing?

Skaffold has a local override: https://skaffold.dev/docs/environment/local-cluster/#manual-override

@nicks nicks added enhancement New feature or request and removed bug Something isn't working labels Aug 3, 2020
@nicks
Member

nicks commented Aug 4, 2020

Ya, here's an overview of some of the problems with this approach:
https://minikube.sigs.k8s.io/docs/drivers/none/

You can disable image pushing today with Tilt with custom_build(disable_push) https://docs.tilt.dev/api.html#api.custom_build
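
For reference, a minimal Tiltfile sketch using `custom_build` with `disable_push` (the image name, build command, and deps here are placeholders):

```python
# Tiltfile sketch (Starlark): build locally with Docker, skip the push step.
# 'my-image' and the build command/deps are placeholders for your project.
custom_build(
  'my-image',
  'docker build -t $EXPECTED_REF .',
  deps=['.'],
  disable_push=True,
)
```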

I don't think we want to support a disable-push option as easy as Skaffold's. I worry that adding options for fundamentally unsafe modes leads to checkboxes that kill (see: https://limi.net/checkboxes), i.e., situations where it's way too easy to enable a bunch of modes that interact in weird and broken ways (and in fact we've had complaints about this already w/r/t custom_build's disable_push).

@leoluk
Author

leoluk commented Aug 4, 2020

> Ya, here's an overview of some of the problems with this approach:
> https://minikube.sigs.k8s.io/docs/drivers/none/

It's not ideal for local development because it pollutes the local Docker daemon, but there are valid use cases for it (like a CI environment, which is what we use it for). It has the big advantage of being very fast since builds are local, and it shaves precious seconds off the build time that would be spent pushing to and pulling from a registry.

> You can disable image pushing today with Tilt with custom_build(disable_push) https://docs.tilt.dev/api.html#api.custom_build

We're building an open source project that comes with a Tiltfile. We don't want any environment-specific config in it such that it works out of the box with whatever local cluster other contributors are using. It would also mean not having any of the conveniences of using docker_build (dependency resolution, etc).

(same about allow_k8s_contexts, where to put config specific to my local environment?)
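
One way to keep per-environment settings like this out of a shared Tiltfile is to read them from the environment; a sketch, assuming Tilt's `os.getenv` is available in the Tiltfile and using a made-up `TILT_ALLOW_CONTEXT` variable name:

```python
# Tiltfile sketch: let each contributor allow their own local context
# without editing the shared Tiltfile. TILT_ALLOW_CONTEXT is hypothetical.
extra_context = os.getenv('TILT_ALLOW_CONTEXT', '')
if extra_context:
  allow_k8s_contexts(extra_context)
```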

> I don't think we want to support a disable-push option as easy as Skaffold's. I worry that adding options for fundamentally unsafe modes leads to checkboxes that kill (see: https://limi.net/checkboxes), i.e., situations where it's way too easy to enable a bunch of modes that interact in weird and broken ways (and in fact we've had complaints about this already w/r/t custom_build's disable_push)

In terms of security, it makes no difference - you can easily break out of kind or k3d; they're not security boundaries.

Agreed about the dangers of excess configurability, but doesn't this particular checkbox - certain clusters not requiring pushes - already exist? Why only support it for minikube and docker-desktop? What about custom minikube deployments?
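
The hard-coded allowlist being questioned here amounts to something like the following sketch (a simplified illustration keyed on context name; not Tilt's actual detection code):

```python
# Simplified sketch of a "does this cluster share the local Docker daemon?"
# heuristic keyed on context name, as discussed above. Not Tilt's real logic.
LOCAL_DOCKER_CONTEXTS = {"docker-desktop", "docker-for-desktop", "minikube"}

def push_required(context_name: str) -> bool:
    """Return True when images must be pushed to a registry first."""
    return context_name not in LOCAL_DOCKER_CONTEXTS
```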

@nicks
Member

nicks commented Aug 4, 2020

hmmm...the minikube doc above says "Most users of this driver should consider the newer Docker driver, as it is significantly easier to configure and does not require root access. The ‘none’ driver is recommended for advanced users only.", and lists issues of "Decreased security", "Decreased reliability", and "Data loss".

But you say: "In terms of security, it makes no difference" - I'm having trouble reconciling this with the minikube documentation. Is there a document you're basing that on, or is this your own independent security analysis?

But if you understand the security risk, I think there are two paths forward:

  • in the short term, you can rename the kubectl context to docker-for-desktop, and trick tilt, right?
  • in the long term, I could imagine Tilt supporting a config map like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tilt-cluster-config
  namespace: kube-public
data:
  useLocalDockerDaemon: "true"
```
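
If Tilt consumed such a ConfigMap, the interpretation step might look like this hypothetical helper (note that ConfigMap `data` values always arrive as strings, so the flag must be parsed):

```python
# Hypothetical sketch: interpret the proposed tilt-cluster-config ConfigMap.
# ConfigMap data values are strings, so "true"/"false" need explicit parsing.

def use_local_docker_daemon(configmap_data: dict) -> bool:
    """Read the proposed useLocalDockerDaemon flag, defaulting to False."""
    value = configmap_data.get("useLocalDockerDaemon", "false")
    return value.strip().lower() in ("true", "1", "yes")
```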

@leoluk
Author

leoluk commented Aug 4, 2020

> hmmm...the minikube doc above says "Most users of this driver should consider the newer Docker driver, as it is significantly easier to configure and does not require root access. The ‘none’ driver is recommended for advanced users only.", and lists issues of "Decreased security", "Decreased reliability", and "Data loss".

minikube in --driver=none mode essentially installs a single-node k8s cluster on your host using kubeadm in a very non-hermetic fashion. When I tried it in a clean VM, I could get it to work with tilt using the same docker-desktop trick, but it overwrote my local .kube/config and left behind dozens of files after running minikube delete, including a defunct kubelet.service in a place where it clearly doesn't belong (/lib).

Can definitely see why they declare it a dangerous feature :-)

k3s, on the other hand, is explicitly designed to run directly on a host and it takes great care to namespace all of its components. Without --docker, it even brings its own containerd that peacefully co-exists with whatever else is running there, and it leaves no traces after running k3s-uninstall.sh. With --docker, it's less hermetic and won't clean up old containers/images but still doesn't break anything or overwrite existing configs.

In a different project with Bazel, we just use plain k3s for developing in a VM without any Docker daemon running. Images are loaded straight into containerd and the deployments are updated with the new hashes:

https://github.com/leoluk/NetMeta/blob/6fe1e53651ed32d3582eca8ce80ffd4c22e6a40a/scripts/build_containers.sh#L14

> But you say: "In terms of security, it makes no difference" - I'm having trouble reconciling this with the minikube documentation. Is there a document you're basing that on, or is this your own independent security analysis?

Last time I checked, "Kubernetes in Docker" tools like k3d, kind, and minikube in Docker mode have to run the kubelet in a privileged container with root privileges. Access to the k8s API means root privileges on the host.

There's some work going on to get k8s running in user namespaces but it's still a work in progress: rootless-containers/usernetes#42

The only "safe" runtime is normal minikube with VMs.

> But if you understand the security risk, I think there are two paths forward:
>
>   • in the short term, you can rename the kubectl context to docker-for-desktop, and trick tilt, right?

Yes, that works as expected.

Sounds perfect! (and thanks for responding so quickly!)

nicks added a commit that referenced this issue May 27, 2021
docker: change how we model "image builds show up in the cluster immediately"

We used to treat this as a property of the cluster type + the container runtime.

But this made it impossible to support clusters that sometimes use your docker
runtime, and sometimes do not.

For examples, see:
- #4587
- #3654
- #1729
- #4544

This changes the data model so that "image builds show up in the cluster"
is a property of the Docker client, not of the cluster.

This should be much more flexible and correct, and help us support multiple
clusters.

Fixes #4544
nicks added a commit that referenced this issue May 27, 2021
docker: change how we model "image builds show up in the cluster immediately"
nicks added a commit that referenced this issue Jun 7, 2021
docker: change how we model "image builds show up in the cluster immediately" (#4598)
landism pushed a commit that referenced this issue Jun 9, 2021
docker: change how we model "image builds show up in the cluster immediately" (#4598)