Compatibility with containerd? #1265
Thanks @vikas027 - I agree this is something we need to improve. We're taking a look at improving our support for different containers and runtimes now, and this is something we'll take into account. Appreciate the feedback; we hope to have some news here soon.
Any update on this? Docker-in-Docker images are a security risk, and Docker is now deprecated in Kubernetes-based cloud environments, so docker.sock will no longer be found. Here is a related issue about the alternative (deploying the docker CLI without a Docker daemon instance). The conclusion there is that Docker always needs a daemon running with its docker.sock, which confirms there is no alternative to Docker-in-Docker for pulling images on nodes migrated off dockershim to a runtime like containerd.
Dockershim was deprecated in Kubernetes and has been removed in 1.24. This follows the guidance set by the Kubernetes team. This absolutely needs to be prioritized. https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/
@TingluoHuang To underscore the urgency here, Kubernetes' version policy is to support the three most recent minor releases, currently 1.24-1.26. In other words, the Docker runtime is no longer available in any Kubernetes version with upstream support.
Chiming in as an affected consumer, with EKS dropping K8s 1.22 in June. I'm reading what may be some misunderstandings about the capabilities of containerd and the desire to use docker in runner pods. If runner pods were able to call docker and launch containers before, they will still be able to after upgrading to the latest K8s version - at least if you're using the summerwind dind container, or built your own and borrowed the pieces that install dockerd and supervisord. The ARC dind containers are already launched with a privileged security context, so dockerd will still work as long as the node has the docker engine installed and the RunnerDeployment mounts the socket and /var/lib/docker into the runner.
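A minimal sketch of the setup described above, written as a plain Kubernetes Pod rather than ARC's actual RunnerDeployment CRD; the pod name and image tag are placeholders, and the key pieces are the privileged security context plus hostPath mounts of the node's Docker socket and /var/lib/docker:

```yaml
# Illustrative sketch only, not the RunnerDeployment schema: a pod that reuses
# the node's Docker daemon and image cache, as described in the comment above.
apiVersion: v1
kind: Pod
metadata:
  name: runner-docker-host-example        # hypothetical name
spec:
  containers:
    - name: runner
      image: summerwind/actions-runner-dind:latest   # example image; adjust to your own build
      securityContext:
        privileged: true                   # required for running dockerd inside the pod
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock  # reuse the node's Docker daemon
        - name: docker-lib
          mountPath: /var/lib/docker       # reuse the node's image cache
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
        type: Socket
    - name: docker-lib
      hostPath:
        path: /var/lib/docker
        type: Directory
```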
The difference in K8s without dockershim is that we can no longer share the imagefs directory or the runtime socket with the runner pods. This does not entirely affect the ability to process workflow jobs - even workflows that build or use containers with docker will continue to work - but without containerd support, image pulls in new and ephemeral runners will always require a full image fetch from the registry. Mounting the docker socket and imagefs into runner pods has been a great help: we have processed 2,325,917 (accurate as of now) workflow runs with ARC, and saved at least 2/3 of that number times our 3GB image size in image pulls and the associated time. It would be fantastic to be able to continue that with containerd.

Additionally, imagefs storage sizing will need to be reconsidered to accommodate the extra copies of container images stored locally in each pod's overlayfs on worker nodes. I've experimented with simply continuing to install docker in our worker node AMI, but kubelet doesn't try to manage anything docker-related under disk pressure, so the node just fills up and dies. Looking forward to updates here.
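One common mitigation for the full-image-fetch cost described above is a pre-puller DaemonSet that warms containerd's local cache on every node before runner pods start. A sketch under that assumption follows; the image name and DaemonSet name are placeholders, not part of ARC:

```yaml
# Sketch of a node image pre-puller: each node pulls the large runner/job image
# once in an init container, then idles on a pause container so the pod stays up.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller                      # hypothetical name
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      initContainers:
        - name: prepull-runner-image
          image: ghcr.io/example/org-runner:latest   # placeholder for your ~3GB image
          command: ["sh", "-c", "true"]              # exit immediately; the pull itself is the point
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9           # keeps the DaemonSet pod alive cheaply
```

This only helps with images shared across jobs; each ephemeral runner still has its own overlayfs, so imagefs sizing on the nodes still needs headroom for those copies.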
Any updates on this?
Any chance of seeing this feature soon?
Would love to see this feature as well. We have reverted to EC2-based runners for container jobs for now.
I use the Bottlerocket AMI on EKS clusters, which uses containerd and does not use docker or the docker socket.
Custom actions fail with errors like these. As a workaround, is there a way I can use a pre-pulled docker image instead of the GitHub action trying to build an image on the fly?