Move dockerhub kindest/node to a non rate limited registry #1895
I think we may even want to primarily move off of it going forward, TBD. We've been discussing dockerhub mitigations at the SIG Testing level but haven't managed to move on much yet. This is a higher priority for me though*; other subprojects we don't own can move away from dockerhub themselves, and if any projects are using it, Kubernetes doesn't officially support them in that and they would have received notice from dockerhub directly.

* as sig-testing we also need to mitigate dockerhub usage in e2e.test in kubernetes itself, and will still try to provide mitigations and guidance for CI / subprojects.
Kubernetes provides k8s.gcr.io to subprojects but it's a bit problematic for us:
The latter may be a good plan anyhow, but the former is ... problematic. The promotion system also makes image pushes a bit onerous: even if you have automated pushes, you must manually craft a YAML PR to request that an image be promoted from the staging registry to production. https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io

quay.io and the github package registry are obvious alternatives to look into.
kubernetes/test-infra#19477 (comment) this may be the simple route 🙃
Mirroring and/or switching registries going forward is still on the table. We can't mirror the existing images to k8s.gcr.io due to the promotion process and the one-tag-one-image policy though. I've applied to the docker OSS program.
I totally misread this and then went OOO. I'm not clear if we should move forward on this approach, particularly these constraints:
Given kind does not currently even control a blog or press releases (versus the kubernetes project at large), we should already be more or less compliant on the last two points here. On the plus side: I've not heard any additional user concern about this yet, which might be because a typical workflow does not involve pulling node / kind images often outside of CI, and CI workflows are impacted more broadly than just by our images.
cc @tao12345666333 I'm a bit concerned about breaking our Chinese user base while we're at it (I think we have a number of users that depend on dockerhub being available), though perhaps the new github package registry could work 🤔
@BenTheElder The new github package registry does not work perfectly and is not available in some regions. If needed, I can provide a container registry mirror (in China). We only need to introduce it in the document. |
I'm not sure how we could provide one, exactly. |
I forgot to mention that I was scheduled to meet with Docker this week to discuss KIND and their program. If that doesn't work out, I will reach out about mirroring images from k8s.gcr.io before we move there with the rest of the subprojects @tao12345666333 🙏 |
@BenTheElder are there terms and conditions that we as a project need to adhere to? |
ok, good news! |
So far upon revisiting I'm not concerned by anything yet, and we were quite clear that this is only the kind project, not Kubernetes, not SIG Testing, not any of our employers, etc. But we shall see as I continue along with the process 😅
@BenTheElder Ack. If any paperwork shows up, please holler.
Just a lowly KinD desktop user here. I seem to be getting rate-limited when downloading images. Got plenty of bandwidth, but since I'm doing a lot of work with K8s Operators and Helm that download a lot of images, KinD is adding to Docker, Inc.'s quota decrement for my network's NAT'd IP. Cheers, and happy new year! :) |
Well, this may be more urgent: #3124
Kind should only pull the node image itself once, and generally we expect that to be relatively rare: once you've used a node image version it is pulled, and you should not need to re-pull it unless you delete it. A typical kind cluster involves a single image that we host; all other images needed are either packed into it or come from your workloads. If you mean images pulled to run within the nodes, you can use `kind load` to side-load those. In the meantime, hosting your own mirrors is also an option, or persisting a copy of the images as a tarball and loading it.
Currently thinking between:
I think the former is more obvious, but it will make it harder to list images available with skewed kind versions, which mostly works OK today but isn't guaranteed. The second is more complex to parse, e.g. if we have pre-release versions. See #3124 (comment). The latter would also make it easier for kind to do clever client-side tag listing and comparison with constraints on valid images, I think, but the former is super simple for everyone to implement. Note that there's a portable API to list tags.
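To illustrate the parsing concern with the combined form, here's a small sketch in Python. The tag scheme (`<kubernetes_version>-kind.<kind_version>`) is purely hypothetical, not an agreed-upon format; it only shows how pre-release Kubernetes versions, which themselves contain hyphens, complicate splitting the tag:

```python
import re

# Hypothetical combined tag scheme "<kubernetes_version>-kind.<kind_version>",
# e.g. "v1.27.3-kind.v0.20.0" -- NOT an agreed-upon format, only an
# illustration of why pre-release Kubernetes versions (which contain hyphens
# themselves, e.g. "v1.28.0-alpha.2") make the combined form trickier to parse.
TAG_RE = re.compile(
    r"^(?P<k8s>v\d+\.\d+\.\d+(?:-[0-9A-Za-z.]+)?)-kind\.(?P<kind>v\d+\.\d+\.\d+)$"
)

def parse_tag(tag):
    """Split a combined tag into (kubernetes_version, kind_version) or None."""
    m = TAG_RE.match(tag)
    return (m.group("k8s"), m.group("kind")) if m else None
```

A flat one-version-per-tag scheme (the first form) avoids this parsing entirely, at the cost of listing tags per kind version.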
Although this may come as a surprise, I am able to offer a mirror for the China region if necessary.
I prefer the second form,
|
Thanks @tao12345666333. If the default is no longer on dockerhub, users who need a mirror will currently have to override it. I suppose if this proves problematic we can introduce some env var to tell kind to rewrite registry.k8s.io/kind => mirror independent of config / flags.
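For reference, kind's cluster config already lets users point containerd at a mirror via `containerdConfigPatches`; a sketch, where the mirror endpoint is a placeholder:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://mirror.example.com"]
```

This only redirects pulls performed inside the nodes, not the node image pull itself, which is why an env-var override on the kind side would still be needed.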
In both forms the tentative plan is to include both the kind version and the kubernetes version somehow, because we have to stop pushing mutable tags to use registry.k8s.io anyhow, so image name+tag must be fully unique. It may also help clarify version support. I think the second form might be a bit more confusing and complex to parse. The point about other registries and nesting is interesting; other than dockerhub, do you have any examples? Ostensibly in the OCI spec multi-level nesting is valid, but I know dockerhub doesn't seem to support it.
The original ask was mirroring, but I think given the circumstances the plan is to ensure the primary registry is without issue. |
The following are from the top two public cloud vendors in China; neither of their image registry services supports nesting. I have the impression that some other cloud vendors have similar limitations.
Thanks. There's also the possibility of a variant of the first form, which I think is maybe slightly less clear to read than the first form and more similar to the second form, but otherwise has the same effects as the first form without an additional level of nesting. It has the same trade-off: somewhat clearer / easier for users to parse if they wish, but tag list calls are per-kind-version instead of being able to filter more flexibly from just listing node images. OTOH: after adding many, many tags it may be faster to not have to list all tags ever. There are no sophisticated parameters for the tag list call: https://github.com/opencontainers/distribution-spec/blob/main/spec.md#content-discovery
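Since the spec's tag list call has no sophisticated filter parameters, a client would paginate with `n`/`last` and filter locally. A minimal sketch, assuming only what the distribution spec defines; the registry host here is a placeholder:

```python
import json

# Sketch of the OCI Distribution Spec "content discovery" call. The endpoint
# shape (/v2/<name>/tags/list with optional n= and last= pagination params)
# comes from the spec; "registry.example.com" below is just a placeholder.

def tags_list_url(registry, repository, n=None, last=None):
    """Build the tag-list URL; n limits page size, last resumes pagination."""
    url = "https://%s/v2/%s/tags/list" % (registry, repository)
    params = []
    if n is not None:
        params.append("n=%d" % n)
    if last is not None:
        params.append("last=%s" % last)
    return url + ("?" + "&".join(params) if params else "")

def parse_tags(body):
    """Extract tags from a spec response body: {"name": ..., "tags": [...]}."""
    return json.loads(body).get("tags", [])
```

Any filtering by kind version (in either naming form) would happen client-side after fetching the full list, which is the cost being weighed above.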
Yes.
|
I think we have one shot to introduce a better scheme here when we migrate, and we'd better make things clearer and support things like #3053; otherwise I wouldn't bikeshed this particular detail so much.
I wonder if we have some prior art in something like kOps or cluster-API for conventions for naming / tagging dual ($subproject_version, $kubernetes_version) images. Edit: Very scientifically ™️ asking Twitter for input 🤷‍♂️ https://twitter.com/BenTheElder/status/1635877721644630021
We operate a CNCF Harbor-based container registry as a service that has many benefits over most of the other registries out there. There are also features regarding containerized image distribution that might be valuable as well. Data ownership is something very valuable that should not be underestimated. Imagine how many books and tutorials will now become outdated because they refer to images that no longer exist.
I maintain a proxy. Note: it only works for users in China; that is its mission.
Docker has modified the policy. If the Docker-Sponsored Open Source program does not go well, we can also migrate to a personal account without changing the name. |
We're in the OSS program now: #3124 (comment). One of the benefits is lifted rate limits. That's good for one year and then needs renewal. We will still be considering next steps for improving images long-term, but first up is making some fixes to the dockerhub listings related to the OSS program.
I'm also in contact with my fellow Steering members and SIG K8s Infra about this (and this time heavily involved in both ...). The new OSS program has very light requirements and there have been no objections, unlike the initial program, which Kubernetes (not my call) rejected and which left us feeling like we couldn't clearly agree either. Despite all the complaints I'm seeing elsewhere, I would actually say this updated program seems very good. I am however wary of future changes, and am already working on multi-vendor image hosting for the Kubernetes project, so I think we still need to strongly consider our long-term options.
The original issue (rate limiting) should be resolved now by way of participation in the revamped Docker Open Source program. We'll still look at whether we should migrate to registry.k8s.io or elsewhere in a follow-up, but I think we can close this one for now.
What would you like to be added:
A mirror of kindest/node to GCR or another registry
Why is this needed:
On Nov 1, dockerhub will introduce rate limiting on pulls. I am fairly sure this will break our CI, since we pull a lot of kind images.
This is not at all a blocker for us, as we can mirror them into our own registries without issue, but may be useful for the broader community.