Investigate kind replacement for e2e #1391
Comments
As an upstream Kubernetes project, you should really consider testing against upcoming Kubernetes before it releases. EDIT: 1.30 is out anyhow.
Thanks @BenTheElder, that's a great suggestion and I think it's exactly what we've been looking for. It has always felt a bit weird that we were dependent on waiting until after the kind release to test and publish our next release. I didn't know about that option.
cc: @pravarag since you wanted to look into this. I think we could do something like the sketch below:
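The snippet that originally followed isn't preserved in this thread; as a placeholder, here is a minimal sketch of building a kind node image from a Kubernetes source checkout and testing against it, where the paths and the image tag are illustrative assumptions:

```sh
# Build a kind node image from a local kubernetes/kubernetes checkout.
git clone https://github.com/kubernetes/kubernetes
kind build node-image ./kubernetes --image kindest/node:latest

# Bring up a cluster on that image and run the e2e suite against it.
kind create cluster --image kindest/node:latest
```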
@a7i I'd say our master branch could always be building from source, with our tag branches using the released version. The downside to that is we run the risk of getting master blocked on bugs from kind, though.
Wouldn't that still get us blocked on the kind image being released? Unless that's intentional?
It would block new PRs to the tagged branch until that image was available, so maybe the release branches could use a conditional switch like you're suggesting (rough sketch below).
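A rough sketch of what that conditional might look like in an e2e script; the branch check, environment variables, and image tags are all assumptions for illustration, not descheduler's actual CI:

```sh
# Choose the kind node image based on the branch being tested.
if [ "${BRANCH:-master}" = "master" ]; then
  # master: build the node image from a Kubernetes source checkout,
  # so we don't wait on a kindest/node release.
  kind build node-image "${KUBE_ROOT:?path to kubernetes checkout}" --image kindest/node:ci
  KIND_IMAGE="kindest/node:ci"
else
  # release branches: pin to the published image for a released version.
  KIND_IMAGE="kindest/node:v1.30.0"
fi

kind create cluster --image "${KIND_IMAGE}"
```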
For K/K we're fetching kind from HEAD to stay compatible with any Kubernetes breaking changes, but we're also running equivalent CI jobs, so it's possible we'd have a breaking change that you'd have found in the release notes. That said, we're avoiding those as much as possible, and when we're planning one they've generally been pre-announced (like the containerd 2.0 style registry config) in previous release notes, similar to Kubernetes-style deprecations. This isn't always possible when we have to react to e.g. the runc misc cgroup changes, but generally speaking, upgrading to those changes should be desired in a typical CI environment.

We're discussing continuous image builds in the future. Right now we build releases with https://github.com/kubernetes-sigs/kind/blob/0a7403e49c529d22cacd3a3f3606b9d8a5c16ae7/hack/release/build/push-node.sh which, amongst other things, makes the images smaller by compiling out dockershim and cloud providers ... which won't be necessary vs standard builds in 1.31+.
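For reference, one way a downstream project could consume kind from HEAD; the module path is kind's real one, but pinning to @main in CI is an assumption here, not necessarily how the Kubernetes jobs are wired:

```sh
# Install kind from the tip of its main branch (requires a recent Go toolchain).
go install sigs.k8s.io/kind@main

# Sanity check: confirm the freshly built binary is on PATH.
kind version
```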
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
@a7i apologies for letting this issue slip for so long. I've started investigating it again and will share my further inputs/doubts on it soon. Thanks!
Thanks @pravarag, no worries, I totally understand how open source goes and we certainly appreciate your contribution. The images are published quickly now too, so it's not as urgent: https://hub.docker.com/r/kindest/node/tags
FYI you can also cheaply build images from Kubernetes release binaries (recommended usage is only for 1.31+, but it works for older releases too, just with larger images). Multi-arch support isn't there yet, but you could combine the per-arch images with docker manifest if you require that. For quickly building an image at a custom version to use in GitHub Actions, see the release notes at https://github.com/kubernetes-sigs/kind/releases/tag/v0.24.0 and https://kind.sigs.k8s.io/docs/user/quick-start/#building-images. This is much faster and doesn't require compiling, only downloading a build.
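For concreteness, a minimal sketch of that flow, assuming kind v0.24.0+; the version tag is illustrative, and the repository names in the manifest step are hypothetical placeholders:

```sh
# Build a node image from the official release binaries for a given version.
# This downloads a published build instead of compiling Kubernetes.
kind build node-image v1.31.0 --image kindest/node:v1.31.0

# Multi-arch isn't built in yet; per-arch images could be stitched together
# with a manifest list (repository names below are placeholders).
docker manifest create example.com/node:v1.31.0 \
  example.com/node-amd64:v1.31.0 \
  example.com/node-arm64:v1.31.0
```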
Is your feature request related to a problem? Please describe.
Descheduler releases are typically blocked by waiting for a kind node image.
Describe the solution you'd like
Describe alternatives you've considered
Stay behind on releases
Additional context
kubernetes-sigs/kind#3589