allow subprojects to push to k8s-artifacts-prod/ top-level (root)? #716

Closed · listx opened this issue Apr 3, 2020 · 27 comments · Fixed by #856
Labels: area/artifacts (Issues or PRs related to the hosting of release artifacts for subprojects), sig/release (Categorizes an issue or PR as relevant to SIG Release.)

listx (Contributor) commented Apr 3, 2020

Apparently, gcr.io/google-containers/debian-base is a base image used by the community. Post-VDF, this path now resides in the new prod registry at us.gcr.io/k8s-artifacts-prod/debian-base.

For k8s-artifacts-prod, we already have a "legacy" manifest that is responsible for pushing up into the top-level folder here.

I'm opening this issue to ask whether we care to support promoting new images into the top-level folder (no prefix). With the way promoter manifests are set up today, we expect them to push into {asia,eu,us}.gcr.io/k8s-artifacts-prod/<PREFIX>/foo-image. So in this example, if we did it the "right" way, we would push a new debian-base to {asia,eu,us}.gcr.io/k8s-artifacts-prod/base-images/debian-base (I just made up base-images, but the point is that there would be some prefix name here).

Is it desirable to be able to push to {asia,eu,us}.gcr.io/k8s-artifacts-prod/debian-base instead? Given that this is a base image used only in Makefiles and builds, I would guess "no", but I'd like to hear from folks who have built/published this image in the past.

If we go with having a prefix (like we do for all the other subprojects), we need to decide on a name and set that up (I can create a PR, once we decide on a name).
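
For concreteness, a minimal sketch of what a prefixed setup could look like in the promoter-manifest.yaml / images.yaml split used in this repo; the staging repo name, service account, digest, and tag below are placeholders, not a proposal:

  # promoter-manifest.yaml (sketch; all values are placeholders)
  registries:
  - name: gcr.io/k8s-staging-base-images          # hypothetical staging repo
    src: true
  - name: us.gcr.io/k8s-artifacts-prod/base-images
    service-account: k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com
  - name: eu.gcr.io/k8s-artifacts-prod/base-images
    service-account: k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com
  - name: asia.gcr.io/k8s-artifacts-prod/base-images
    service-account: k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com

  # images.yaml (sketch; digest and tag are made up)
  - name: debian-base
    dmap:
      "sha256:<digest>": ["v2.1.0"]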

/cc @BenTheElder @thockin @prameshj @jingyih

justaugustus (Member) commented:

/assign
/sig release
/area release-eng

k8s-ci-robot (Contributor) commented:

@justaugustus: The label(s) area/release-eng cannot be applied, because the repository doesn't have them

In response to this:

/assign
/sig release
/area release-eng

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the sig/release label Apr 3, 2020
destijl (Member) commented Apr 6, 2020

@tallclair who has done a bunch of the maintenance of debian-base.

Do any of these options mean people need to change their FROM lines? e.g.

FROM k8s.gcr.io/debian-base-amd64:0.3

I assume yes, but I don't know the details of how the aliasing works.

stp-ip (Member) commented Apr 6, 2020

As long as they are using the k8s.gcr.io domain, nothing needs to change. All legacy URLs and images will still work. This is specifically about future images: those would need to change if root-level images are no longer allowed. To be fair, the version would have to change anyway, so changing the prefix as well should be OK.

Overall, my personal opinion is that we should enforce a prefix.

destijl (Member) commented Apr 6, 2020

OK so we're saying all future images would be:

FROM k8s.gcr.io/base-images/debian-base-amd64:0.3

That sounds like a good idea to me. We'll have to deal with some short-term confusion as updates stop appearing in the root (i.e., we should make an announcement).

stp-ip (Member) commented Apr 6, 2020

Yes, that would be one suggestion, depending on what prefix the community agrees on.
Each prefix usually corresponds to one subproject, with that team having control.

Therefore it could also be:

FROM k8s.gcr.io/debian-base/debian-base-amd64:0.3

There were various announcements for the direct container side of things, which might be reusable; @listx has them somewhere, I'm sure.
Any specific "builder" or base-image list to announce to, or people to ping for reach?

listx (Contributor, author) commented Apr 6, 2020

There were various announcements for the direct container side of things, which might be reusable; @listx has them somewhere, I'm sure.

The only community-wide announcements that I've sent out are at https://groups.google.com/d/msg/kubernetes-sig-release/ew-k9PEBckQ/T7dFepHdCAAJ

Any specific "builder" or base-image list to announce to, or people to ping for reach?

@BenTheElder @justaugustus ?

BenTheElder (Member) commented:

Any specific "builder" or base-image list to announce to, or people to ping for reach?

no, we don't think we've ever actually advertised these for re-use, so at best it would be:

  • the kubernetes release notes
  • kubernetes-dev mailing list

I consume the debian-iptables image in kind but I have no real preference on the exact naming of the image.

thockin (Member) commented Apr 6, 2020

I agree that moving to a subdir is valuable and not worth bypassing.

immutableT commented:
cc @immutableT

tallclair (Member) commented:

I don't have any issues with moving the base images to a subdir, but I will mention that we're thinking about getting rid of the base images in kubernetes/kubernetes#88603, so it might not be worth investing too much effort in this. (We will need to continue to maintain the base images for past releases, though, unless we decide to cherry-pick the changes to rebase on debian:{buster,stretch}-slim.)

BenTheElder (Member) commented Apr 7, 2020 via email

listx (Contributor, author) commented Apr 9, 2020

I don't have any issues with moving the base images to a subdir, but I will mention that we're thinking about getting rid of the base images in kubernetes/kubernetes#88603, so it might not be worth investing too much effort in this. (We will need to continue to maintain the base images for past releases, though, unless we decide to cherry-pick the changes to rebase on debian:{buster,stretch}-slim.)

I think for the short-term we still need to have an official place for debian-base (and others too, probably also the debian-iptables image that Ben mentioned).

We are still ironing out details as to when the VDF will happen (hopefully this month, but don't quote me on it), but in the meantime can we all agree that we do want a base-images subproject?

destijl (Member) commented Apr 9, 2020

Yes please, let's move forward. We need to quickly unblock our ability to patch this in the new world.

listx (Contributor, author) commented Apr 10, 2020

@immutableT Please follow the instructions at https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io#creating-staging-repos to create one for base-images.

Closing this issue, as no one has opposed thockin's observation that we should not bypass moving to subdirs.

listx closed this as completed Apr 10, 2020
justaugustus (Member) commented:

@immutableT @listx -- I'm already working on this with the intent that management of the base images would transition to the Release Engineering subproject.

Who might need to be added to this staging project?
Or are you all happy for Release Engineering to take this over in its entirety?


As to the original query:

I'm opening this issue to ask whether we care to support promoting new images into the top-level folder (no prefix).

I think the only images that should be allowed at the root level are "first-class images", namely the ones that are artifacts of the official Kubernetes release process:

  • cloud-controller-manager
  • conformance (will likely be moved to another staging project)
  • hyperkube (to be deprecated in a future release)
  • kube-apiserver
  • kube-controller-manager
  • kube-proxy
  • kube-scheduler

ref: https://github.com/kubernetes/k8s.io/blob/master/k8s.gcr.io/manifests/k8s-staging-kubernetes/promoter-manifest.yaml
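
A minimal sketch of what such a root-level destination looks like in that manifest format (values illustrative, not the linked file's actual contents):

  # Sketch only: destination registry with no subproject prefix.
  registries:
  - name: gcr.io/k8s-staging-kubernetes
    src: true
  - name: us.gcr.io/k8s-artifacts-prod        # root: images land at us.gcr.io/k8s-artifacts-prod/<image>
    service-account: k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com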

listx (Contributor, author) commented Apr 10, 2020

@immutableT @listx -- I'm already working on this with the intent that management of the base images would transition to the Release Engineering subproject.

Who might need to be added to this staging project?
Or are you all happy for Release Engineering to take this over in its entirety?

I'll leave that up to you and @immutableT, @tallclair

As to the original query:

I'm opening this issue to ask whether we care to support promoting new images into the top-level folder (no prefix).

I think the only images that should be allowed at the root level are "first-class images", namely the ones that are artifacts of the official Kubernetes release process:

  • cloud-controller-manager
  • conformance (will likely be moved to another staging project)
  • hyperkube (to be deprecated in a future release)
  • kube-apiserver
  • kube-controller-manager
  • kube-proxy
  • kube-scheduler

ref: https://github.com/kubernetes/k8s.io/blob/master/k8s.gcr.io/manifests/k8s-staging-kubernetes/promoter-manifest.yaml

Sorry, I forgot about this, but it makes sense to me and I agree with how you've set things up here. I assume we already had this discussion in the PR for it (#624), but I can't recall at the moment.

listx reopened this Apr 10, 2020
spiffxp (Member) commented Apr 15, 2020

/area artifacts
/assign @thockin
for input

I can't find it in the meeting notes, but I feel like when we discussed this at a wg-k8s-infra meeting, we wanted nothing new in root. At a minimum, I feel like we should start a deprecation clock ticking on the prefixless images, and have these images also placed in a prefix dir.

k8s-ci-robot added the area/artifacts label Apr 15, 2020
thockin (Member) commented Apr 15, 2020

I see two points, tell me if I am missing anything:

  1. Should we allow new pushes of (for example) kube-apiserver into the root dir of k8s.gcr.io? Historically that is where they all are. Do we think it matters if that is not true in the future? No matter what, we are not removing the old images from their current locations.

  2. If we change to a subdir for "core" releases, should we ALSO copy the old images there?

In other words:

a) Leave old releases in the root (e.g. k8s.gcr.io/kube-apiserver:v1.18.0) and put new releases in the root, too.

b) Leave old releases in the root. Make all new releases go into a subdir, e.g. k8s.gcr.io/kubernetes/kube-apiserver:v1.20.0.

c) Leave old releases in the root AND ALSO copy them to a subdir. Make all new releases go only into the subdir.

I don't have a strong feeling; having a SINGLE staging that writes to root does not seem egregious, but I have a slight lean towards (c). Subdirs for everyone; images in the root are legacy.

ISTR @justaugustus had a slight lean towards (a), but I do not recall the details of why.

I do not see any reason to do (b).

listx (Contributor, author) commented Apr 15, 2020

Subdirs for everyone; images in the root are legacy.

+1. But I also think a) can evolve (what @spiffxp referred to as a "deprecation clock") into c).

Currently we are already doing a). This is because the legacy backfill manifest has to be updated to reflect new images going into google-containers at the moment, leading up to the domain flip #2.

BenTheElder (Member) commented:

Minor x-ref regarding top-level images and mirroring: containerd/containerd#3756

spiffxp (Member) commented Apr 23, 2020

Much as I would like to say "no root publishing starting now," I think it's unreasonable to force users to change image names to upgrade to 1.19. Ensuring all old releases are in a prefix (e.g. k8s.gcr.io/kubernetes/kube-apiserver) and syncing between prefix and root during a deprecation window is what I had in mind.

If at all possible I'd prefer we publish to the prefix and sync back to root, instead of vice-versa, so we have nothing to do when the deprecation window expires.

The two images from #716 (comment) that I would exclude from this are:

  • conformance: it's not required to deploy/upgrade kubernetes, stop publishing to root now
  • hyperkube: 1.17 was the last image published, it was removed from 1.18, there is no more publishing, don't bother syncing

You could argue we should make the deprecation window 1 year; I'd go no shorter than whatever deprecation window we used for hyperkube.

justaugustus (Member) commented:

Much as I would like to say "no root publishing starting now," I think it's unreasonable to force users to change image names to upgrade to 1.19. Ensuring all old releases are in a prefix (e.g. k8s.gcr.io/kubernetes/kube-apiserver) and syncing between prefix and root during a deprecation window is what I had in mind.

Agreed.

If at all possible I'd prefer we publish to the prefix and sync back to root, instead of vice-versa, so we have nothing to do when the deprecation window expires.

The two images from #716 (comment) that I would exclude from this are:

  • conformance: it's not required to deploy/upgrade kubernetes, stop publishing to root now
  • hyperkube: 1.17 was the last image published, it was removed from 1.18, there is no more publishing, don't bother syncing

You could argue we should make the deprecation window 1 year; I'd go no shorter than whatever deprecation window we used for hyperkube.

Yep, that sounds like the correct path.
I've already set the k8s-staging-kubernetes staging project to promote to the kubernetes prefix here to support pause image building.

The next thing we need to figure out is what the sync process will be between the kubernetes prefix and root.
Release Engineering will need visibility into this and, ideally, permissions to rerun that specific job in emergency situations during releases.

To answer the original query, we've enabled base image building in the k8s-staging-build-image project. Tracking for that is here: kubernetes/kubernetes#90698

thockin (Member) commented May 8, 2020 via email

justaugustus (Member) commented:

Is it not sufficient to have 2 entries in the manifest? It doesn't let you include some and exclude others, but it obviates the need for a new process with new testing, new security audit, etc.

@thockin -- I didn't know you could do that. If we can, I'm happy to make that change and close the loop on this.

@listx -- Are there any issues you see arising with Tim's approach?

listx (Contributor, author) commented May 8, 2020

Is it not sufficient to have 2 entries in the manifest? It doesn't let you include some and exclude others, but it obviates the need for a new process with new testing, new security audit, etc.

@thockin -- I didn't know you could do that. If we can, I'm happy to make that change and close the loop on this.

@listx -- Are there any issues you see arising with Tim's approach?

If it just means promoting the same image into multiple places, the promoter already supports this (I just checked with a dry run). In our situation, we would just add another pair of images.yaml/promoter-manifest.yaml files for the subproject we want to replicate into the root (without the subproject prefix). The only difference would be that the new promoter-manifest.yaml would list the destination registries without the subproject prefix. For added simplicity, you could symlink one images.yaml to the other to make sure that they always represent the same set of images.
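
A minimal sketch of that layout, assuming a hypothetical second manifest directory next to the existing one (directory names are illustrative):

  k8s.gcr.io/manifests/
    k8s-staging-kubernetes/            # promotes to the "kubernetes" prefix
      promoter-manifest.yaml           # destinations: {asia,eu,us}.gcr.io/k8s-artifacts-prod/kubernetes
      images.yaml
    k8s-staging-kubernetes-root/       # hypothetical: same images, promoted to the root
      promoter-manifest.yaml           # destinations: {asia,eu,us}.gcr.io/k8s-artifacts-prod
      images.yaml -> ../k8s-staging-kubernetes/images.yaml   # symlink keeps the image sets identical

With the symlink in place, a promotion PR only ever touches one images.yaml, so the two manifests cannot drift apart.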

justaugustus (Member) commented:

Opened a PR here to enable root-level promotion: #856
