[Umbrella Issue] Setup a GCR Repository for projects to use #158

Closed
dims opened this issue Dec 5, 2018 · 39 comments

Labels
  • area/artifacts: Issues or PRs related to the hosting of release artifacts for subprojects
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@dims (Member) commented Dec 5, 2018

Split from #153 (see that for some context)

cc @javier-b-perez @mkumatag @listx @thockin

@dims (Member, Author) commented Dec 5, 2018

Related to #157

@javier-b-perez (Contributor)

Do we have an agreement on the structure of the storage?

@dims (Member, Author) commented Dec 6, 2018

@javier-b-perez we need to come up with a proposal

@javier-b-perez (Contributor)

You mean a proposal for the structure, like in the doc?

Option 1)
We have a single staging area

staging-k8s.gcr.io/<TYPE>/[<PROJECT>/]<IMAGE_NAME>:<TAG>

TYPE

  • kubernetes - core kubernetes images like pause, apiserver etc.
  • kubernetes-sigs - Kubernetes SIG-related work
  • tests - used for tests
  • examples - example images used in the documentation

PROJECT: (optional) can be e.g. dashboard.
IMAGE_NAME: name of the container image

TAG: structure <VERSION>-<OS>-<ARCH>

  • <VERSION>: image version
  • <OS>: the same as GOOS, e.g. linux or windows
  • <ARCH>: the same as GOARCH, e.g. amd64 or ppc64le

Example
k8s.gcr.io/kubernetes/pause

  • k8s.gcr.io/kubernetes/pause:3.1-linux-amd64
  • k8s.gcr.io/kubernetes/pause:3.1-linux-arm
  • k8s.gcr.io/kubernetes/pause:3.1-linux-ppc64le
  • k8s.gcr.io/kubernetes/pause:3.1-linux-s390x

Option 2)
Tim mentioned an option to have per-project staging areas

gcr.io/<PROJECT>/[<TYPE>/]<IMAGE_NAME>:<TAG>

Example
k8s.gcr.io/kubernetes/pause

  • gcr.io/k8s-kubernetes-staging/pause:3.1-linux-amd64
    ...

Then we promote images to the official registry k8s.gcr.io/<TYPE>/[<PROJECT>/]<IMAGE_NAME>:<TAG>
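
To make the two options concrete, a build under each scheme would tag and push roughly as follows. This is a sketch only; the names come from the examples above, and neither option had been adopted at this point:

    # Option 1: single staging area, type-based path.
    docker push staging-k8s.gcr.io/kubernetes/pause:3.1-linux-amd64

    # Option 2: per-project staging area; a later promotion step copies the
    # image to the official registry, e.g. k8s.gcr.io/kubernetes/pause:3.1-linux-amd64.
    docker push gcr.io/k8s-kubernetes-staging/pause:3.1-linux-amd64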

@marquiz (Contributor) commented Dec 12, 2018

Per-project staging (option 2) sounds safer to me. However, the URI scheme in option 1 looks better without the -staging appended to the project name.

I'd be very happy to see this issue solved as node-feature-discovery would need a new home for its images (kubernetes-sigs/node-feature-discovery#177) 😉

@BenTheElder (Member)

cc

@pohly (Contributor) commented Feb 4, 2019

kubernetes-csi also needs an area to publish its sidecar container images.

@pohly (Contributor) commented Feb 4, 2019

gcr.io/kubernetes-csi/<image>:<tag> sounds good to me.

@marquiz (Contributor) commented Feb 5, 2019

Any progress on this issue?

@dims (Member, Author) commented Feb 9, 2019

On Friday, Feb 8th, Tim and I set up the following for GCR.

As a first step, we are setting up staging repositories.

  • gcr.io/k8s-staging-csi
  • gcr.io/k8s-staging-cluster-api
  • gcr.io/k8s-staging-coredns

Folks in the following Google groups will be able to push to the staging repositories (see the push example below):

The following Google group has admin privileges over all the GCR repos:

Notes:

  • Only release artifacts should be uploaded here
  • Artifacts will be cleaned up periodically (older than 2 weeks?) to prevent the use of these repos beyond staging
  • We will have an additional workflow for promotion from here to the main k8s.gcr.io repo
  • We will be adding SIG leads to the Google groups sometime next week to kick the tires

cc @thockin @hh

PS: next on the list is cloud-provider(s)
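
Once added to one of those groups, pushing to a staging repo should look something like this (a sketch; the image name and tag are hypothetical):

    # Authenticate Docker against GCR using your Google credentials.
    gcloud auth configure-docker

    # Tag a locally built image for the staging repo and push it.
    docker tag csi-provisioner:latest gcr.io/k8s-staging-csi/csi-provisioner:v1.0.1
    docker push gcr.io/k8s-staging-csi/csi-provisioner:v1.0.1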

@dims (Member, Author) commented Feb 11, 2019

Related to #186

@thockin (Member) commented Feb 11, 2019

Note that my PR does not add an age-out for these repos. If we're interested in that, we can add it.

@dims (Member, Author) commented Feb 12, 2019

@thockin I think we should age out all the artifacts in the staging repos to prevent them from being used beyond testing. I'd say 2 weeks.
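
A periodic age-out along those lines could be sketched with stock gcloud commands. This is illustrative only (the repo name is hypothetical), not the cleanup job that was actually deployed:

    # Delete staging images whose upload timestamp is older than two weeks.
    CUTOFF="$(date -d '14 days ago' +%Y-%m-%d)"
    gcloud container images list-tags gcr.io/k8s-staging-csi/csi-provisioner \
      --filter="timestamp.datetime < ${CUTOFF}" --format='get(digest)' |
    while read -r digest; do
      gcloud container images delete --quiet --force-delete-tags \
        "gcr.io/k8s-staging-csi/csi-provisioner@${digest}"
    done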

@jeefy (Member) commented Feb 12, 2019

Per @dims (Thanks!)

Piggy-backing on this thread to request a k8s.gcr.io repo for https://github.com/kubernetes-sigs/dashboard-metrics-scraper when things get sorted out. 😄

@pohly (Contributor) commented Feb 20, 2019

gcr.io/k8s-staging-csi

This is meant for the images published by Kubernetes-CSI, right? @BenTheElder pointed out that images built locally can be side-loaded into kind, so for CI testing purposes alone it might not be necessary to push images to this staging area.

We could publish release images in this staging area, but then the question becomes where the final destination will be (k8s.gcr.io/csi?) and how quickly they will get copied there.

Folks in the following google groups will be able to push to the staging repository:

How will that work from inside a Prow job?
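
For the CI-only case, the side-loading @BenTheElder pointed out works roughly like this with kind, with no registry involved (image name hypothetical):

    # Build locally, then load the image directly into the kind cluster's nodes.
    docker build -t csi-provisioner:ci .
    kind load docker-image csi-provisioner:ci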

@neolit123 (Member)

TAG: structure <VERSION>-<OS>-<ARCH>

  • <VERSION>: image version
  • <OS>: the same as GOOS, e.g. linux or windows
  • <ARCH>: the same as GOARCH, e.g. amd64 or ppc64le

Just wanted to raise one point about GOOS tags and Windows.

Originally we thought this could apply cleanly, so that we could have one manifest list (say, for the pause container) that runs on all OSes, but on Windows there is a bit of a problem with sizes.

Workloads on Windows can be based on either Windows Server XX or Windows Nanoserver XX.
According to my research, the Nanoserver image is at least 100MB, while the regular Server one is 2GB (both without hacks).

Depending on what workloads the users run on the worker nodes, they might decide to go for Nanoserver (to have a smaller download), which we don't handle with the above tags.

I started to think that we might have to do paths instead (if possible):
gcr.io/.../windows-nano/SOMEIMAGE:VERSION-ARCH
gcr.io/.../windows/SOMEIMAGE:VERSION-ARCH
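
For context, the single cross-OS manifest list mentioned above would be assembled roughly like this (tags illustrative; docker manifest was still an experimental CLI feature at the time). The size concern is that a plain windows component in the tag cannot distinguish the Server and Nanoserver bases:

    # Combine per-platform images into one manifest list and push it.
    docker manifest create k8s.gcr.io/kubernetes/pause:3.1 \
      k8s.gcr.io/kubernetes/pause:3.1-linux-amd64 \
      k8s.gcr.io/kubernetes/pause:3.1-windows-amd64
    docker manifest push k8s.gcr.io/kubernetes/pause:3.1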

@BenTheElder (Member)

How will that work from inside a Prow job?

We'd need a GCP service account / service-account key with access to this.
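
Concretely, a Prow job would authenticate with a mounted service-account key before pushing; something like the following sketch (key path and image name hypothetical):

    # Activate the mounted service account, wire it into Docker, then push.
    gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
    gcloud auth configure-docker
    docker push gcr.io/k8s-staging-csi/csi-provisioner:canary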

@chuckha (Contributor) commented Mar 4, 2019

@dims I am not a sig lead but am working on the release pipelines for cluster-api and cluster-api-provider-aws. I'd like to have my work eventually end up in other providers, but am starting there. Would it be possible to be added to the cluster-api staging group? I'll request to join. No worries if it's a sig-leads-only policy.

@dims (Member, Author) commented Mar 4, 2019

@chuckha we are still waiting for a couple of pieces to go into prow, so yes please hang on :)

@justaugustus (Member) commented Mar 5, 2019

@dims -- Hiya! We (capz) are also interested in getting access to push images to staging and production GCR buckets. :)

ref: kubernetes-sigs/cluster-api-provider-azure#118

@spiffxp (Member) commented May 15, 2019

There is a script that creates repos, not yet a nice yaml file, but editing a script means it create repos

@spiffxp (Member) commented May 15, 2019

Actually, we do want to give staging repos to subprojects: they can write into the staging repo, and the promoter pushes into the production repo.

@spiffxp (Member) commented Jun 26, 2019

We are basically willing to say yes to people who ask for GCR repos. At the moment, we would point people to the PRs @nikhita did for publishing-bot. Consider refining this into a README that people can follow, and making sure we have better coverage around reconciling once the volume of PRs goes down.

@nikhita (Member) commented Jun 26, 2019

At the moment, we would point people to the PR's @nikhita did for publishing-bot.

For reference: #282

Consider refining into a README that people can follow

created #286

@thockin (Member) commented Jul 8, 2019

Final tasks here are in progress. Once e2e and DR are done, we can bulk-import and flip the vanity URL.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Oct 6, 2019

@nikhita (Member) commented Oct 6, 2019

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Oct 6, 2019

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 4, 2020

@listx (Contributor) commented Jan 4, 2020

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Jan 4, 2020

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 3, 2020

@spiffxp (Member) commented Apr 3, 2020

/remove-lifecycle stale
/lifecycle frozen

Once the vanity domain flip (VDF) is complete and we have a little more documentation (and maybe automation) around how people can use / provision this, I think we're done here.

@k8s-ci-robot added the lifecycle/frozen label and removed the lifecycle/stale label on Apr 3, 2020

@spiffxp (Member) commented Apr 15, 2020

/close
I'm going to call this done in favor of using #157 to track completion of the vanity domain flip and whatever else we need to feel confident about an image promotion process that uses this repo. The repo itself has been set up.

@k8s-ci-robot (Contributor)

@spiffxp: Closing this issue.

In response to this:

/close
I'm going to call this done in favor of using #157 to track completion of the vanity domain flip and whatever else we need to feel confident about an image promotion process that uses this repo. The repo itself has been set up.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
