
Deprecate and migrate away from gs://kubernetes-release #2396

Closed
spiffxp opened this issue Jul 26, 2021 · 40 comments · Fixed by kubernetes-sigs/kubespray#10066 or kubernetes-csi/csi-driver-smb#614
Labels: area/artifacts, area/prow, area/release-eng, kind/cleanup, priority/important-longterm, sig/k8s-infra, sig/release, sig/testing


spiffxp (Member) commented Jul 26, 2021

Part of umbrella issue to migrate the kubernetes project away from use of GCP project google-containers: #1571

This issue covers the deprecation of and migration away from the following google.com assets:

  • the google.com-owned GCS bucket gs://kubernetes-release living in GCP project google-containers, in favor of the community-owned GCS bucket gs://k8s-release living in GCP project TBD (currently k8s-release)
  • the region-specific GCS buckets gs://kubernetes-release-asia and gs://kubernetes-release-eu, same as above but gs://k8s-release-eu and gs://k8s-release-asia instead
  • TODO: are there container images involved here as well, or did we already address that with k8s.gcr.io?

These are not labeled as steps just yet because not everything needs to be completed in strict sequential order. I would prefer that we get a sense sooner rather than later of the impact of shifting dl.k8s.io traffic: how much budget it consumes, and what percentage of traffic it represents vs. hardcoded-bucket traffic.

Determine new-to-deprecated sync implementation and deprecation window

There are likely a lot of people out there who have gs://kubernetes-release hardcoded. It's unreasonable to stop putting new releases there without some kind of advance warning. So after announcing our intent to deprecate gs://kubernetes-release, we should decide how we're going to sync new releases back to it (and its region-specific buckets), e.g. (see the sketch after this list):

  • gsutil rsync
  • Google Cloud Storage Transfer Service
  • etc.
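
For illustration, a minimal sketch of the gsutil rsync option, assuming it runs on a schedule (e.g. a Prow periodic or cron job) with credentials that can write to the legacy buckets; the loop and flags are assumptions, not a decided design:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mirror the community bucket back into the deprecated google.com buckets
# during the deprecation window. -m parallelizes, -r recurses; add -d only
# if the destinations should also delete objects missing from the source.
for dest in gs://kubernetes-release gs://kubernetes-release-eu gs://kubernetes-release-asia; do
  gsutil -m rsync -r gs://k8s-release "${dest}"
done
```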

As for the deprecation window itself, I think it's fair to treat this with a deprecation clock equivalent to disabling a v1 API.

Determine gs://k8s-release project location and geo-sync implementation

  • Someone (probably me) manually created gs://k8s-release and its other buckets to prevent someone else from grabbing the name
  • The -eu and -asia buckets are not actually region-specific, and should be recreated as such
  • We should decide how we're going to implement region syncing (same options as above; see the Storage Transfer sketch after this list)
  • We should decide at this stage whether we want to block on a binary artifact promotion process, or get by with one of the syncing mechanisms above
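
For the geo-sync option, a hedged sketch of what Storage Transfer Service jobs could look like via gcloud, assuming the regional buckets have been recreated as region-specific; the exact flags are an assumption (check `gcloud transfer jobs create --help`):

```shell
# One recurring transfer job per regional replica; Storage Transfer Service
# then handles scheduling, retries, and incremental copies.
gcloud transfer jobs create gs://k8s-release gs://k8s-release-eu \
  --schedule-repeats-every=1d
gcloud transfer jobs create gs://k8s-release gs://k8s-release-asia \
  --schedule-repeats-every=1d
```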

Use dl.k8s.io where possible and identify remaining hardcoded bucket name references across the project

The only time a Kubernetes release artifact GCS bucket name needs to show up in a URI is when gsutil is involved, or when someone explicitly wants to browse the bucket. For tools like curl or wget that retrieve binaries via HTTP, we have https://dl.k8s.io, which lets us automatically shift traffic from one bucket to the next depending on the requested URI.
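
For example, the substitution looks like this (the release version shown is illustrative):

```shell
# Hardcoded bucket URL -- ties the caller to the google.com bucket:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.22.0/bin/linux/amd64/kubectl

# Preferred -- dl.k8s.io, which the project can repoint at any backing bucket or CDN:
curl -LO https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl
```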

I started doing this for a few projects while working on #2318, e.g.

TODO: a cs.k8s.io query and resulting checklist of repos to investigate

Shift dl.k8s.io traffic to gs://k8s-release-dev

TODO: there is a separate issue for this.

We will pre-seed gs://k8s-release with everything in gs://kubernetes-release, and gradually modify dl.k8s.io to redirect more and more traffic to gs://k8s-release.

The idea is not to flip a switch all at once, in case that sends us way more traffic than our budget is prepared to handle. Instead, let's consider shifting traffic gradually, for certain URI patterns, or a certain percentage of requests, etc. It's unclear whether this will be as straightforward as adding lines to nginx, or whether we'll want GCLB changes as well.
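
If it does turn out to be nginx-level, here is a minimal sketch of a percentage-plus-pattern shift, assuming dl.k8s.io is fronted by an nginx redirector (the 10% split and the /release/ pattern are illustrative assumptions):

```nginx
# (inside the http {} context)
# Send a fixed fraction of requests to the new bucket, keyed on request id.
split_clients "${request_id}" $release_bucket {
    10%   "k8s-release";         # start small, ratchet upward over time
    *     "kubernetes-release";  # default: legacy google.com bucket
}

server {
    listen 80;
    server_name dl.k8s.io;

    # Shift one URI pattern first, e.g. release binaries.
    location /release/ {
        return 302 https://storage.googleapis.com/$release_bucket$request_uri;
    }

    # Everything else stays on the legacy bucket for now.
    location / {
        return 302 https://storage.googleapis.com/kubernetes-release$request_uri;
    }
}
```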

Change remaining project references to gs://k8s-release

/area artifacts
/area prow
/area release-eng
/sig release
/sig testing
/wg k8s-infra
/priority important-soon
/kind cleanup
/milestone v1.23

k8s-ci-robot added the area/artifacts and area/prow labels on Jul 26, 2021
k8s-ci-robot added this to the v1.23 milestone on Jul 26, 2021
k8s-ci-robot added the area/release-eng, sig/release, sig/testing, wg/k8s-infra, priority/important-soon, and kind/cleanup labels on Jul 26, 2021
puerco (Member) commented Aug 12, 2021

/cc @kubernetes/release-engineering

spiffxp (Member, Author) commented Sep 29, 2021

Blocked on #1375

k8s-ci-robot added the sig/k8s-infra label and removed the wg/k8s-infra label on Sep 29, 2021
spiffxp (Member, Author) commented Nov 24, 2021

/milestone v1.24

k8s-ci-robot modified the milestones: v1.23 → v1.24 on Nov 24, 2021
k8s-triage-robot commented Feb 22, 2022

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Feb 22, 2022
ameukam (Member) commented Feb 22, 2022

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on Feb 22, 2022
ameukam (Member) commented May 12, 2022

/milestone clear
/lifecycle frozen
/priority backlog

k8s-ci-robot added the lifecycle/frozen label on May 12, 2022
BenTheElder (Member) commented

This is ~done. After the next round of patch releases, we will drop write permissions to this old bucket.

The new bucket is private and shielded behind the CDN donated to us by Fastly, which we are using to host binary downloads going forward.

Thanks @ameukam for pushing this forward with the release engineering team and branch managers.

sxd added a commit to cloudnative-pg/cloudnative-pg that referenced this issue Oct 13, 2024 (cherry-picked to release branches by cnpg-bot from commit 47d82ab):

Following kubernetes/k8s.io#2396 we should have moved away a long time ago; now that the change has happened, the E2E tests are failing due to a wrong link for downloading the kubectl client.

Signed-off-by: Jonathan Gonzalez V. <jonathan.gonzalez@enterprisedb.com>
rittneje commented

@BenTheElder Why was this change not mentioned anywhere? We only found out when we tried to download kubectl v1.29.9 and it didn't work. This should at least have been included in the changelog.

sftim (Contributor) commented Oct 23, 2024

We did publish https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/ @rittneje

If you used anything other than the official source, you should have switched. We don't formally changelog infrastructure changes in the release notes, but we deliberately timed the pkgs.k8s.io article to come out on the same day as https://kubernetes.io/blog/2023/08/15/kubernetes-v1-28-release/

sftim (Contributor) commented Oct 23, 2024

We also published https://kubernetes.io/blog/2023/06/09/dl-adopt-cdn/ earlier in 2023

BenTheElder (Member) commented

> @BenTheElder Why was this change not mentioned anywhere? We only found out when we tried to download kubectl v1.29.9 and it didn't work. This should at least have been included in the changelog.

Kubernetes has been advertising all releases on dl.k8s.io for a LONG time now. We did not advertise them as being on gs://kubernetes-release.

gs://kubernetes-release used to be an implementation detail of dl.k8s.io, and we announced that we'd be serving through Fastly more than a year ago. Now we're no longer publishing to it; we publish only to the new community-owned CDN backing bucket, which is not public-read, so please use the CDN. kubernetes-release is an old GCS bucket internal to Google's GCP projects, so depending on it prevents us from putting the community at large in control of releases (which Google is very much still funding, but no longer solely controls).

You should not depend on any non-documented details of Kubernetes or the project's infrastructure.

See also:
kubernetes/kubernetes#127796 (comment)
https://kubernetes.io/blog/2023/06/09/dl-adopt-cdn/ (which had a banner on the kubernetes.io website)
https://registry.k8s.io#stability

BenTheElder (Member) commented

We still need to phase out the related GCB bucket and service account / project (which are further implementation details of releasing). Thread in https://kubernetes.slack.com/archives/CJH2GBF7Y/p1729677695289489

ameukam (Member) commented Nov 21, 2024

@BenTheElder We can drop the IAM bindings for this bucket and leave it read-only.
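
A sketch of what dropping write access could look like with gsutil; the service account name below is hypothetical, and existing public read access is left untouched so old artifacts stay downloadable:

```shell
# Remove the release tooling's write binding (member name is hypothetical):
gsutil iam ch -d \
  serviceAccount:k8s-release-pusher@google-containers.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://kubernetes-release

# Verify the remaining (read-only) bindings:
gsutil iam get gs://kubernetes-release
```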

julianwiedmann added a commit to cilium/cilium that referenced this issue Dec 2, 2024
The latter has been deprecated, and doesn't provide binaries for the
latest releases.

Also see kubernetes/k8s.io#2396.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
BenTheElder (Member) commented Dec 18, 2024

IAM bindings are being dropped; this is done.

github-project-automation bot moved this from In Progress to Done in SIG K8S Infra on Dec 18, 2024
BenTheElder (Member) commented

see also #1571 (comment)
