Deprecate and migrate away from gs://kubernetes-release #2396
/cc @kubernetes/release-engineering

Blocked on #1375

/milestone v1.24
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale

/milestone clear
This is ~done. After the next round of patch releases, we will drop write permissions to this old bucket. The new bucket is private and shielded behind the CDN donated to us by Fastly, which we are using to host binary downloads going forward. Thanks @ameukam for pushing this forward with the release engineering team and branch managers.
Following kubernetes/k8s.io#2396, we should have moved away a long time ago; now that this change has happened, the E2E tests are failing due to a wrong link to download the kubectl client. Signed-off-by: Jonathan Gonzalez V. <jonathan.gonzalez@enterprisedb.com>
Following kubernetes/k8s.io#2396, we should have moved away a long time ago; now that this change has happened, the E2E tests are failing due to a wrong link to download the kubectl client. Signed-off-by: Jonathan Gonzalez V. <jonathan.gonzalez@enterprisedb.com> (cherry picked from commit 47d82ab)
@BenTheElder Why was this change not mentioned anywhere? We only found out when we tried to download kubectl v1.29.9 and it didn't work. This should at least have been included in the changelog.
We did publish https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/

@rittneje If you used anything other than the official source, you should have switched. We don't formally changelog infrastructure changes in the release notes, but we deliberately timed the pkgs.k8s.io article to come out on the same day as https://kubernetes.io/blog/2023/08/15/kubernetes-v1-28-release/
We also published https://kubernetes.io/blog/2023/06/09/dl-adopt-cdn/ earlier in 2023
Kubernetes has been advertising all releases on dl.k8s.io for a LONG time now. We did not advertise them as being on `gs://kubernetes-release`.

You should not depend on any non-documented details of Kubernetes or the project's infrastructure. See also:
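For reference, the documented download path goes through dl.k8s.io rather than any bucket URL; a minimal sketch, with the version and platform picked purely for illustration:

```bash
# Old, undocumented pattern that broke when the bucket moved (do not use):
#   https://storage.googleapis.com/kubernetes-release/release/v1.29.9/bin/linux/amd64/kubectl

# Documented endpoint, served via CDN and stable across bucket migrations:
curl -LO "https://dl.k8s.io/release/v1.29.9/bin/linux/amd64/kubectl"

# Or resolve the latest stable version first:
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```

Because dl.k8s.io is the documented interface, it kept working across the bucket migration while hardcoded storage.googleapis.com URLs did not.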
We still need to phase out the related GCB bucket and service account / project (which are further implementation details of releasing). Thread in https://kubernetes.slack.com/archives/CJH2GBF7Y/p1729677695289489
@BenTheElder We can drop the IAM bindings for this bucket and leave it in read-only.
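A rough sketch of what dropping those bindings could look like with gsutil; the member and role below are placeholders, not the bucket's actual IAM policy:

```bash
# Inspect the current IAM policy on the legacy bucket
gsutil iam get gs://kubernetes-release

# Remove a write-capable binding (member and role here are placeholders)
gsutil iam ch -d \
  serviceAccount:some-release-sa@some-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://kubernetes-release

# Keep public read access so existing artifacts remain downloadable
gsutil iam ch allUsers:roles/storage.objectViewer gs://kubernetes-release
```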
The latter has been deprecated, and doesn't provide binaries for the latest releases. Also see kubernetes/k8s.io#2396. Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
IAM bindings are being dropped; this is done.

see also #1571 (comment)
Part of umbrella issue to migrate the kubernetes project away from use of GCP project google-containers: #1571
This issue covers the deprecation of and migration away from the following google.com assets:

- `gs://kubernetes-release`, living in GCP project `google-containers`, in favor of the community-owned GCS bucket `gs://k8s-release`, living in GCP project TBD (currently `k8s-release`)
- `gs://kubernetes-release-asia` and `gs://kubernetes-release-eu`, same as above but `gs://k8s-release-eu` and `gs://k8s-release-asia` instead

These are not labeled as steps just yet because not everything needs to be completed to full fidelity in strict sequential order. I would prefer that we get a sense sooner rather than later of what the impact of shifting dl.k8s.io traffic will be: how much of our budget it consumes, and what percentage of overall traffic it represents vs. traffic that hits hardcoded bucket URLs.
Determine new-to-deprecated sync implementation and deprecation window
There are likely a lot of people out there that have `gs://kubernetes-release` hardcoded. It's unreasonable to stop putting new releases there without some kind of advance warning. So after announcing our intent to deprecate `gs://kubernetes-release`, we should decide how we're going to sync new releases back there (and its region-specific buckets), e.g. via `gsutil rsync` (see the sketch below).
As for the deprecation window itself, I think it's fair to treat this with a deprecation clock equivalent to disabling a v1 API.
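A sketch of what that back-sync might look like if `gsutil rsync` is the mechanism chosen; the bucket paths and loop are assumptions for illustration, not the actual release tooling:

```bash
# Mirror new releases from the community bucket back into the deprecated
# google.com bucket (and its regional copies) during the deprecation window.
for dst in gs://kubernetes-release gs://kubernetes-release-eu gs://kubernetes-release-asia; do
  # -m: parallel transfers, -r: recurse; -d is intentionally omitted so
  # nothing already present in the old buckets gets deleted.
  gsutil -m rsync -r gs://k8s-release/release "${dst}/release"
done
```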
Determine gs://k8s-release project location and geo-sync implementation

- `gs://k8s-release` and its other buckets, to prevent someone else from grabbing the name
- The `-eu` and `-asia` buckets are not actually region-specific, and should be recreated as such
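If the regional buckets are recreated as truly region-specific, the gsutil commands might look roughly like this; the owning project and locations are assumptions, nothing here has been decided above:

```bash
# Recreate the placeholder regional buckets with explicit locations, in
# whatever GCP project ends up owning them (placeholder project name below).
gsutil rb gs://k8s-release-eu gs://k8s-release-asia   # drop the placeholders
gsutil mb -p k8s-release -l EU   gs://k8s-release-eu
gsutil mb -p k8s-release -l ASIA gs://k8s-release-asia
```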
Use dl.k8s.io where possible and identify remaining hardcoded bucket name references across the project

The only time a kubernetes release artifact GCS bucket name needs to show up in a URI is if gsutil is involved, or someone is explicitly interested in browsing the bucket. For tools like `curl` or `wget` that retrieve binaries via HTTP, we have https://dl.k8s.io, which will allow us to automatically shift traffic from one bucket to the next depending on the requested URIs.

I started doing this for a few projects while working on #2318, e.g.
TODO: a cs.k8s.io query and resulting checklist of repos to investigate
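Until that checklist exists, a rough local stand-in for the cs.k8s.io query is to grep a checkout for the bucket names; the patterns and file filters below are illustrative:

```bash
# Find hardcoded references to the deprecated buckets in a repo checkout,
# whether they appear as gs:// URIs or as storage.googleapis.com URLs.
grep -rnE 'gs://kubernetes-release(-eu|-asia)?|storage\.googleapis\.com/kubernetes-release' \
  --include='*.sh' --include='*.go' --include='*.yaml' --include='*.md' .
```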
Shift dl.k8s.io traffic to gs://k8s-release
TODO: there is a separate issue for this.
We will pre-seed gs://k8s-release with everything in gs://kubernetes-release, and gradually modify dl.k8s.io to redirect more and more traffic to gs://k8s-release.
The idea is not to flip a switch, just in case that sends us way more traffic than our budget is prepared to handle. Instead, let's consider shifting traffic gradually for certain URI patterns, or a certain percentage of requests, etc. It's unclear whether this will be as straightforward as adding lines to nginx, or whether we'll want GCLB changes as well.
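One low-tech way to watch the shift happen, assuming dl.k8s.io continues to answer with an HTTP redirect to the backing bucket (as the nginx redirector historically did):

```bash
# Inspect where dl.k8s.io currently sends a given artifact request; the
# Location header reveals which bucket is serving that URI pattern.
curl -sI "https://dl.k8s.io/release/stable.txt" | grep -i '^location:'
```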
Change remaining project references to gs://k8s-release
/area artifacts
/area prow
/area release-eng
/sig release
/sig testing
/wg k8s-infra
/priority important-soon
/kind cleanup
/milestone v1.23