Migrate kettle to k8s-infra #787
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Migrating kettle most likely looks something like:
FYI @MushuEE |
When you say

is that to a new project? What is the:
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Ref: kubernetes#787 Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
This still needs to happen before the prow default cluster shutdown in August, and sooner is better.
So we still have one cluster. It is running "kettle" and "kettle-staging" deployments with one pod each. Each of those has a PD-SSD, 3001 GB and 201 GB respectively. There are some BigQuery datasets in this project; build/all is 1.67 TB.
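For reference, the disks and datasets described above can be inventoried with the standard gcloud/bq CLIs. This is a sketch: the project ID `k8s-gubernator` comes from later in the thread, and the `build` dataset name is taken from the size figures mentioned here.

```shell
# Sketch: inspecting what remains in the old project before turning it down.
# Project ID is from this thread; dataset names are as mentioned above.
gcloud compute disks list --project=k8s-gubernator   # the kettle PD-SSDs
bq ls --project_id=k8s-gubernator                    # list BigQuery datasets
bq show --format=prettyjson k8s-gubernator:build     # e.g. the build dataset
```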
Given kettle initially ingests this data from the prow GCS logs, I think we should probably look at cold-starting a new instance running in AAA, just overriding the cluster/project and deploying with the existing tooling. There's a lot to be desired around auto-deployment etc., however.
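"Overriding the cluster/project and deploying with the existing tooling" could look roughly like the following. This is a hypothetical sketch: the cluster, project, and zone names are placeholders, and the manifest path is illustrative rather than the actual test-infra layout.

```shell
# Hypothetical sketch: point kubectl at the new AAA cluster, then apply the
# same kettle manifests. All angle-bracketed names are placeholders.
gcloud container clusters get-credentials <aaa-cluster> \
    --project=<k8s-infra-project> --zone=<zone>
kubectl apply -f kettle/   # existing kettle manifests (illustrative path)
```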
I think @dims has this working. One remaining item: when we're confident this is done, let Googlers know and we'll see about turning down the old instance / GCP project ... (FYI @michelle192837 @cjwagner)
@BenTheElder I want to watch it for a week before we can call it done!
Exciting stuff! :D Thanks y'all!
[I scaled the old cluster down to zero this week, we'll check back next week]
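Scaling the old instance down to zero (rather than deleting it outright) is easily reversible if problems surface during the watch week. A minimal sketch, assuming the deployment names given earlier in the thread:

```shell
# Illustrative: set both old kettle deployments to zero replicas.
# Reversible with --replicas=1 if the new instance misbehaves.
kubectl scale deployment/kettle --replicas=0
kubectl scale deployment/kettle-staging --replicas=0
```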
thanks @BenTheElder |
https://storage.googleapis.com/k8s-triage/index.html is being updated, and the flakes JSON looks good as well.
We can turn down the old cluster early next week @BenTheElder
SGTM. At some point I'd like to turn down the BigQuery datasets and anything else lingering in that project as well.
/assign |
@BenTheElder also update #1308 and close it? 🥺
remaining follow up will be tracked in #1308 |
Part of migrating away from gcp-project k8s-gubernator: #1308
My suggestions for target:
/wg k8s-infra
/area cluster-infra
/sig testing