backfill images from google-containers #623
Conversation
I'm surprised the dry run finished without an error (I downloaded the logs and ignore-case-grepped for "fail" and "error" and found nothing), so I think it's good to merge. |
As we get new images into I assume |
@listx -- Yes, please. It'd be great to have those first-class images in their own config file; but agreed that that can be a follow-up. |
For SIG Release: |
This is part of https://github.com/kubernetes/k8s.io/blob/master/k8s.gcr.io/Vanity-Domain-Flip.md, in preparation for the domain flip. Backfilling all images to the top-level (root) directory ensures that the transition for existing images post-flip will be painless.
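To make the effect concrete, here is an illustrative example (the pause image is just a stand-in; this is the general shape of the flip, not a list taken from the PR itself):

```
gcr.io/google-containers/pause:3.1        # legacy source registry
us.gcr.io/k8s-artifacts-prod/pause:3.1    # backfilled to the root of the new prod registry
k8s.gcr.io/pause:3.1                      # same pull path before and after the flip
```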
I had to update one of the tags in the images.yaml to reflect a manual change I just performed. @justaugustus you can generate the EXACT same images.yaml with:

```
cd
git clone https://github.com/kubernetes-sigs/k8s-container-image-promoter go/src/sigs.k8s.io/k8s-container-image-promoter
cd go/src/sigs.k8s.io/k8s-container-image-promoter
bazel run \
  --workspace_status_command=$PWD/workspace_status.sh \
  --host_force_python=PY2 //:cip -- -snapshot=gcr.io/google-containers \
  -output-format=YAML > images.yaml
```

Can you do the above as a sanity check? |
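For reference, a minimal sketch of what one entry in the generated images.yaml looks like, following the promoter's manifest format of an image name mapped to a digest-to-tags map (the digest below is a placeholder, not a real one):

```yaml
- name: pause
  dmap:
    "sha256:0000000000000000000000000000000000000000000000000000000000000000": ["3.1"]
```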
A |
/hold cancel |
/hold |
/lgtm |
Can we please get a comment in each file, and a README.md in each directory explaining that these images are effectively frozen, new changes to them will be rejected, and any new promotions MUST happen through individual sub-project staging repos?
Specifically, I want there to be NO AMBIGUITY about whether we want PRs against these, and when the next wave of community maintainers step up, they don't lose all the context.
This was promoted around 2020-03-03 05:41 -08:00 from the Google-internal promoter.
This way, users will be forced to read this message even if they aren't looking at the README.md.
Dropped .gitignores, but also added a "DO-NOT-MODIFY" prefix to the folder names to be that much more explicit. |
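For illustration, one possible shape of the per-file header being discussed (a sketch only; the exact wording committed to the repo may differ):

```yaml
# DO-NOT-MODIFY: the images in this file are frozen legacy images backfilled
# from gcr.io/google-containers. New changes to them will be rejected; any new
# promotions MUST go through individual sub-project staging repos.
```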
Will this kick off a giant import?
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: justaugustus, listx, thockin

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. |
Just to weave the tracking together a little... |
Yes. It will be interesting to see how ~50K images actually get copied over. |
Make it so!
|
...and away we go! |
I was about to make a joke about pushing to prod at 5pm, but it merged... here we go...
|
It is still chugging along after the 1h mark now. Interestingly, the Prow job logs have been truncated (it now shows only ~387 lines of logs, whereas I'd expect MBs of logs). @cjwagner I guess super-long Prow jobs having their logs truncated is a known (perhaps deliberate, by design) issue? |
The job has been killed by Prow because it exceeded the 2h limit. I'm pasting all my findings below, though, because I have some thoughts I should collect: https://docs.google.com/document/d/1R7RhuKwOUHsi4i3RMpaSPYLJCv16EazSww5kUQXRPlk/edit?usp=sharing TL;DR is that I really have to fix kubernetes-sigs/promo-tools#185. |
Cool failure mode.
Did we get a clean signal that it failed?
Will a periodic run pick up the debris and carry on cleanly?
Or does a human have to intervene?
|
Note that this is just the default timeout; you can configure the job to have a different one, like this: https://github.com/kubernetes/test-infra/blob/28c7418a8c8145a2b0203591b4a69262804e283a/config/jobs/kubernetes/kops/kops-presubmits.yaml#L13-L14 |
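For context, a minimal sketch of what such an override looks like in a Prow job definition, using the decoration_config.timeout knob that the linked kops config sets (the job name and value here are hypothetical):

```yaml
presubmits:
- name: pull-k8sio-cip  # hypothetical job name, for illustration only
  decorate: true
  decoration_config:
    timeout: 4h  # raise the default 2h limit
```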
Yes. The promoter is designed to only perform required promotions. That being said, I should open a PR to make the periodic job run more frequently than once a day. |
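A minimal sketch of bumping that frequency, assuming the job is defined as a Prow periodic (the interval value is illustrative, not the value actually chosen):

```yaml
periodics:
- name: ci-k8sio-cip
  interval: 2h  # previously ran roughly once a day
  decorate: true
```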
OK, so the promoter is hitting an index-out-of-range error in the scheduled (daily) ci-k8sio-cip job run: https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-k8sio-cip/1235320791898263552. I can reproduce it locally. Will dig to see which commit in this repo caused it (I'm assuming that this PR itself did not cause this issue, but I'll have to see). |
So for some reason the read is failing because of this merge. Reverting this PR to unblock the promoter for possible other image promotions while I work on a fix. Revert PR here: #629 |
This reverts commit ecfc539, reversing changes made to 670fb0c. This reverts kubernetes#623. For some reason, with this change there is now an index out-of-bounds failure in the ci-k8sio-cip job here: https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-k8sio-cip/1235320791898263552. I am puzzled why this did not manifest itself in the post-k8sio-cip job (postsubmit) when kubernetes#623 was merged. However, the simple fix is to revert while I figure this out.
Revert "Merge pull request #623 from listx/master"
The error is happening because of a parsing failure in the function SplitByKnownRegistries. See more details here: kubernetes-sigs/promo-tools#188 |
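To illustrate the general failure mode (a hedged sketch, not the actual SplitByKnownRegistries code; see kubernetes-sigs/promo-tools#188 for the real details): prefix-splitting logic that assumes every image lives under a sub-project directory can index out of range for images backfilled to the registry root, which is exactly what this PR introduced.

```go
package main

import (
	"fmt"
	"strings"
)

// splitImagePath is a deliberately naive illustration: it assumes every image
// path has the shape "registry/sub-project/image", so it indexes parts[1]
// unconditionally. Images backfilled to the registry root (e.g.
// "us.gcr.io/k8s-artifacts-prod/pause") have no sub-project segment and
// violate that assumption.
func splitImagePath(fullPath, knownRegistry string) (subProject, image string) {
	rest := strings.TrimPrefix(fullPath, knownRegistry+"/")
	parts := strings.Split(rest, "/")
	return parts[0], parts[1] // index out of range when len(parts) == 1
}

func main() {
	// Fine: image under a sub-project path.
	fmt.Println(splitImagePath("us.gcr.io/k8s-artifacts-prod/cluster-api/ci", "us.gcr.io/k8s-artifacts-prod"))
	// Panics: image backfilled to the registry root.
	fmt.Println(splitImagePath("us.gcr.io/k8s-artifacts-prod/pause", "us.gcr.io/k8s-artifacts-prod"))
}
```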
This reverts commit 0ec285c. This is the second time we are trying to backfill with legacy images. The first attempt failed, as detailed here: kubernetes/kubernetes#88553
This is part of
https://github.com/kubernetes/k8s.io/blob/master/k8s.gcr.io/Vanity-Domain-Flip.md,
in preparation for the domain flip. Backfilling all images to the
top-level (root) directory ensures that the transition for existing
images post-flip will be painless.
Holding for now because I'm curious to see what the presubmit checks have to say.
/hold
/cc @thockin @justaugustus @tpepper