prow: Remove old branched openshift/os:latest build #1000
Conversation
We're moving to a container in the rhcos/ namespace; see https://github.com/openshift/release/issues/972 This is part of an effort to consolidate the "source of truth" for RHCOS content. We'd like to get to one container, and in the short term, having it built by the same process that builds other content internally, and pushed to the osorgci registry, makes things far easier.
See also openshift/os#138
LGTM!
How are you going to build the image? It's built here and then mirrored into rhcos....
Where is the code that does the mirroring? After openshift/os#138 we're overwriting the

This is an incremental step; after this, we're going to work on having the container be the canonical source of data. There are a lot of nontrivial bits to this, and getting down to one current source of truth helps.
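The actual mirroring code isn't shown in this thread. Purely as a hedged sketch of the kind of step involved (the image names below are hypothetical, not taken from the real jobs), a mapping file like this could be passed to `oc image mirror -f <file>`, which copies each source image on the left to the destination on the right:

```
# hypothetical source/destination mapping for `oc image mirror`
registry.svc.ci.openshift.org/openshift/os:latest registry.svc.ci.openshift.org/rhcos/os:latest
```

This is only an illustration of the mirroring mechanism, not the code the comment asks about.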
So you're not using the cluster to build images, but your Jenkins jobs?
Yes, but only short term. And really those aren't orthogonal; we can probably now consider moving all of the build infra out of internal (we'd definitely continue to use some Jenkins, but as a pod, obviously). We moved in that direction for our initial work because we had expertise/inertia/tooling/investment there. I've been playing a bit with GCE nested virt and I think that's pretty viable for both testing and VM-image building. Existing CL (as well as Quay) has various packet.net investment which we can continue as well. Incremental steps here are the anyuid SA, or we could go ahead and add a privileged one too.
I think there is a lot of value in having one build that is used, rather than two possible builders pushing to one or two places. I propose we slim it down to one build-and-push where at least a subset of members of the CoreOS teams, as well as Clayton (and others who need it), have access to modify the build and push. Note: this discussion is a prereq for continuing the container content work.
What's the status of non-privileged ostree builds inside of containers? I ask because we are ramping up to get CI jobs in place that pull together content, mostly from ci-operator and other tools we build on top of it, into prototypes of the entire release content tree. As of a few days ago, every Origin 3.11 image was built and pushed on OpenShift into a unified output location. I'd like to start integrating CoreOS jobs into that matrix. Knowing that timeline helps make the decision about whether to maintain these jobs.

The bigger context is that I would like to do the unified flow demo in late July that shows the entire OpenShift 4 flow together, so I'm trying to get CI infra in place now to prop it up once we show it. We can do it with an RHCOS AMI, but I would much prefer to show it as we want customers to interact with it (which includes getting the token from the website, plumbing it through with the pivot, bringing up the cluster, and then delivering an update).
I agree with the medium-term vision there, but my feeling here is that the non-privileged builds don't add direct user/customer value; we've been trying to just get Ignition integrated, and we still have major outstanding items like SELinux+Ignition. A much bigger win on this side IMO is getting RPM-style builds into OpenShift (instead of Koji), which has the same privilege problem today, but it's fixable even more easily than for host-ostree builds.

I guess a deep problem is in trying to support both pivot and non-pivot flows. In particular, there's an obvious fundamental clash between pivoting and Ignition that briefly came up before, but we haven't explored it much yet. Most of our Ignition testing has been non-pivot since it's just easier. I broke that issue out here: openshift/os#148

Anyway, I feel like this is an important discussion to have, but it's much higher level than this PR, which is about cleaning up our build system, right? Like I said, I'd like to use the CI cluster more, but it'd be helpful to make this incremental step first.
Talked with Colin - we clarified that a mid-year / late-year priority (October?) made sense, since the short-term work is much more important. We'll touch base in a month or so.

/lgtm
@cgwalters: Updated the
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Deprecate CentOS 7
* Drop CentOS 7 support completely

  CentOS 7 was marked deprecated in openshift#1000. Everyone really needs to be on RHEL 8 or CentOS 8 at this point. Drop CentOS 7 support completely now that a warning has been in place for a bit. RHEL 8 (or CentOS 8) is required because that's the environment we would expect customers to use to run the installer. Any issues with CentOS 7 are not worth our focus anymore.

* Drop dead code after removing CentOS 7 support

  firewalld is now always in use after dropping CentOS 7.
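The commit above rejects CentOS 7 in favor of RHEL/CentOS 8. As a hedged illustration only (the function name and logic here are hypothetical, not from the actual installer code), the kind of host check implied by that requirement can be sketched by parsing the standard `/etc/os-release` format:

```python
def supported_host(os_release_text):
    """Return True if the host looks like RHEL/CentOS 8 or newer.

    Hypothetical sketch: parses os-release KEY=VALUE pairs and
    requires ID of rhel/centos with a major VERSION_ID >= 8.
    """
    fields = {}
    for line in os_release_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    if fields.get("ID") not in ("rhel", "centos"):
        return False
    major = fields.get("VERSION_ID", "0").split(".")[0]
    return major.isdigit() and int(major) >= 8

# Example: a CentOS 7 host is rejected, a RHEL 8 host is accepted.
centos7 = 'ID="centos"\nVERSION_ID="7"\n'
rhel8 = 'ID="rhel"\nVERSION_ID="8.2"\n'
print(supported_host(centos7), supported_host(rhel8))  # False True
```

This mirrors the stated policy (RHEL 8 or CentOS 8 required) but is not the project's actual implementation.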