baremetal: Include CoreOS ISO in the release payload #909
Personally, I think it'd be clearer if we created a new repo/image rather than re-purposing either of these?
Do we have a plan for resolving exactly how this process will work, and do we anticipate needing different solutions for upstream (openshift CI, not OKD) images vs downstream (and nightly) builds?
We'll need to pull in some folks from the appropriate team (ART?) to tell us the right way to accomplish it.
The good news is that there is a mechanism for ART to download and insert binaries like this, if that is in fact what we determine to do. We would just need a script written to do it at the time we rebase the source.
The bad news is that it deepens the dependency tree of things that need to align in a release, and it falls outside of how we currently determine what needs building.
Dependency tree: currently we have hyperkube building as an RPM, getting included in an RHCOS build, and that build then getting included in a nightly. This change would mean that, for consistency, baremetal would also need to rebuild (after RHCOS) and be included in a nightly. There are lots of ways for that to go wrong, and we would end up with mismatched content. I would say we could validate that before release, but I'm actually not sure how we would; I'd have to think about it. In any case, there would likely be sporadic extended periods where the two were out of sync and we couldn't form a valid release. And once we get embargoed RHCOS sorted out, I have no idea how we'll get baremetal to match (admittedly a rare problem; perhaps a manual workaround will suffice).
Detection that the image needs rebuilding: currently we check whether the source has changed, whether the config has changed, or whether RPM contents have changed, all of which we can easily look up from the image or Brew. There's nothing like that for other content. I guess to do this, the process that does the upload could add a label with the RHCOS build ID that's included, and we could then compare that label to see if there's a newer build available. I don't look forward to adding that bit of complexity into the scan, but it seems solvable.
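To make the label comparison concrete, here is a minimal sketch of the check described above. The label name `io.openshift.rhcos-build-id` and the build IDs are hypothetical, not an existing ART convention; the real scan would read the label from the image metadata and the latest build ID from the RHCOS release browser or Brew.

```python
def needs_rebuild(image_labels: dict, latest_rhcos_build: str) -> bool:
    """Return True when the baremetal image embeds an RHCOS build ID that
    differs from the newest available RHCOS build (hypothetical label name)."""
    embedded = image_labels.get("io.openshift.rhcos-build-id")
    return embedded != latest_rhcos_build

# Example with made-up build IDs: an older embedded build triggers a rebuild.
print(needs_rebuild({"io.openshift.rhcos-build-id": "46.82.202009222340-0"},
                    "46.82.202009242140-0"))
```

This keeps the scan logic to a single string comparison; the complexity the comment worries about lives in wiring the label write into the upload process and the label read into the scan.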
Some other things to consider:
So far baremetal only supports x86_64. But it seems inevitable that one day we'll have to support multiple architectures. Could we have one Dockerfile per arch?
I think we should always keep this ISO the same as the one pinned in the installer.
So one idea: have this image derive from the installer. I think we'd get edge triggering for free then.
Config has baremetal building for all arches, so I assumed those were used, but it's simpler if not. We should probably limit those builds to x86...
Our build system insists that a single multi-arch build reuse the same Dockerfile for all arches (this is useful for ensuring they all have consistent content). There are ways around it, though, involving either complicating the Dockerfile a bit or (more likely) splitting the builds into one build per arch; we already have some like this for ARM64. So I guess this is not a blocker, just another complication.
Keeping the ISO the same as the installer's seems like a good idea; that would mean it's not changing frequently.
I'm not quite sure how we'd implement that, though: I don't know how we'd determine what to download prior to the build, and we can't download during the build. Except maybe from Brew... heh.
1: For OKD, the ostree in machine-os-content is different from the one used in the bootimage (which is FCOS). For the okd-machine-os ostree, the hyperkube and client RPMs are extracted from the artifacts image and layered onto FCOS: https://github.com/openshift/okd-machine-os/blob/master/Dockerfile.cosa#L9. Since the trees differ, a new node in OKD always has to pivot from FCOS to okd-machine-os before it even has a kubelet to run anything with.
2: This will be needed soon, at least for ARM64.
I added a paragraph on multi-arch to the doc.
@sosiouxme what additional information do you think we need to document here to get this to an approvable state?
I think my concerns are answered.
ART can define a hook such that each time we build, we check whether installer/data/data/rhcos-stream.json has changed. If it has, we download the ISOs from the locations given there and make them available in the build context under an arch-specific name, probably something like rhcos.$arch.iso, such that the same Dockerfile can just use the current arch to pick the right one in the arch-specific builds.
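The hook above could be sketched roughly as follows. This assumes the installer's rhcos-stream.json follows the public CoreOS stream-metadata layout (`architectures` → `artifacts` → `metal` → `formats` → `iso` → `disk` → `location`); the exact file shape and the `rhcos.$arch.iso` naming are taken from the discussion, not from an existing ART script.

```python
import json

def iso_downloads(stream_json: str) -> dict:
    """Map each arch-specific ISO filename (rhcos.$arch.iso) to the URL it
    should be downloaded from, per the stream-metadata-style file."""
    stream = json.loads(stream_json)
    result = {}
    for arch, data in stream["architectures"].items():
        location = data["artifacts"]["metal"]["formats"]["iso"]["disk"]["location"]
        result[f"rhcos.{arch}.iso"] = location
    return result

# Minimal example with one architecture and a placeholder URL.
sample = json.dumps({
    "architectures": {
        "x86_64": {"artifacts": {"metal": {"formats": {"iso": {"disk": {
            "location": "https://example.com/rhcos-x86_64.iso"}}}}}}
    }
})
print(iso_downloads(sample))
```

The actual hook would diff the file against the previously built revision, fetch each URL into the build context, and let the per-arch Dockerfile copy only its own `rhcos.$arch.iso`.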