Add MCO Flattened Ignition proposal #467
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: hardys. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Let's close openshift/machine-config-operator#1690 in favor of this please.
Overall...I'm OK with this. The text notes that long term we want to move baremetal IPI to the Live ISO which solves a whole lot of problems in this area.
Now closed.
Thanks for the review! Yes, I added clarification that long term we want to address that overlap. Sounds like we're OK to proceed with some implementation work on this pending further reviews; I'll take a look and get something up as a WIP PR. There are two remaining open questions that will influence the implementation:
- What kind of resource should we use to store the flattened config? I'm thinking a MachineConfig, like the rendered config.
- How do we access the flattened config in the machine-api actuator, given that the MCS isn't accessible from services running on the cluster? I'm thinking we ensure it's possible to derive which config to use (either by role or some additional metadata) for any given machine, then access the resource directly via the kube API (will the machine controller API creds allow this, given that the namespaces are different?).
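For concreteness, a minimal sketch of the first option, assuming the flattened config is stored as a MachineConfig a controller can select by role via the kube API; the object name and annotation scheme here are hypothetical, not any existing MCO API:

```yaml
# Hypothetical flattened config stored as a MachineConfig
# (name and labels are illustrative only).
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: flattened-master              # hypothetical, analogous to rendered-master-<hash>
  labels:
    machineconfiguration.openshift.io/role: master   # lets a controller look it up by role
spec:
  config:
    ignition:
      version: 3.1.0
    # ...fully resolved ("flattened") content inlined here, so no MCS
    # round-trip is needed at first boot...
```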
/retitle Add MCO Flattened Ignition proposal
Updated to account for feedback so far (thanks!) - also added more design details based on initial investigations, and some details of CI coverage we can use to test this.
### Version Skew Strategy
## Alternatives
Another alternative could be to just merge the installer-generated ignition config into the rendered config (I guess we'd need to create a MachineConfig object derived from the installer-generated user-data secret, then rely on the existing MergeMachineConfigs logic in the MCO).
@runcom that would also solve the issue with the management of the pointer ignition, since we'd convert any data from the installer-created ignition to a MachineConfig, thus the pointer config can just be statically templated as in your recent implementation?
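To illustrate what "statically templated" means here: the pointer (stub) ignition carried in the machine-api user-data Secret is just a merge directive pointing at the MCS, so with customizations moved into MachineConfigs it has no per-cluster user content. A sketch, with the cluster domain and CA value as placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: master-user-data
  namespace: openshift-machine-api
type: Opaque
stringData:
  # Pointer ignition: merges the full role config served by the MCS at boot.
  userData: |
    {
      "ignition": {
        "version": "3.1.0",
        "config": {
          "merge": [
            {"source": "https://api-int.<cluster-domain>:22623/config/master"}
          ]
        },
        "security": {
          "tls": {
            "certificateAuthorities": [
              {"source": "data:text/plain;charset=utf-8;base64,<root-CA>"}
            ]
          }
        }
      }
    }
```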
To clarify this idea, something like:
- Installer creates master-user-data (as it does today, may contain user ignition customizations)
- MCO parses master-user-data and creates a MachineConfig with the ignition data
- MCO generates master-user-data-managed from template (as implemented in machine-config-operator#1792, but reverted via machine-config-operator#2126 / Bug 1881703 due to breaking the installer interface)
- Any installer provided customizations show up in the existing rendered config (same as if the user had provided them via MachineConfig manifests)
We could potentially flag some warning when this happens, and use it as a migration path to eventually deprecate the direct ignition-configs customization in favor of MachineConfig manifests?
This would also solve the issue for IPI baremetal without any MCO API changes, since we could consume the existing rendered config directly (we did this previously ref openshift/installer#3276 but that was also reverted for the same reason as above)
@cgwalters @crawford any thoughts?
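As a concrete illustration of the third and fourth bullets above, a customization a user injected into master.ign (say, an extra file) would end up expressed as an ordinary MachineConfig, so the existing merge logic folds it into the rendered config. The manifest name and file contents below are hypothetical:

```yaml
# Hypothetical MachineConfig derived from an installer-ignition customization.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-master-installer-ignition   # illustrative name
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/example-custom.conf   # the file the user added to master.ign
          mode: 420                        # 0644
          contents:
            source: data:text/plain;charset=utf-8;base64,ZXhhbXBsZQo=
```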
I guess there are some RBAC considerations:
- The current user-data secret ends up in the openshift-machine-api namespace; machine-config-operator#1792 worked around that by generating the pointer ignition manifest in the right namespace
- We could adjust the installer to write future user-data to the openshift-machine-config-operator namespace (so it can be read and converted into a MachineConfig resource by the MCO, or perhaps the installer just writes the data in that format?)
- On upgrade the plan from machine-config-operator#1792 remains, e.g. just reference the existing non-managed user-data secret?
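If the user-data did move into the MCO's namespace, the access grant could be a plain namespaced Role rather than anything cluster-scoped. A minimal sketch, assuming the operator's service account is named machine-config-operator (all names here are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-data-reader                  # hypothetical
  namespace: openshift-machine-config-operator
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["master-user-data", "worker-user-data"]
    verbs: ["get"]                        # resourceNames can only restrict get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-data-reader
  namespace: openshift-machine-config-operator
subjects:
  - kind: ServiceAccount
    name: machine-config-operator         # assumed SA name
    namespace: openshift-machine-config-operator
roleRef:
  kind: Role
  name: user-data-reader
  apiGroup: rbac.authorization.k8s.io
```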
I looked at the code, and it's probably simplest to have the installer generate a MachineConfig manifest that contains any config provided via `create ignition-configs`.
That avoids any changes to the MCO (other than restoring the reverted pointer-ignition change from @runcom) and potentially allows us to warn the user when config is provided via that interface.
I think I'm generally in favour of the MCO managing the stub configs coming from installer in one way or another, and this approach will probably work. I think we should keep the following points in mind:
1. Since the stub configs were never changed before, we never considered any dependency between the MAO and the MCO. This method would likely (as I understand it) introduce an ordering where the MCO has to process the stub config first before machines can be booted.
2. We implicitly supported "per-node" configuration with the stub config: for example, after you generated the stub config, you could configure each node's networking separately before boot with a different static IP via the ignition stub, and the MCO would be fine with it since it had no understanding of what existed in the stub (a per-node sketch follows this comment). I guess for this example, the generated configs with `openshift-installer create ignition-configs` would have been supplied to the nodes manually. Is this a case we care about?
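To make point 2 concrete, here is one possible shape of such a UPI-style per-node stub: it merges the shared master.ign and layers a host-specific static-IP NetworkManager keyfile on top. This is a YAML rendering for readability (real Ignition configs are JSON), and the URL and keyfile contents are placeholders:

```yaml
ignition:
  version: 3.1.0
  config:
    merge:
      - source: http://<httpd-host>/master.ign   # shared role config
storage:
  files:
    - path: /etc/NetworkManager/system-connections/eno1.nmconnection
      mode: 384                                  # 0600
      contents:
        # keyfile carrying this node's static IP (placeholder contents)
        source: data:text/plain;charset=utf-8;base64,<keyfile-for-this-host>
```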
Good points. I care about solving point number 2, as it seems we really never wanted to advertise the use of that (last we checked with @crawford).
> I think I'm generally in favour of the MCO managing the stub configs coming from installer in one way or another, and this approach will probably work. I think we should keep the following points in mind:
>
> 1. Since the stub configs were never changed before we never considered any dependency between the MAO and the MCO. This method would likely (as I understand it) introduce an ordering where the MCO has to process the stub config first before machines can be booted.

I think we can avoid this if instead the installer creates a MachineConfig manifest with any user customizations it finds on the `create cluster` phase, and we could either skip generating this or leave it empty in the case where the stub config output via the `create ignition-configs` phase is unmodified?

We can also warn the user if the decision is made to deprecate the modification of the stub config with this approach (which may be harder if we have the MCO process the stub config, in addition to the ordering concern you raise above).

> 2. We implicitly supported "per-node" configuration with the stub config, for example after you generated the stub config, you could configure each node's networking separately before boot with a different static IP via the ignition stub, and the MCO would be fine with it since it had no understanding of what existed in the stub. I guess for this example, the generated configs with `openshift-installer create ignition-configs` would have been supplied to the nodes manually. Is this a case we care about?

Good point - IIUC this has only ever worked for the UPI case, where you download the stub config from the installer `create ignition-configs` phase, then host some per-host modified configs somewhere outside of the cluster?

In the IPI case, there's only a single secret per role, so per-node config is not currently supported via that workflow.

So, I think to ensure both workflows continue to work the same as today, we just need to ensure that the new MachineConfig object either isn't generated or is empty at the `create ignition-configs` stage, and that we can detect and re-generate that asset in the case where some user-customization happened between `create ignition-configs` and `create cluster` (I guess we can compare the assets loaded from file with that generated inside the installer).
This is an alternative to openshift#467
Superseded by #540
Draft proposal to add MCO management of a "flattened" ignition configuration - this could be used to break the chicken/egg problem evident in some network configurations (particularly in the case of baremetal), e.g. https://bugzilla.redhat.com/show_bug.cgi?id=1824331