RFC: Adding a Network TUI #427
OK so I will keep disagreeing that DHCP is an "assumption" - it's more that it's the default. It's completely supported to use. Second, this definitely isn't somehow specific to RHCOS - I don't see a reason for the model to deviate between the two given the strong overlap in use cases. And among other things, OKD 4 exists and uses FCOS, and e.g. people are already stumbling over things only fixed in RHCOS. You mention UX and I agree with that! For the cases where people are catching grub prompts, that's clearly awful and we need to address it. OK, now there's a whole pile of prior art and related discussion on this. Just a few of the ones I've linked previously: One point I want to highlight is that I've seen for example some people say "Ignition requires networking". That's not actually true per the first issue above (when one gets a config injected at
Now we've discussed some VMware and "non-PXE/DHCP bare metal" cases as falling into this. I think for VMware the main solution is to have a sane way to provide hypervisor metadata to configure the network - which was discussed extensively, but all of that is buried in some RHT-internal Google doc I believe. For the "non-DHCP metal" cases, there's a whole lot of variety in this, but I think a good one to focus on is systems provisioned via IPMI, in particular where the administrator can log into the management console and attach an ISO image. With the new install model, I think a good solution for this is using. A variant of this is creating the same customized ISO images, but having an administrator physically present in the data center walking up to a machine with a crash cart or equivalent, sticking in a USB stick flashed with that ISO, booting from it, and then again - done.
It's always valid for an Ignition config to request additional resources from the network. A particular config might not do so, so networking isn't always necessary, but in general we can't assume that.
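As an illustration of this point, a minimal Ignition v3 config that references a network resource might look like the following sketch (the path and URL are hypothetical placeholders); writing this one file requires working networking at Ignition run time, even though nothing else in the config does:

```json
{
  "ignition": { "version": "3.0.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/motd",
        "mode": 420,
        "contents": { "source": "https://example.com/motd" }
      }
    ]
  }
}
```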
We have been told that we cannot require customized machine-specific images, and that some users expect to type in their network configuration at the console.
Is there any reason why e.g.
OK. So my proposal for those cases is that we boot into a live system; the user can run whatever tooling they want to generate an Ignition config and/or a network config. In either case, we mount. And this "boot into live system on failure" might be the default on certain platforms like
Ignition by design applies the desired state. If Ignition is unable to fetch the files or set up the disks, then it fails. The question of resolving is mooted by your second point of booting into a live system: why do that just for the fetch? What about files or disks?
Right, I know? What I'm arguing for is basically to support providing a network and/or Ignition config interactively. Ignition effectively wouldn't have run on the first boot other than to determine that no config was provided.
I don't understand - or maybe I am not explaining the idea well enough. In this proposal basically
So in this "no config" case, the files stage only writes the bits to enable volatile/autologin; disks does nothing. The system boots live and with autologin. User can manually generate network config and/or drop a config into
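The "bits to enable autologin" could be a standard getty drop-in of the kind the FCOS live ISO uses. This is a sketch only; the drop-in path, `agetty` location, and the `core` user are assumptions:

```ini
# /etc/systemd/system/getty@tty1.service.d/autologin.conf
# Hypothetical drop-in: clear the stock getty command, then replace it
# with one that logs the "core" user in automatically on the console.
[Service]
ExecStart=
ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM
```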
Mostly just space. A referenced file could be arbitrarily large.
An important sub-thread in this, though: if a user is manually typing in e.g. network config, how did they configure where to get Ignition? For VMware today, that's a guestinfo property AFAICS. I think the fact that the Ignition config is provided via a property strongly argues for doing the same for networking. In what scenario can one provide a property for the config location but need to use a console for network? (Other than the fact we haven't implemented fetching network configs from a property on VMware.) For bare metal... is there a scenario where we know where to find Ignition, but the admin can't configure networking in the same way? (Usually this is the kernel cmdline.) I don't think there is one after we fix coreos/coreos-installer#164
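For reference, the VMware guestinfo interface mentioned here can be set with `govc`. The `guestinfo.ignition.config.*` keys are the existing Ignition interface; a corresponding guestinfo key for network config is what's being proposed, not something that exists. The VM name is a placeholder:

```shell
# Embed a base64-encoded Ignition config into the VM's guestinfo properties.
govc vm.change -vm my-vm \
  -e "guestinfo.ignition.config.data=$(base64 -w0 config.ign)" \
  -e "guestinfo.ignition.config.data.encoding=base64"
```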
So to rephrase and tie together what I'm arguing here: We should support configuring where to find Ignition and the base network config in a symmetrical way. If we're going to do some sort of "fail into a console and run nmtui" or whatever for networking, why not support the admin also entering the Ignition config URL there too? And the same for bare metal. And because we support injecting Ignition into the ISO, we should support injecting network config too.
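A sketch of that symmetry using coreos-installer: the `iso ignition embed` subcommand exists (names have varied across versions), while an equivalent operation for kernel arguments is what coreos/coreos-installer#164 asks for, so treat the second command as hypothetical:

```shell
# Embed an Ignition config into a live ISO (existing functionality).
coreos-installer iso ignition embed -i config.ign fedora-coreos-live.iso

# Hypothetical symmetric operation: embed first-boot network kargs too.
coreos-installer iso kargs modify \
  -a 'ip=192.0.2.42::192.0.2.1:255.255.255.0:host1:ens192:none' \
  fedora-coreos-live.iso
```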
I think the answer is the same for both bare metal and (to a lesser extent) VMware: it might be possible to have one customized image (e.g. Yeah, we'll probably need to allow typing in an Ignition config URL too. But the user may want to embed the Ignition config (or URL) and type in the network config.
All things considered, @cgwalters your idea has tremendous merit. The question of it just being networking is only part of that. I would suggest that we should fail the fetch for all platforms in a similar way.
It's funny, though, because this changes a fundamental idea of Ignition-based systems: no user interaction. And if we're opening the door to user interaction, let's give an opportunity to fix whatever is needed.
I am only suggesting doing this on two platforms. We wouldn't fail into an interactive console on e.g. AWS because it doesn't even have one. |
Here's more thoughts on this. In cases where we can fetch Ignition without a network (
which we'd then drop into
Or alternatively, go fully general and support a
To do this we would have to remove the
If we implemented this, then
It would also obsolete this PR.
In parallel to this ticket I was going through a PoC for
I think that longer term this kind of logic could fit into an
Couple of things I saw while going through this:
Only if we want to hook into the "legacy" networking code and the initqueue stuff. If we use NM-in-initrd in a modern way as a systemd unit, then we can clearly express ordering here by just having
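Assuming NM-in-initrd really does run as an ordinary systemd unit, the ordering could be expressed with a drop-in along these lines. Both unit names here are assumptions; the actual initrd units may be named differently:

```ini
# /usr/lib/systemd/system/ignition-fetch.service.d/10-after-network.conf
# Hypothetical drop-in: don't start Ignition's fetch stage until
# NetworkManager-in-initrd has brought networking up.
[Unit]
After=nm-initrd.service
Wants=nm-initrd.service
```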
I could imagine having separate systemd unit defaults for the bootup sequence depending on whether or not Ignition needs network by default or something. Though, I think a generator would work too.
RFE for this: https://bugzilla.redhat.com/show_bug.cgi?id=1814038
Also related #279
Closing this as the Request for Comment has been satisfied. |
Replying to the comment at coreos/afterburn#379 (comment):
I agree (though I am not fully confident in the design choice of having a different firstboot network config). If there is general consensus that we are solid with the above design, should I go ahead and start tracking tickets to move 1) initrd network bringup and 2) cloud hostname setup into Ignition?
Followup on my previous comment: neither @jlebon nor @bgilbert think that the logic drafted at coreos/afterburn#379 should be placed in Ignition. Sentiment is to keep Ignition less distro-opinionated, so they'd prefer placing the kargs-augmenting feature somewhere else (e.g. the draft in Afterburn would be fine).
For the bare-metal targets, FCOS and RHCOS both make the assumption that DHCP will serve instances with a network identity. In the case of RHCOS, we have found that this assumption does not hold true.
Previously we have instructed users to use `ip=` kargs. Feedback from users is that the UX is painful; we have been asked to come up with a more ergonomic method.
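For context, these are dracut-style `ip=` kernel arguments; a static configuration for a single interface looks roughly like this (addresses, hostname, and interface name are placeholders):

```text
ip=192.0.2.42::192.0.2.1:255.255.255.0:host1:ens192:none nameserver=192.0.2.53
```

The positional fields are client IP, peer, gateway, netmask, hostname, interface, and autoconf method - easy to get wrong when typed by hand at a console.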
Requirements for the solution include:
I would like to encourage a robust discussion. Please bring the pitchforks out.