Provide minimal ISO #661

Closed
cgwalters opened this issue Nov 4, 2020 · 19 comments
Comments

@cgwalters
Member

This came up in the RHCOS context - before https://github.com/openshift/enhancements/blob/master/enhancements/rhcos/liveisoinstall.md we had a small "installer.iso" that was basically just the initramfs plus a coreos-installer shell script.

For FCOS and RHCOS 4.6+ the ISO is pretty large and in some cases attaching it via virtual media to a management console can be very slow.

The RFE here is to provide something like a -net.iso that's basically the ISO without the rootfs. Along with coreos/coreos-installer#341, the user would configure the live system's networking and the rootfs location via kargs, plus embed an Ignition config.

Another variant is to also embed coreos-installer in the initramfs for this. It'd increase the size, but it would also mean that for cases where one literally just wants to do an install and not do anything fancy with podman (or Ignition in general), the install could be started via kargs as well, and there'd be no need to mirror the rootfs.
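
As a rough sketch of how that could look with today's coreos-installer subcommands (`iso ignition embed`, and `iso kargs modify`, which was added after this issue was filed); the ISO name, URL, and network details are placeholders:

```sh
coreos-installer iso ignition embed -i config.ign fedora-coreos-net.iso
coreos-installer iso kargs modify \
  -a "coreos.live.rootfs_url=http://192.0.2.1/fedora-coreos-live-rootfs.x86_64.img" \
  -a "ip=192.0.2.10::192.0.2.1:255.255.255.0:node0:enp1s0:none" \
  -a "nameserver=192.0.2.53" \
  fedora-coreos-net.iso
```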

@yrobla

yrobla commented Nov 4, 2020

Apart from that, I'd need some flexibility to do some configuration:

  1. Advanced network config - I'd like to inject an Ignition file (or equivalent kargs) that can do the following:
  • decide which interfaces have DHCP turned on/off - we cannot allow all interfaces to have DHCP enabled by default
  • add static IP/gateway/DNS configuration for a specific NIC (the primary interface)
  • create bonds
  • define VLANs for the primary interface/bond
  • customize the hostname
  2. Endpoint customization:
  • the call-home endpoint needs to be customized. It will depend on information specific to each cluster, so we need to be able to define it beforehand (URL + GET parameters, likely)
  • additionally, we may need to do some introspection (for example, getting the MAC address of the primary interface, or some DHCP information...) and be able to pass it as GET/POST parameters to the call-home endpoint
  • specify the disk where the content retrieved from the endpoint should be copied, and copy it to that disk

In order to do that, I wanted to inject a local Ignition file (I cannot rely on an external ignition.config_url as I have no network at that point). The Ignition config would contain files for configuring the network and the hostname, plus a service that performs this introspection and call-home.
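
Most of the networking items above can be expressed as the dracut/NetworkManager kernel arguments handled by the live initramfs. A minimal sketch, assuming hypothetical addresses and interface names (adjust to your environment); the call-home/introspection items are not covered by kargs and would still need a script or unit delivered via Ignition:

```
# static IP, gateway, DNS, and hostname on one NIC, with DHCP disabled ("none")
ip=192.0.2.10::192.0.2.1:255.255.255.0:worker-0:enp1s0:none nameserver=192.0.2.53

# active-backup bond of two NICs carrying the same static address
bond=bond0:enp1s0,enp2s0:mode=active-backup ip=192.0.2.10::192.0.2.1:255.255.255.0:worker-0:bond0:none nameserver=192.0.2.53

# VLAN 100 on top of the bond
vlan=bond0.100:bond0 ip=192.0.2.10::192.0.2.1:255.255.255.0:worker-0:bond0.100:none nameserver=192.0.2.53
```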

@cgwalters
Member Author

I think most of your advanced networking is possible via kargs except bonds. One thing I raised on this in the past: wouldn't it suffice for most cases not to bond during the install process, and instead configure the bond via e.g. MachineConfig/Ignition applied on the target system after it's installed? The install phase is very "transient"; it doesn't seem strictly necessary to me to have NIC redundancy there.

But I understand it's conceptually easier to just apply the same network configuration throughout in some cases; to do that would require us to add a mechanism to inject arbitrary network configuration into the initramfs, which we've so far avoided needing.
(See the thread at coreos/ignition#979.)

@dustymabe
Member

dustymabe commented Nov 4, 2020

This came up in the RHCOS context - before https://github.com/openshift/enhancements/blob/master/enhancements/rhcos/liveisoinstall.md we had a small "installer.iso" that was basically just the initramfs plus a coreos-installer shell script.

I understand the problem this is trying to solve but I'd really prefer to not increase the number of artifacts we ship. Maybe one option is to be able to run coreos-installer iso remove-rootfs or something (probably not possible).

I think most of your advanced networking is possible via kargs except bonds.

Bonds work via kargs just fine IIUC. Network teaming doesn't.

@yrobla

yrobla commented Nov 4, 2020

And what about VLANs? They're a must for us.

@dustymabe
Member

And what about VLANs? They're a must for us.

VLANs work via kargs too. There might be corner cases, but in general if you find something not working, there either should already be a bug for it or one should be opened.

@cgwalters
Member Author

I understand the problem this is trying to solve but I'd really prefer to not increase the number of artifacts we ship.

Size? UX considerations?

Maybe one option is to be able to run coreos-installer iso remove-rootfs or something (probably not possible).

Mmmm, I'd say it's possible - it's basically extracting the contents, removing the rootfs, and rerunning mkisofs, right? I'd be fine with that approach, but it's clearly heavier lifting than the "direct write" style logic we have today, and it would also require coreos-installer to grow a dependency on the mkisofs tool (I doubt there's a Rust implementation - okay, a quick search turns up mkisofs-rs, but I'm still not sure about switching).
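
For the record, the manual version of that is roughly the sketch below (untested; the rootfs path matches current live ISOs, the repack flags cover BIOS boot only, and UEFI would need additional El Torito options):

```sh
mkdir iso-root
xorriso -osirrox on -indev fedora-coreos-live.x86_64.iso -extract / iso-root
chmod -R u+w iso-root
rm iso-root/images/pxeboot/rootfs.img
# keep the original volume ID: the embedded kargs locate the ISO by its label
volid=$(isoinfo -d -i fedora-coreos-live.x86_64.iso | sed -n 's/^Volume id: //p')
mkisofs -o fedora-coreos-net.iso -R -J -V "$volid" \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  iso-root
```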

@bgilbert
Contributor

bgilbert commented Nov 4, 2020

We already have substantial complexity around our install options: live ISO, live PXE with appended initrd, live PXE with downloaded initrd, live Ignition config, network configuration with kargs, runtime network configuration with NM, and so on. I think we'd need a compelling reason to add further complexity to the feature matrix that we need to maintain and that our users need to navigate.

I'm also not seeing that this proposal solves any problems. Any bits we remove from the ISO image are additional bits we need to download before we can install to disk, which makes the ISO non-self-contained and either a) generates more Internet traffic or b) forces the user to locally host an additional artifact. If the goal is to reduce the number of bits that have to be pushed through the system firmware, I'd note that the ISO + management-console install method has always been somewhat of a fallback. Large installations should generally use PXE install instead, which already has a mechanism (coreos.live.rootfs_url) to accomplish exactly this.

I think our time would be better spent reducing package dependencies to decrease the size of the OS image.
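
For comparison, the PXE path with coreos.live.rootfs_url mentioned above looks roughly like this PXELINUX entry (file names, URLs, and the target disk are placeholders):

```
DEFAULT fcos
LABEL fcos
    KERNEL fedora-coreos-live-kernel-x86_64
    APPEND initrd=fedora-coreos-live-initramfs.x86_64.img ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://192.0.2.1/fedora-coreos-live-rootfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://192.0.2.1/config.ign
```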

@yrobla

yrobla commented Nov 4, 2020

Our case relates to remote clusters or remote worker nodes, where we do not have L2 connectivity, just L3. So the only option for booting a blank server is via BMC - virtual media.
Virtual media has problems... depending on the vendor, we may have size limits, or the transport used can be flaky or slow. That is the reason why we need to keep the ISO size to a minimum - we have some cases with 150 MB limits.
Apart from that, we can download the rootfs from another L3 endpoint, but we need to have the network configured beforehand... with IP, DHCP, and VLAN options. The network there is also slow, so if we have a chance to shrink the rootfs (we do not need the full dependencies), it would also help.

@cgwalters
Member Author

Yeah, I don't think we're going to get below 150MB ever 😄

I think that leaves us with:

  1. spend a bit of time helping out/documenting the steps to create such a tool that could be maintained by others
  2. shipping a tool to generate it
  3. shipping something like this

In increasing order of commitment. I think 1. would be pretty easy and, once done, unlikely to break - we added the rootfs_url recently, but it feels unlikely we'd do something like that again soon.

@dustymabe
Member

dustymabe commented Nov 4, 2020

Is there an existing ISO image that boots, allows the user to type in a URL to an iPXE config, and then executes it?

Would that solve the problem?

@cgwalters
Member Author

Is there an existing ISO image that boots, allows the user to type in a URL to an iPXE config, and then executes it?

Well, I typed "ipxe ISO" into a search engine and hit http://boot.ipxe.org/ipxe.iso which... yeah, wow, it's just 1MB. But it's not just about the URL: I think many cases will want to embed the networking configuration so installation can be fully automated, and I'm not sure that's supported by iPXE, though it clearly could be.

That's definitely one extreme of minimal, for sure; one issue I have with it is that there's no mechanism (AFAIK) to validate the integrity of the fetched content.
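
For context, a script that such an iPXE image chains to would look roughly like this (URLs are placeholders; this sketch does not address the integrity concern above):

```
#!ipxe
dhcp
kernel http://192.0.2.1/fedora-coreos-live-kernel-x86_64 initrd=main coreos.live.rootfs_url=http://192.0.2.1/fedora-coreos-live-rootfs.x86_64.img ignition.firstboot ignition.platform.id=metal
initrd --name main http://192.0.2.1/fedora-coreos-live-initramfs.x86_64.img
boot
```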

@bgilbert
Contributor

bgilbert commented Nov 4, 2020

It shouldn't be too difficult for you to build a custom image for your use case. Basically, generate a custom ISO containing the PXE kernel and initrd, a bootloader specifying the ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url= kernel arguments, and an appended initrd with config.ign. You can specify network configuration via kernel arguments or by adding NetworkManager configs to the appended initrd.
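
A minimal sketch of that build, assuming a BIOS-only isolinux layout and placeholder file names/URLs (untested, not an officially supported recipe):

```sh
mkdir -p iso/isolinux ignition

# PXE kernel + initrd from the official mirror
cp fedora-coreos-live-kernel-x86_64 iso/isolinux/vmlinuz
cp fedora-coreos-live-initramfs.x86_64.img iso/isolinux/initrd.img

# appended initrd carrying the live Ignition config at /config.ign
cp config.ign ignition/config.ign
( cd ignition && find . | cpio -o -H newc | gzip ) > iso/isolinux/ignition.img

cat > iso/isolinux/isolinux.cfg <<'EOF'
DEFAULT coreos
LABEL coreos
  KERNEL vmlinuz
  APPEND initrd=initrd.img,ignition.img ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://192.0.2.1/fedora-coreos-live-rootfs.x86_64.img
EOF

cp /usr/share/syslinux/isolinux.bin /usr/share/syslinux/ldlinux.c32 iso/isolinux/
mkisofs -o coreos-net.iso -R -J -V coreos-net \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table iso
```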

@yuvalk

yuvalk commented Nov 5, 2020

I think most of your advanced networking is possible via kargs except bonds. One thing I raised on this in the past: wouldn't it suffice for most cases not to bond during the install process, and instead configure the bond via e.g. MachineConfig/Ignition applied on the target system after it's installed? The install phase is very "transient"; it doesn't seem strictly necessary to me to have NIC redundancy there.

But I understand it's conceptually easier to just apply the same network configuration throughout in some cases; to do that would require us to add a mechanism to inject arbitrary network configuration into the initramfs, which we've so far avoided needing.
(See the thread at coreos/ignition#979.)

one problem with MachineConfig is that if the Ignition config needs to be different per node (a different IP, for example), then it'll need a pool per node, because the MCO doesn't allow appending node-specific settings ATM.

@jlebon
Member

jlebon commented Nov 5, 2020

one problem with MachineConfig is that if the Ignition config needs to be different per node (a different IP, for example), then it'll need a pool per node, because the MCO doesn't allow appending node-specific settings ATM.

For those cases, it's OK to edit the Ignition config directly (I'm assuming this is UPI). In the PXE-from-ISO workflow described in #661 (comment) you'd use kargs or NM configs. You can also extend this idea to e.g. have a script that inspects MAC addresses/chassis/serial numbers and determines the IP address to use from that, if possible (and of course, you can do the same from the regular live ISO workflow by embedding such a script in the ISO's Ignition config). That way you can keep the media generic.
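
As an illustration of the inspect-and-decide idea, a hypothetical script run from the live environment might look like this (the MAC-to-IP table, interface name, and addresses are all assumptions):

```sh
#!/bin/bash
set -euo pipefail

mac=$(cat /sys/class/net/enp1s0/address)
case "$mac" in
  52:54:00:aa:bb:01) addr=192.0.2.10/24 ;;
  52:54:00:aa:bb:02) addr=192.0.2.11/24 ;;
  *) echo "unknown MAC $mac" >&2; exit 1 ;;
esac

# configure a static connection via NetworkManager
nmcli con add type ethernet ifname enp1s0 con-name static-enp1s0 \
  ipv4.method manual ipv4.addresses "$addr" ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.53
nmcli con up static-enp1s0
```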

@ohadlevy

ohadlevy commented Nov 6, 2020

regarding using the iPXE ISO, I explored that initially, and while it's very appealing, there is a risk of a mismatch in network drivers, and I don't believe we will have a feature mapping between the iPXE and RHEL kernels, sadly.

for experimenting, I actually created a small tool to extract the bootable artifacts (kernel/initrd/ign/rootfs) from an assisted-installer ISO: https://github.com/ohadlevy/ai-ipxe

@yrobla

yrobla commented Nov 6, 2020

I did the equivalent, but with syslinux: https://github.com/redhat-ztp/ztp-iso-generator/blob/main/rhcos-iso/generate_rhcos_iso.sh
The kernel/initramfs are present on the mirror (https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/latest/).
I guess what we need is a supported small ISO built from this repo as well, just containing the kernel and ramdisk and without the rootfs. Is that something that can be produced and published as part of the repo? Or what is the supported workflow for producing it? (syslinux, iPXE, etc.)

@bgilbert
Contributor

bgilbert commented Nov 6, 2020

regarding using the iPXE ISO, I explored that initially, and while it's very appealing, there is a risk of a mismatch in network drivers, and I don't believe we will have a feature mapping between the iPXE and RHEL kernels, sadly.

Could you expand on that? The CoreOS initrd checks for a mismatched kernel or a mismatched rootfs, and will fail if all three don't come from the same OS build. Every FCOS and RHCOS release includes all three artifacts, so there shouldn't be any reason to use a mismatched kernel.

I guess what we need is a supported small ISO built from this repo as well, just containing the kernel and ramdisk and without the rootfs.

The suggestion is that you build this yourself, since you appear to have a specialized use case. This doesn't require you to rely on any internal or unstable APIs/ABIs, since you're essentially using the PXE artifacts in the supported way. You can use isolinux, GRUB, or some other bootloader as you prefer.

jlebon added a commit to jlebon/fedora-coreos-config that referenced this issue Jan 7, 2021
In the case where a user is PXE booting (or booting a custom made ISO as
suggested in coreos/fedora-coreos-tracker#661)
we want users to be able to specify their NM configurations using a
secondary initrd which overlays the config in `/etc/NetworkManager/` as
a better alternative to using networking kargs.

In that case, we do still want to propagate those configs into the real
root. This avoids users having to also specify the same config again via
Ignition.

Related: https://discussion.fedoraproject.org/t/25833
Related: coreos/fedora-coreos-tracker#661
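
For reference, building such a secondary initrd could look roughly like this (the keyfile content and interface name are placeholders; NetworkManager requires keyfiles to be mode 0600):

```sh
mkdir -p nm-initrd/etc/NetworkManager/system-connections
cat > nm-initrd/etc/NetworkManager/system-connections/enp1s0.nmconnection <<'EOF'
[connection]
id=enp1s0
type=ethernet
interface-name=enp1s0

[ipv4]
method=manual
address1=192.0.2.10/24,192.0.2.1
dns=192.0.2.53;
EOF
chmod 600 nm-initrd/etc/NetworkManager/system-connections/enp1s0.nmconnection
( cd nm-initrd && find . | cpio -o -H newc | gzip ) > nm-config.img
# then list nm-config.img as an additional initrd in the PXE/bootloader config
```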
@jlebon
Member

jlebon commented Jun 16, 2021

I'm going to temporarily lock this ticket because we have a potential path forward for this in coreos/coreos-installer#559, which we're discussing, and I don't want to spread out the discussion further. I'm not closing it yet because if we don't go with coreos/coreos-installer#559, we may need to circle back to some of the suggestions here, even if it's just "let's document the DIY path for this".

@coreos coreos locked and limited conversation to collaborators Jun 16, 2021
@jlebon
Member

jlebon commented Jun 24, 2021

#868 (comment) was approved. Let's close this one out.

@jlebon jlebon closed this as completed Jun 24, 2021
@dustymabe dustymabe added the status/pending-upstream-release label Jun 25, 2021
@dustymabe dustymabe removed the status/pending-upstream-release label Jul 15, 2021
@coreos coreos unlocked this conversation Aug 13, 2021