Rendering ignition config when access to MCS is not available #1690

Closed
kirankt opened this issue Apr 27, 2020 · 13 comments

kirankt commented Apr 27, 2020

Some context: We on the OpenShift baremetal IPI installer team are facing a situation where we need to pass the full ignition config to hosts prior to their deployment. We do this to get around the chicken-and-egg problem in situations where more advanced network configuration, such as interfaces on VLANs, needs to be in place before the node is deployed. The code for passing this to masters in the installer merged recently:
openshift/installer#3276

The issue arises after the cluster is fully deployed and new nodes have to be added. We use the Machine object (https://github.com/openshift/cluster-api-provider-baremetal/) to provide access to userData via a secret (originally the stub ignition config). We tried to create a new secret with the full ignition config by accessing the MCS from within a pod, but access to it is denied on the IPv4 network (by design). Our solution initially seemed to work because we had tested with IPv6 and were able to reach the MCS, but I think the general consensus is that pod networks will not be able to access the MCS URL to fetch the ignition config.

Please see:
openshift/ovn-kubernetes#144

With that out of the way, is there an API to render the ignition config from within a pod (via the MachineConfigPool object, perhaps)? In my brief perusal of the code, I noticed that the MCS does this, but it seems like those parts of the API are inaccessible, most probably by design. If not, would the team be amenable to something similar to this attempt, or is there a better way to approach this?

https://github.com/kirankt/machine-config-operator/blob/ignition-render-api/pkg/server/render.go


kirankt commented Apr 27, 2020

/cc @stbenjam, @celebdor

cgwalters commented

Hmm. The obvious thing is to just expose it as a non-hostnetwork service, but access to that service needs to be gated by something. Maybe we could require a secret that exists only in the machine-config-operator namespace, so anything that wants to render Ignition has to have been granted access to it.
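
To illustrate the gating idea: below is a hand-wavy sketch of a render endpoint that only answers requests presenting a token sourced from a Secret in the machine-config-operator namespace. The handler path, env var, and token plumbing are all invented for illustration; this is not existing MCO code.

```go
// Hypothetical sketch: gate an Ignition-render endpoint behind a shared token
// projected from a Secret that exists only in the machine-config-operator
// namespace. The path, env var, and token mechanism are invented for
// illustration and are not existing MCO code.
package main

import (
	"crypto/subtle"
	"net/http"
	"os"
	"strings"
)

// requireToken rejects requests whose Authorization header does not carry the
// expected bearer token.
func requireToken(expected string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		if subtle.ConstantTimeCompare([]byte(got), []byte(expected)) != 1 {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// Token value mounted/projected from the Secret (e.g. via an env var).
	token := os.Getenv("RENDER_TOKEN")

	render := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// A real service would return the rendered Ignition config here.
		w.Write([]byte(`{"ignition":{"version":"3.1.0"}}`))
	})

	http.Handle("/render", requireToken(token, render))
	http.ListenAndServe(":8080", nil)
}
```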


hardys commented Jun 15, 2020

Some more context around this requirement is in https://bugzilla.redhat.com/show_bug.cgi?id=1824331 - basically we need some way to break the chicken-and-egg cycle in cases where some network configuration is needed before a host is able to reach the MCS.

We discussed some ideas in coreos/ignition#979 but the solutions there are mostly specific to ISO deployments where the rendered config can be embedded into the disk image.

We need a solution that works for PXE boot, and specifically for baremetal IPI, which currently uses the OpenStack RHCOS image and is capable of reading a config-drive partition, where the size limits are large enough that we could provide the fully rendered config rather than just the pointer config with an append URL.

Users may customize the pointer ignition config generated by the installer, so we've discussed the idea of inlining the rendered config with the data URL scheme, which AFAICS should work with Ignition. If we go down that route, though, we need some way to retrieve the rendered config (or, even better, the pointer config with the rendered config already inlined).
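
To make the inlining idea concrete, here is a minimal sketch (not an existing installer or MCO API) of wrapping an already-rendered config into a spec-3 pointer config whose single merge source is a data URL:

```go
// Sketch: embed a rendered Ignition config into a pointer config as a data
// URL, so the host doesn't need to reach the MCS at first boot. Structure is
// illustrative; a real implementation would reuse the MCS/installer code paths.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// inlineRenderedConfig returns a spec-3 pointer config whose single merge
// source is the rendered config encoded as a data URL.
func inlineRenderedConfig(rendered []byte) ([]byte, error) {
	dataURL := "data:;base64," + base64.StdEncoding.EncodeToString(rendered)
	pointer := map[string]interface{}{
		"ignition": map[string]interface{}{
			"version": "3.1.0",
			"config": map[string]interface{}{
				"merge": []map[string]string{{"source": dataURL}},
			},
		},
	}
	return json.Marshal(pointer)
}

func main() {
	rendered := []byte(`{"ignition":{"version":"3.1.0"}}`)
	out, err := inlineRenderedConfig(rendered)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```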

cgwalters commented

One thing to bear in mind here is that this direction conflicts with the push in #1792 to have the MCO actively manage the pointer config provided to nodes.

I think we can resolve that conflict but it will need some design.


hardys commented Jun 15, 2020

> One thing to bear in mind here is that this direction conflicts with the push in #1792 to have the MCO actively manage the pointer config provided to nodes.
>
> I think we can resolve that conflict but it will need some design.

Can you expand on the concern here? I was thinking we could add some interface that enables downloading the pointer config from the MCS, but with the "inner" config also inlined via a data URL; in that case it would be compatible with the above approach (we'd just have to ensure we always download the special inlined config from the MCS).

cgwalters commented

Just to xref here, a while ago openshift-ansible switched over to using the Kube API to fetch MachineConfig rather than accessing the raw Ignition: openshift/openshift-ansible#11614

And it turns out the Windows node effort is also trying to fetch Ignition, though they just said they're going to also switch over to what openshift-ansible is doing.
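
For reference, that Kube API route looks roughly like the sketch below: read the pool's status to find the rendered MachineConfig and fetch it, using the dynamic client. It assumes the pod has RBAC to read the cluster-scoped machineconfiguration.openshift.io resources, hard-codes the "worker" pool, and (as discussed elsewhere in this thread) returns only the rendered config, not any pointer-config customizations.

```go
// Sketch: fetch the rendered MachineConfig for the "worker" pool via the Kube
// API instead of the MCS. Error handling is trimmed for brevity.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mcpGVR := schema.GroupVersionResource{Group: "machineconfiguration.openshift.io", Version: "v1", Resource: "machineconfigpools"}
	mcGVR := schema.GroupVersionResource{Group: "machineconfiguration.openshift.io", Version: "v1", Resource: "machineconfigs"}

	ctx := context.Background()
	pool, err := dyn.Resource(mcpGVR).Get(ctx, "worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The pool's status points at the currently rendered MachineConfig.
	renderedName, _, _ := unstructured.NestedString(pool.Object, "status", "configuration", "name")
	mc, err := dyn.Resource(mcGVR).Get(ctx, renderedName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("rendered MachineConfig:", mc.GetName())
}
```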

ravisantoshgudimetla commented

We're facing the same issue on the Windows Containers team. We would like to access the MCS endpoint while bootstrapping the Windows node, since we couldn't find the bootstrap kubeconfig in the worker MachineConfig object in the API. In the past we resolved this by accessing the MCS endpoint from the Windows host directly and configuring the Windows host to be a worker node; to be clear, this is what got us the bootstrap kubeconfig we need. If there is an alternate way that just uses the API, we're more than happy to change.


hardys commented Jun 17, 2020

> Just to xref here, a while ago openshift-ansible switched over to using the Kube API to fetch MachineConfig rather than accessing the raw Ignition: openshift/openshift-ansible#11614
>
> And it turns out the Windows node effort is also trying to fetch Ignition, though they just said they're going to also switch over to what openshift-ansible is doing.

We tried doing something similar but ended up reverting it; see openshift/installer#3589 and openshift/cluster-api-provider-baremetal#70.

The problem is that if a user injects some additional configuration via openshift-install's create ignition-configs interface, it ends up in the pointer ignition config, which is ignored in this approach. That's why I'm describing a new interface above that enables maintaining both the potentially customized pointer config and the inlined rendered config.


hardys commented Jun 19, 2020

Another problem is how to identify which MachineConfig to use for a given machine: in the openshift-ansible change referenced above, the MachineConfigPool is hard-coded as "worker". What if there are multiple MachineSets?

In cluster-api-provider-baremetal we need to be able to derive which MachineConfig to use for a specific Machine object. Would we use the role and expect a unique role per MachineSet and MachineConfigPool?
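
One possible convention (purely a sketch; the label key below is an assumption, not something this issue settles) would be to read the role off the Machine's labels and expect a MachineConfigPool of the same name:

```go
// Sketch: map a Machine to a MachineConfigPool by a role label. The label key
// is assumed for illustration; the actual mapping is the open question above.
package main

import "fmt"

// poolNameForMachine returns the MachineConfigPool name expected for a
// Machine, assuming each MachineSet sets a unique role label and a pool of
// the same name exists.
func poolNameForMachine(machineLabels map[string]string) (string, error) {
	const roleLabel = "machine.openshift.io/cluster-api-machine-role" // assumed label key
	role, ok := machineLabels[roleLabel]
	if !ok {
		return "", fmt.Errorf("machine has no %s label", roleLabel)
	}
	return role, nil // e.g. "worker" -> MachineConfigPool "worker"
}

func main() {
	name, err := poolNameForMachine(map[string]string{
		"machine.openshift.io/cluster-api-machine-role": "worker",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("use MachineConfigPool:", name)
}
```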


kirankt commented Jun 29, 2020

Since openshift/cluster-api-provider-baremetal (CAPBM) is an installer component that persists into day 2, we will need to access this from within the cluster for day-2 operations. It was mentioned somewhere that we should try deploying CAPBM as a privileged pod to see if we can access the MCS from the pod network. I have not tried this, but I do not see how it would work, given that the worker nodes' iptables rules explicitly block outgoing traffic to the MCS.

If we are allowed to render the ignition from within a pod, it will solve our issues.

cgwalters commented

We could make the rendered config a secret too. Though with this, of course, we need to deal with spec 2/spec 3 conversion, which today the MCS does dynamically. Could we just assume spec 3, or do you need the ability to get spec 2 for ≤ 4.5 nodes?


hardys commented Sep 9, 2020

> We could make the rendered config a secret too. Though with this, of course, we need to deal with spec 2/spec 3 conversion, which today the MCS does dynamically. Could we just assume spec 3, or do you need the ability to get spec 2 for ≤ 4.5 nodes?

I think we can assume spec 3, as deployment currently isn't possible in the scenarios targeted by this work (without workarounds), and it's only really a day-1 deploy-time issue.

@kirankt it's been suggested that we can track further discussion via openshift/enhancements#467, so perhaps you'd like to close this issue out and we'll capture the remaining discussion there?


kirankt commented Sep 9, 2020

>> We could make the rendered config a secret too. Though with this, of course, we need to deal with spec 2/spec 3 conversion, which today the MCS does dynamically. Could we just assume spec 3, or do you need the ability to get spec 2 for ≤ 4.5 nodes?
>
> I think we can assume spec 3, as deployment currently isn't possible in the scenarios targeted by this work (without workarounds), and it's only really a day-1 deploy-time issue.
>
> @kirankt it's been suggested that we can track further discussion via openshift/enhancements#467, so perhaps you'd like to close this issue out and we'll capture the remaining discussion there?

Sounds good, @hardys. Closing this issue.
