drop deprecated mao config #518
Conversation
/test e2e-aws
/lgtm
/test e2e-aws
Region          string `json:"region"`
Replicas        int    `json:"replicas"`
TargetNamespace string `json:"targetNamespace"`
Provider        string `json:"provider"`
Do we need these two? Can MAO install them on its own? It already has namespace opinions here. And the provider information is already included in the worker machine-set:
$ openshift-install --dir testing create manifests
$ cat testing/tectonic/99_openshift-cluster-api_worker-machineset.yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineSet
metadata:
  ...
spec:
  replicas: 1
  selector:
    ...
  template:
    metadata:
      ...
    spec:
      providerConfig:
        value:
          apiVersion: libvirtproviderconfig.k8s.io/v1alpha1
          kind: LibvirtMachineProviderConfig
          ...
      versions:
        ...
And also in the master machine entry:
$ cat testing/tectonic/99_openshift-cluster-api_master-machines.yaml
kind: List
apiVersion: v1
metadata:
  resourceVersion: ""
  selfLink: ""
items:
- apiVersion: cluster.k8s.io/v1alpha1
  kind: Machine
  metadata:
    ...
  spec:
    providerConfig:
      value:
        apiVersion: libvirtproviderconfig.k8s.io/v1alpha1
        kind: LibvirtMachineProviderConfig
        ...
    versions:
      ...
Hey @wking, we can possibly drop the namespace, but the provider is needed so the operator knows which provider controller to run to react to these objects with the right implementation.
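Roughly, the selection looks like the following sketch; the controller interface and the two stub implementations here are stand-ins made up for illustration, not the real actuator types from the provider repos:

package operator

import "fmt"

// controller stands in for a cluster-api provider controller; this
// interface and the two stubs below are assumptions for the sketch.
type controller interface {
	Run(stopCh <-chan struct{})
}

type awsController struct{}

func (awsController) Run(stopCh <-chan struct{}) { /* reconcile AWS machines */ }

type libvirtController struct{}

func (libvirtController) Run(stopCh <-chan struct{}) { /* reconcile libvirt machines */ }

// startProviderController picks the provider controller ahead of time,
// based on the Provider field from the operator config.
func startProviderController(provider string, stopCh <-chan struct{}) error {
	var c controller
	switch provider {
	case "aws":
		c = awsController{}
	case "libvirt":
		c = libvirtController{}
	default:
		return fmt.Errorf("unsupported provider %q", provider)
	}
	go c.Run(stopCh)
	return nil
}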
... the provider is needed so the operator knows which provider controller to run....
Don't both .spec.providerConfig.value.apiVersion and .spec.providerConfig.value.kind contain the provider information you need (see my examples above, where they are libvirtproviderconfig.k8s.io/v1alpha1 and LibvirtMachineProviderConfig respectively)?
I'm not sure what you mean here. The operator needs to know the provider so it can run a different provider controller depending on it. That controller then watches machines and understands only its specific kind. There's no global controller which decodes the kind and chooses an implementation at runtime; currently the cluster api requires choosing a provider controller ahead of time. Let me know if that does not make sense.
The operator needs to know the provider so it can run a different provider controller depending on it. That controller then watches machines and understands only its specific kind. There's no global controller which decodes the kind and chooses an implementation at runtime; currently the cluster api requires choosing a provider controller ahead of time.
Ah, maybe we should grow this ;). As it stands, can't we do the Go equivalent of:
oc get -o jsonpath --template '{.items[0].spec.template.spec.providerConfig.value.kind}' -n openshift-cluster-api machinesets
from the operator to figure out which controller to launch?
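Something like this sketch, say; hedging that the GVR and field path just mirror the jsonpath above, and that exact client-go signatures shift between releases:

package operator

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

// providerKind is the Go equivalent of the jsonpath query above: list
// the MachineSets in openshift-cluster-api and pull the providerConfig
// kind out of the first one.
func providerKind(cfg *rest.Config) (string, error) {
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return "", err
	}
	gvr := schema.GroupVersionResource{
		Group:    "cluster.k8s.io",
		Version:  "v1alpha1",
		Resource: "machinesets",
	}
	list, err := client.Resource(gvr).Namespace("openshift-cluster-api").List(metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	if len(list.Items) == 0 {
		return "", fmt.Errorf("no machinesets found")
	}
	// .items[0].spec.template.spec.providerConfig.value.kind
	kind, found, err := unstructured.NestedString(list.Items[0].Object,
		"spec", "template", "spec", "providerConfig", "value", "kind")
	if err != nil || !found {
		return "", fmt.Errorf("could not read providerConfig kind: %v", err)
	}
	return kind, nil
}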
@wking do you want the operator to deploy controllers based on the machinesets that were pushed to the API? :P I don't think that's a good idea.
Yeah, that's what I want. But I'm not very familiar with this space, so maybe it's crazy. Is there a big difference between the operator choosing based on a machineAPIOperatorConfig and the operator choosing based on the worker MachineSet? They seem similar to me, except the latter would be more DRY. And it also sets us up for "this is mostly an AWS cluster, but toss some workers up on GCP" at some point in the distant future.
@enxebre almost all operators are using the kube-system/cluster-config-v1 configmap's key install-config to discover configuration like the platform.
But weren't we planning on dropping that? I'll see if I can find a reference.
And it also sets us up for "this is mostly an AWS cluster, but toss some workers up on GCP"
I think we are getting too far ahead of ourselves :P. Our machines only talk over internal load balancers, so this will not work. Also, none of the other components are multi-cloud aware, and I don't think they should be, because simpler decision boundaries are easier to reason about.
But weren't we planning on dropping that? I'll see if I can find a reference.
We might drop it in favor of something else, but there will always be some discovery mechanism available to operators.
hey @wking @abhinavdahiya There are still too many moving pieces (downstream and upstream), which makes me hesitant about enabling a discovery mechanism that requires at least one object to exist before the controller which watches it can run. I can't see any effective immediate benefit, whereas it would put us on a path with loads of open questions and a bunch of new cases that we'd have to contemplate and that are difficult to reason about at this point (what happens if the operator sees two objects for different clouds? Use the first one? Do they map to the same cluster object? Or do we support simultaneous clouds? If so, that's a whole different story, etc., etc.). The cluster api itself might eventually support a more dynamic system for running different controller implementations based on different machine classes, or we could think about it when the time comes, but it's just too soon. I think for now using https://github.com/openshift/installer/blob/master/pkg/types/installconfig.go#L34 makes sense. I'll update the operator to consume the installconfig so it's totally disconnected from this configmap, then I'll update this PR to drop it all.
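For reference, consuming it could look roughly like the sketch below; the installConfig type here is a hand-abbreviated stand-in for the real one in pkg/types/installconfig.go, and the YAML library and client-go signatures are assumptions:

package operator

import (
	"fmt"

	"github.com/ghodss/yaml"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// installConfig mirrors only the platform fields this sketch needs; the
// full type lives in the installer's pkg/types package.
type installConfig struct {
	Platform struct {
		AWS     *struct{} `json:"aws,omitempty"`
		Libvirt *struct{} `json:"libvirt,omitempty"`
	} `json:"platform"`
}

// platformFromClusterConfig reads the install-config key from the
// kube-system/cluster-config-v1 ConfigMap and reports which platform
// the cluster was installed on.
func platformFromClusterConfig(client kubernetes.Interface) (string, error) {
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get("cluster-config-v1", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	var ic installConfig
	if err := yaml.Unmarshal([]byte(cm.Data["install-config"]), &ic); err != nil {
		return "", err
	}
	switch {
	case ic.Platform.AWS != nil:
		return "aws", nil
	case ic.Platform.Libvirt != nil:
		return "libvirt", nil
	}
	return "", fmt.Errorf("install-config does not declare a known platform")
}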
I'll update the operator to consume the installconfig...
I dunno how future-proof this will be, but I'm completely fine with it as a short-term solution. And who knows, maybe it will last longer than I think ;).
/hold Can we drop the namespace?
Dropping all config now. See openshift/machine-api-operator@35a6e3a for reference.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: abhinavdahiya, enxebre, wking
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
need to drop
/hold cancel
This config is not needed for the MAO anymore since:
1 - We moved to CRDs:
openshift/machine-api-operator#100
openshift/cluster-api-provider-aws#81
openshift/cluster-api-provider-libvirt#73
2 - MachineSet objects are now generated by the installer (openshift/machine-api-operator#101), so the relevant config is no longer needed in the operator.