Make the infra object available for template rendering #943
Conversation
Hi @rgolangh. Thanks for your PR. I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
// infra holds the infrastructure details
// TODO this makes platform redundant as everything is contained inside Infra.Status
Infra configv1.Infrastructure
Pointer?
Why a pointer? Are the values changing in a different place?
It maintains the pattern used with the other infra objects, and it's useful to differentiate between set/unset (not that it's needed now, but still).
So are we going to use that for setting a value on it? Is that done by the generated code, maybe? (Help me out here, because I'd like to understand it.)
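To make the set/unset point above concrete, here is a minimal, self-contained Go sketch (the `Infrastructure` type is a stand-in for `configv1.Infrastructure`, and `describe` is a hypothetical helper, not code from this PR): a pointer field lets callers distinguish "never set" (nil) from "set to a zero value", which a plain struct field cannot express.

```go
package main

import "fmt"

// Infrastructure is a stand-in for configv1.Infrastructure
// (github.com/openshift/api/config/v1) for illustration only.
type Infrastructure struct {
	APIServerURL string
}

// renderConfig mirrors the struct under review: a *pointer* field lets us
// distinguish "not set" (nil) from "set, possibly to the zero value".
type renderConfig struct {
	Infra *Infrastructure
}

// describe is a hypothetical helper showing the nil check a consumer would do.
func describe(c renderConfig) string {
	if c.Infra == nil {
		return "infra unset"
	}
	return "infra set: " + c.Infra.APIServerURL
}

func main() {
	fmt.Println(describe(renderConfig{}))
	fmt.Println(describe(renderConfig{
		Infra: &Infrastructure{APIServerURL: "https://api.example.com:6443"},
	}))
}
```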
pkg/operator/render.go
@@ -23,6 +23,7 @@ type renderConfig struct {
	APIServerURL string
	Images *RenderConfigImages
	KubeAPIServerServingCA string
	Infra configv1.Infrastructure
nit: struct spacing.
This might actually get flagged in our linter.
/ok-to-test
I'd just like to mention that the OpenStack and baremetal folks could really use this change (for basically the same reasons as @rgolangh is adding it for oVirt).
Related-to: openshift/installer#1948
Are we OK with removing platform, as this PR implies? I'd just like to have this idea explicitly 👍'd so we are clear going forward. cc: @runcom @cgwalters see: #943 (comment)
@kikisdeliveryservice @runcom how do we continue?
/retest
// infra holds the infrastructure details
// TODO this makes platform redundant as everything is contained inside Infra.Status
Infra *configv1.Infrastructure
please add the missing json:"infra" tag
8d1cc80 to 0f2187b
The experimental OpenStack backend used to create an extra server running DNS and load balancer services that the cluster needed. OpenStack does not always come with DNSaaS or LBaaS, so we had to provide the functionality the OpenShift cluster depends on (e.g. the etcd SRV records, the api-int records & load balancing, etc.). This approach is undesirable for two reasons: first, it adds an extra node that the other IPI platforms do not need; second, this node is a single point of failure.

The Baremetal platform has faced the same issues and solved them with a few virtual IP addresses managed by keepalived, in combination with a coredns static pod running on every node (using the mDNS protocol to update records as nodes are added or removed) and a similar haproxy static pod to load balance the control plane internally. The VIPs are defined here in the installer and use the PlatformStatus field to be passed to the necessary machine-config-operator fields: openshift/api#374

The Bare Metal IPI Networking Infrastructure document is broadly applicable here as well: https://github.com/openshift/installer/blob/master/docs/design/baremetal/networking-infrastructure.md

Notable differences in OpenStack:
* We only use the API and DNS VIPs right now
* Instead of Baremetal's Ingress VIP (which is attached to the OpenShift routers), our haproxy static pods balance ports 80 & 443 to the worker nodes
* We do not run coredns on the bootstrap node. Instead, bootstrap itself uses one of the masters for DNS.

These differences are not fundamental to OpenStack and we will be looking at aligning more closely with the Baremetal provider in the future. There is also a great opportunity to share some of the configuration files and scripts here.

This change needs several other pull requests:
* Keepalived plus the coredns & haproxy static pods in the MCO: openshift/machine-config-operator/pull/740
* Passing the API and DNS VIPs through the installer: openshift#1998
* Vendoring the OpenStack PlatformStatus changes in the MCO: openshift/machine-config-operator#978
* Allowing use of PlatformStatus in the MCO templates: openshift/machine-config-operator#943

Co-authored-by: Emilio Garcia <egarcia@redhat.com>
Co-authored-by: John Trowbridge <trown@redhat.com>
Co-authored-by: Martin Andre <m.andre@redhat.com>
Co-authored-by: Tomas Sedovic <tsedovic@redhat.com>

Massive thanks to the Bare Metal and oVirt people!
Looks like the only thing left here is to run
I believe we have an FFE for #795, and it depends on this patch.
Confirmed with @celebdor there is an active FFE covering this.
/hold cancel
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kikisdeliveryservice, LorbusChris, rgolangh. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/test e2e-aws
Adds pods to master and worker nodes as appropriate. Updates the haproxy container to use the openshift/router-haproxy image instead of docker.io/library/haproxy. Also adds liveness tests for the coredns, mdns-publisher, haproxy and keepalived static pods, changes the worker node /etc/resolv.conf to point to the node's IP instead of 127.0.0.1, and fixes the bug generating the haproxy cfg file.

Depends-On: openshift#943
Depends-On: openshift#984
Adds pods to master and worker nodes as appropriate. Updates the haproxy container to use the openshift/router-haproxy image instead of docker.io/library/haproxy. Also adds liveness tests for the coredns, mdns-publisher, haproxy and keepalived static pods, changes the worker node /etc/resolv.conf to point to the node's IP instead of 127.0.0.1, and fixes the bug generating the haproxy cfg file.

Depends-On: openshift#943
Depends-On: openshift#984

Conflicts:
templates/common/baremetal/files/baremetal-coredns-corefile.yaml
templates/common/baremetal/files/baremetal-coredns.yaml
templates/common/baremetal/files/baremetal-mdns-publisher.yaml
templates/master/00-master/baremetal/files/baremetal-haproxy-haproxy.yaml
templates/master/00-master/baremetal/files/baremetal-haproxy.yaml
templates/master/00-master/baremetal/files/baremetal-keepalived-keepalived.yaml
templates/master/00-master/baremetal/files/baremetal-mdns-config.yaml
templates/master/00-master/baremetal/files/dhcp-dhclient-conf.yaml
templates/worker/00-worker/baremetal/files/baremetal-mdns-config.yaml
templates/worker/00-worker/baremetal/files/dhcp-dhclient-conf.yaml
Adds pods to master and worker nodes as appropriate. Updates the haproxy container to use the openshift/router-haproxy image instead of docker.io/library/haproxy. Also adds liveness tests for the coredns, mdns-publisher, haproxy and keepalived static pods, changes the worker node /etc/resolv.conf to point to the node's IP instead of 127.0.0.1, and fixes the bug generating the haproxy cfg file.

Because both use the same image, there was a bit of confusion here. We want keepalived to track the OCP Router at 1936, and we want the API LB haproxy pod to be health checked at 50936, which is where we configure haproxy to expose health.

Depends-On: openshift#943
Depends-On: openshift#984
oVirt and possibly other platforms need access to the InfrastructureStatus
to properly render config files with values like the API VIP, DNS VIP, etc.
By embedding the infra object, it is available for rendering in
templates - for example a coredns config file like this:
How to verify it
TODO will create a test
Fixes: #812
Required-by: #795