Make the infra object available for template rendering #943

Merged: 1 commit merged into openshift:master from the issue-812 branch on Jul 23, 2019

Conversation

@rgolangh (Contributor) commented Jul 10, 2019:

oVirt, and possibly other platforms, needs access to the InfrastructureStatus
to properly render config files with values such as the API VIP, DNS VIP, etc.

By embedding the infra object, it becomes available for rendering in
templates - for example, in a coredns config file like this:

{{ ControllerConfig.Infra.Status.PlatformStatus.Ovirt.DnsVIP }} api-int.{{.EtcdDiscoveryDomain}} api.{{.EtcdDiscoveryDomain}} ns1.{{.EtcdDiscoveryDomain}}
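For illustration, a minimal, self-contained sketch of the mechanism this enables, using Go's text/template with hypothetical stand-in types rather than the real configv1 or MCO render structs:

package main

import (
	"os"
	"text/template"
)

// Hypothetical stand-ins for the openshift/api config/v1 types; the field
// names only illustrate the shape the templates walk through.
type OvirtPlatformStatus struct{ DNSVIP string }
type PlatformStatus struct{ Ovirt *OvirtPlatformStatus }
type InfrastructureStatus struct{ PlatformStatus *PlatformStatus }
type Infrastructure struct{ Status InfrastructureStatus }

// Once Infra is embedded in the render config, templates can reach into its
// status the same way they already reference EtcdDiscoveryDomain.
type renderConfig struct {
	EtcdDiscoveryDomain string
	Infra               Infrastructure
}

func main() {
	tmpl := template.Must(template.New("corefile").Parse(
		"{{.Infra.Status.PlatformStatus.Ovirt.DNSVIP}} api-int.{{.EtcdDiscoveryDomain}}\n"))
	cfg := renderConfig{
		EtcdDiscoveryDomain: "example.cluster.local",
		Infra: Infrastructure{Status: InfrastructureStatus{
			PlatformStatus: &PlatformStatus{Ovirt: &OvirtPlatformStatus{DNSVIP: "10.0.0.10"}},
		}},
	}
	_ = tmpl.Execute(os.Stdout, cfg) // prints: 10.0.0.10 api-int.example.cluster.local
}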

How to verify it
TODO will create a test

Fixes: #812
Required-by: #795

@openshift-ci-robot openshift-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Jul 10, 2019
@openshift-ci-robot (Contributor):

Hi @rgolangh. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jul 10, 2019

// infra holds the infrastructure details
// TODO this makes platform redundant as everything is contained inside Infra.Status
Infra configv1.Infrastructure
Member:

Pointer?

Contributor Author:

Why a pointer? Are the values changing in a different place?

Member:

It maintains the pattern used by the other infra objects and is useful to differentiate between set/unset (not that it's needed now, but still).
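A small sketch of the set/unset distinction a pointer buys; the type below is a hypothetical stand-in, not the real configv1.Infrastructure:

package main

import "fmt"

// Hypothetical stand-in type, just to show the set/unset point.
type Infrastructure struct{ APIServerInternalURL string }

type byValue struct {
	Infra Infrastructure // zero value looks the same as "never set"
}

type byPointer struct {
	Infra *Infrastructure // nil unambiguously means "not set"
}

func main() {
	v := byValue{}
	p := byPointer{}
	fmt.Println(v.Infra == Infrastructure{}) // true, but was it set to empty or never set?
	fmt.Println(p.Infra == nil)              // true: clearly never set
}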

Contributor Author:

So are we going to use that for setting a value to it? Is that maybe the generated code? (Help me get this, because I'd like to understand it.)

@@ -23,6 +23,7 @@ type renderConfig struct {
APIServerURL string
Images *RenderConfigImages
KubeAPIServerServingCA string
Infra configv1.Infrastructure
Contributor:

nit: struct spacing.

This might actually get flagged in our linter.
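For reference, roughly what gofmt-aligned fields would look like; the types here are hypothetical stand-ins so the snippet compiles on its own:

package render

// Stand-ins for the real types referenced by the render config.
type RenderConfigImages struct{}
type Infrastructure struct{}

// gofmt pads the types of adjacent fields into a column, which is the
// "struct spacing" being asked for when a new field such as Infra is added.
type renderConfig struct {
	APIServerURL           string
	Images                 *RenderConfigImages
	KubeAPIServerServingCA string
	Infra                  Infrastructure
}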

@cgwalters (Member):

/ok-to-test
/meow

@openshift-ci-robot (Contributor):

@cgwalters: [cat image]

In response to this:

/ok-to-test
/meow

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jul 10, 2019
@tomassedovic (Contributor):

I'd just like to mention that the OpenStack and baremetal folks could really use this change (for basically the same reasons as @rgolangh is adding it for oVirt).

@kikisdeliveryservice (Contributor) commented Jul 11, 2019:

Related-to: openshift/installer#1948

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jul 11, 2019
@kikisdeliveryservice (Contributor):

Are we OK with removing platform, as this PR implies? I'd just like to have explicit agreement 👍 on this idea so we are clear going forward.

cc: @runcom @cgwalters

see: #943 (comment)

@kikisdeliveryservice kikisdeliveryservice removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jul 11, 2019
@rgolangh (Contributor, Author):

@kikisdeliveryservice @runcom how do we continue?

@rgolangh (Contributor, Author):

/retest


// infra holds the infrastructure details
// TODO this makes platform redundant as everything is contained inside Infra.Status
Infra *configv1.Infrastructure
Member:

Please add the missing json:"infra" tag.
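A sketch of the field with the requested tag; the surrounding type name is illustrative rather than the exact MCO definition:

package v1

// Hypothetical stand-in for configv1.Infrastructure.
type Infrastructure struct{}

type ControllerConfigSpec struct {
	// infra holds the infrastructure details
	Infra *Infrastructure `json:"infra"`
}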

@rgolangh rgolangh force-pushed the issue-812 branch 2 times, most recently from 8d1cc80 to 0f2187b Compare July 18, 2019 10:46
tomassedovic pushed a commit to tomassedovic/openshift-installer that referenced this pull request Jul 18, 2019
The experimental OpenStack backend used to create an extra server
running DNS and load balancer services that the cluster needed.
OpenStack does not always come with DNSaaS or LBaaS so we had to provide
the functionality the OpenShift cluster depends on (e.g. the etcd SRV
records, the api-int records & load balancing, etc.).

This approach is undesirable for two reasons: first, it adds an extra
node that the other IPI platforms do not need. Second, this node is a
single point of failure.

The Baremetal platform has faced the same issues and solved them with a
few virtual IP addresses managed by keepalived, in combination with a
coredns static pod running on every node (using the mDNS protocol to
update records as nodes are added or removed) and a similar haproxy
static pod to load balance the control plane internally.

The VIPs are defined here in the installer and are passed via the
PlatformStatus field to the necessary machine-config-operator fields:

openshift/api#374

The Bare Metal IPI Networking Infrastructure document is broadly
applicable here as well:

https://github.com/openshift/installer/blob/master/docs/design/baremetal/networking-infrastructure.md

Notable differences in OpenStack:

* We only use the API and DNS VIPs right now
* Instead of Baremetal's Ingress VIP (which is attached to the OpenShift
  routers), our haproxy static pods load balance ports 80 & 443 to the
  worker nodes
* We do not run coredns on the bootstrap node. Instead, bootstrap itself
  uses one of the masters for DNS.

These differences are not fundamental to OpenStack and we will be
looking at aligning more closely with the Baremetal provider in the
future.

There is also a great opportunity to share some of the configuration
files and scripts here.

This change needs several other pull requests:

Keepalived plus the coredns & haproxy static pods in the MCO:
openshift/machine-config-operator/pull/740

Passing the API and DNS VIPs through the installer:
openshift#1998

Vendoring the OpenStack PlatformStatus changes in the MCO:
openshift/machine-config-operator#978

Allowing to use PlatformStatus in the MCO templates:
openshift/machine-config-operator#943

Co-authored-by: Emilio Garcia <egarcia@redhat.com>
Co-authored-by: John Trowbridge <trown@redhat.com>
Co-authored-by: Martin Andre <m.andre@redhat.com>
Co-authored-by: Tomas Sedovic <tsedovic@redhat.com>

Massive thanks to the Bare Metal and oVirt people!
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 18, 2019
@LorbusChris (Member):

looks like the only thing left here is to run make update and commit :)

@bcrochet (Member):

Adding a hold so this doesn't merge without clearance via exception.

I believe we have an FFE for #795, and it depends on this patch.

@kikisdeliveryservice (Contributor) commented Jul 23, 2019:

Confirmed with @celebdor that there is an active FFE covering this.

/hold cancel

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jul 23, 2019
@kikisdeliveryservice (Contributor) left a comment:

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 23, 2019
@openshift-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kikisdeliveryservice, LorbusChris, rgolangh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [LorbusChris,kikisdeliveryservice]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@bcrochet (Member):

/test e2e-aws

@openshift-merge-robot openshift-merge-robot merged commit 5f14d31 into openshift:master Jul 23, 2019
@rgolangh rgolangh deleted the issue-812 branch July 24, 2019 04:59
tomassedovic pushed a commit to tomassedovic/openshift-installer that referenced this pull request Jul 24, 2019
bcrochet added a commit to bcrochet/machine-config-operator that referenced this pull request Jul 24, 2019
Adds pods to master and worker nodes as appropriate.

Updates the haproxy container to use the openshift/router-haproxy image
instead of docker.io/library/haproxy.

Also adds liveness tests for the coredns, mdns-publisher, haproxy and
keepalived static pods, changes the worker node /etc/resolv.conf to
point to the node's IP instead of 127.0.0.1, and fixes the bug in
generating the haproxy cfg file.

Depends-On: openshift#943
Depends-On: openshift#984
celebdor pushed a commit to celebdor/machine-config-operator that referenced this pull request Jul 25, 2019

Conflicts:
	templates/common/baremetal/files/baremetal-coredns-corefile.yaml
	templates/common/baremetal/files/baremetal-coredns.yaml
	templates/common/baremetal/files/baremetal-mdns-publisher.yaml
	templates/master/00-master/baremetal/files/baremetal-haproxy-haproxy.yaml
	templates/master/00-master/baremetal/files/baremetal-haproxy.yaml
	templates/master/00-master/baremetal/files/baremetal-keepalived-keepalived.yaml
	templates/master/00-master/baremetal/files/baremetal-mdns-config.yaml
	templates/master/00-master/baremetal/files/dhcp-dhclient-conf.yaml
	templates/worker/00-worker/baremetal/files/baremetal-mdns-config.yaml
	templates/worker/00-worker/baremetal/files/dhcp-dhclient-conf.yaml
bcrochet added a commit to bcrochet/machine-config-operator that referenced this pull request Jul 26, 2019
Adds pods to master and worker nodes as appropriate.

Updates the haproxy container to use the openshift/router-haproxy image
instead of docker.io/library/haproxy.

Also adds liveness tests for the coredns, mdns-publisher, haproxy and
keepalived static pods, changes the worker node /etc/resolv.conf to
point to the node's IP instead of 127.0.0.1, and fixes the bug in
generating the haproxy cfg file.

Because both use the same image, there was a bit of confusion here. We
want keepalived to track the OCP Router on port 1936, and we want the
API LB haproxy pod to be health checked at 50936, which is where we
configure haproxy to expose its health endpoint.

Depends-On: openshift#943
Depends-On: openshift#984
iamemilio pushed a commit to iamemilio/installer that referenced this pull request Jul 26, 2019
celebdor pushed a commit to bcrochet/machine-config-operator that referenced this pull request Jul 26, 2019
mandre pushed a commit to mandre/installer that referenced this pull request Jul 26, 2019
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
lgtm: Indicates that a PR is ready to be merged.
ok-to-test: Indicates a non-member PR verified by an org member that is safe to test.
size/S: Denotes a PR that changes 10-29 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

bm/ovirt: pass API_VIP/DNS_VIP from cluster config/installer and render templates with it
10 participants