Binding network bind_address is returning a VIP with a /32 CIDR if it is alphabetically first compared to the interface address #1556
Comments
For context: a Virtual IP is a technique where a fixed IP address is moved from one node to another when failing over. Arguably, the charm needs to know both:
(Aside: there are other uses of /32 addresses in clouds.) My take is that the charm must gain some logic or configuration to support complex deployments like those outlined in the OP. Specifically, if the current VM or pod is bound to multiple addresses, only the charm can determine which of the addresses should be used for what: talking to the workload, publishing on the peer relation, or publishing on normal relations.
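To make that concrete, here is a minimal sketch of what such charm-side logic could look like. The `vip` config option, the binding name `database-peers`, and the helper names are all hypothetical; the point is only that the charm, not ops, decides which address is used for which purpose:

```python
from typing import Optional

import ops


class MyCharm(ops.CharmBase):
    """Hypothetical charm that separates the address it binds to from the one it publishes."""
    # (Event wiring omitted; this only sketches the address-selection helpers.)

    def _bind_address(self) -> Optional[str]:
        # Address the workload should listen on: skip the configured VIP, if any.
        vip = self.config.get("vip")  # hypothetical config option holding the VIP
        binding = self.model.get_binding("database-peers")  # hypothetical binding name
        if binding is None:
            return None
        for interface in binding.network.interfaces:
            if str(interface.address) != vip:
                return str(interface.address)
        return None

    def _published_address(self) -> Optional[str]:
        # Address clients should be told to connect to: prefer the VIP when configured.
        return self.config.get("vip") or self._bind_address()
```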
Thanks @dimaqq. Just to add to that, here's the API you can use to list/filter all addresses (untested, but I think this is right):

```python
interfaces = self.model.get_binding(PEER).network.interfaces
# whatever filtering here:
ip = next((interface.address for interface in interfaces if _is_valid(interface.address)), None)
```
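For what it's worth, one possible shape for that filtering in this particular case, given the /32 CIDR mentioned in the title, is to treat host-only entries as likely VIPs. Like the snippet above, this assumes it runs inside charm code where `self` and `PEER` exist, and that `NetworkInterface.subnet` carries the CIDR Juju reported; `_looks_like_vip` is a made-up helper name and this is equally untested:

```python
def _looks_like_vip(interface) -> bool:
    """Treat host-only entries (/32 for IPv4, /128 for IPv6) as likely VIPs."""
    subnet = interface.subnet  # ipaddress network parsed from Juju's CIDR, or None
    return subnet is not None and subnet.prefixlen == subnet.max_prefixlen

interfaces = self.model.get_binding(PEER).network.interfaces
ip = next((i.address for i in interfaces if not _looks_like_vip(i)), None)
```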
Thank you both for your input. The problem here is that the virtual IP is not coming from the PostgreSQL service but from HAProxy, which is running on the same machine. To be more specific, the setup involves installing MAAS and PostgreSQL on the same machine, but also HAProxy sitting in front of MAAS, with the VIP functionality offered by Keepalived. But let's generalize a bit: a machine charm can be collocated with other machine charms on a machine, so the VIP can come from any other charm or even from something external to the charm world. Since always picking the first IP address is problematic, perhaps something along these lines could be used instead:

```python
if self.interfaces:
    if len(self.interfaces) == 1:
        return self.interfaces[0].address
    else:
        for iface in self.interfaces:
            # Skip host-only (/32) entries, which are likely VIPs.
            if iface.cidr != f"{iface.address}/32":
                return iface.address
```
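As a side note, here is a self-contained version of that heuristic written against plain (address, cidr) string pairs with the standard ipaddress module, so it can be unit-tested outside a charm. The function name, the fallback to the first address when every entry is host-only, and the /24 subnet used for the real interface in the example are my own assumptions, not something decided in this thread:

```python
import ipaddress
from typing import List, Optional, Tuple


def pick_bind_address(entries: List[Tuple[str, str]]) -> Optional[str]:
    """Return the first address whose CIDR is not host-only (/32 or /128),
    falling back to the first address if every entry looks like a VIP."""
    for address, cidr in entries:
        network = ipaddress.ip_network(cidr, strict=False)
        if network.prefixlen < network.max_prefixlen:
            return address
    return entries[0][0] if entries else None


# With the addresses from this issue, the VIP is skipped even though it sorts first:
print(pick_bind_address([("10.20.0.80", "10.20.0.80/32"), ("10.20.0.90", "10.20.0.0/24")]))
# -> 10.20.0.90
```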
Similar to juju/juju#18297, operator should try to exclude VIPs from the binding address. Currently, the operator is blindly returning the first IP address of the interface: https://github.com/canonical/operator/blob/main/ops/model.py#L1100-L1133. This is very problematic when, e.g., the keepalived charm is running on a machine and the practitioner has picked a VIP which is alphabetically first compared to the actual interface address of the machine.
For example, when the actual interface address is 10.20.0.90 and the VIP is 10.20.0.80, bind_address returns the VIP: https://github.com/canonical/postgresql-operator/blob/main/src/charm.py#L752. As a result, the application is crashing:
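Purely as an illustration of why "alphabetically first" bites here (the sort below is only a stand-in for however Juju happens to order the addresses, which this thread does not establish):

```python
# The VIP sorts before the real interface address when compared as strings,
# so any "take the first address" logic ends up with the VIP.
addresses = ["10.20.0.90", "10.20.0.80"]  # real interface address, VIP
print(sorted(addresses)[0])  # -> 10.20.0.80
```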