Agent Version
8.3.6
Operating System
N/A
OpenStack Release
Mitaka
Description
The SNAT model creates a SNAT pool per project. In a ‘multi pair’ installation (i.e. scaling device pairs horizontally), we see LBs from the same project scheduled across device pairs. The SNAT pool IPs are therefore shared between pairs, which leads to situations where traffic to an LB on one pair is eventually ‘blackholed’ on the second pair, taking down customer scenarios. We have fixed this by allowing the VS VIP to be used as the SNAT IP, ensuring uniqueness across device pairs. We discussed this in January but never heard anything back.
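To make the failure mode concrete, here is a toy sketch (all addresses and variable names are hypothetical, for illustration only) of why a shared per-project pool conflicts across device pairs, while per-VS VIPs do not:

```python
# Per-project SNAT pool: LBs for one project scheduled on two device
# pairs end up provisioning the same translation IPs on both pairs,
# so return traffic can land on the wrong pair and is blackholed.
project_snat_pool = ["192.168.1.10", "192.168.1.11"]  # shared per project
pair1_snats = set(project_snat_pool)  # provisioned on device pair 1
pair2_snats = set(project_snat_pool)  # provisioned on device pair 2
assert pair1_snats & pair2_snats  # conflict: same IPs live on both pairs

# Workaround described above: use each VS VIP as its SNAT IP.
# VIPs are unique per LB, so the SNAT IPs can never overlap.
pair1_snats = {"10.1.0.5"}  # VIP of the LB on pair 1
pair2_snats = {"10.1.0.6"}  # VIP of the LB on pair 2
assert not (pair1_snats & pair2_snats)  # unique per pair
```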
Deployment
Multiple BIG-IP device service clusters.
Additional Notes
This is a bug in our SNAT allocations when trying to scale out a single tenant’s services across multiple DSCs of BIG-IPs.
Currently our SNATs are created based on a naming scheme that cannot handle multiple BIG-IP DSCs whose agents share the same environment_prefix but have different environment_group numbers. When services are requested and the agent’s capacity score dictates placing load-balancing services on a different DSC than the one originally hosting a tenant’s services, new SNATs are not created; instead, the same SNAT translation addresses are provisioned on both DSCs. This creates an IP address conflict on the tenant subnets.
The solution is to change the SNAT naming convention to include the environment_group number, so that each DSC gets its own set of SNAT translation addresses allocated from the tenant’s subnets. The way tenant-based SNAT pools are created will also have to account for the environment_group number.
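As a rough illustration of the collision (the name format and function below are hypothetical, not the agent’s actual identifier scheme), a SNAT name derived only from environment_prefix and tenant is identical on every group:

```python
# Hypothetical sketch of the current naming scheme: environment_group
# is not part of the name, so agents sharing an environment_prefix
# derive the same SNAT name for a tenant on every DSC and therefore
# provision the same translation addresses on both.
def snat_name(environment_prefix, tenant_id):
    return "snat-%s_%s" % (environment_prefix, tenant_id)

name_on_dsc1 = snat_name("Project", "tenant-a")  # agent in group 1
name_on_dsc2 = snat_name("Project", "tenant-a")  # agent in group 2
assert name_on_dsc1 == name_on_dsc2  # identical name -> duplicate SNAT IPs
```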
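A minimal sketch of the proposed convention (the name format and signature are assumptions for illustration): folding environment_group into the name makes each DSC’s SNAT objects, and hence their allocated translation addresses, distinct:

```python
# Proposed scheme (illustrative): include the environment_group number
# in the SNAT name so each DSC allocates its own translation addresses
# from the tenant subnet instead of reusing another DSC's.
def snat_name(environment_prefix, environment_group, tenant_id):
    return "snat-%s-%d_%s" % (environment_prefix, environment_group, tenant_id)

assert snat_name("Project", 1, "tenant-a") != snat_name("Project", 2, "tenant-a")
```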