Overlay Network Support #88
Comments
Responding on this ticket instead of #82. This actually works for us as it is, since it now registers the private IP addresses that UCP uses to communicate, which is just as good as the overlay. It would be perfect if we didn't have to open a dynamic port on the public IP. Is there a way of doing that? In terms of suggestions, adding a new option like:
That sounds to me like the easiest way to explicitly define the network to communicate over.
To help me understand the use case: if you are putting them behind a load balancer, do you really need dedicated ports? Adding network support is a great suggestion, and it also gives us a good path for defaults and deprecation in the future. 👍
No need for dedicated ports; I misunderstood your fix at first. Ideally, though, I wouldn't have to open those random ports to the world for this to work, only the exposed ports. My use case is the most basic one, really: I need a load balancer for our web traffic, ports 80 and 443, across a UCP/Swarm cluster. On Docker Cloud I use this: Works exactly how I need Interlock to.
FYI, I'm working on this. The networking on Docker Cloud is different from Swarm, so things work differently. I should have something to test very soon.
Thanks Evan, shout if you need me to test anything.
Dedicated ports should be doable, though. With overlay networking, every container gets its own IP address, so there's no contention over ports. E.g., containers named service_app_1, service_app_2, and service_app_3 that are connected to the same network can all listen on their own port 8000, so your nginx.conf can look like
if you want.
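The config referred to above was not preserved in this thread, but a minimal sketch of what such an nginx.conf might look like follows. It assumes three containers named service_app_1 through service_app_3 on a shared overlay network, each listening on port 8000; the upstream name and server_name are illustrative only.

```nginx
# Sketch only: with overlay networking each container has its own IP,
# so all three backends can listen on the same port without conflict.
upstream service_app {
    server service_app_1:8000;
    server service_app_2:8000;
    server service_app_3:8000;
}

server {
    listen 80;
    server_name example.com;  # assumed domain

    location / {
        # Round-robin across the three containers by default.
        proxy_pass http://service_app;
    }
}
```

Because each backend is addressed by container name on the overlay network, no host ports need to be published for the proxy to reach them.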
@robbydooo there is an image available for testing overlay support. It's not finished, but you can test it if you want. You will need to add a label to tell Interlock what network it should expect the container to be in. There are still a few things to iron out, but it's available for testing if you like.
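The label mentioned above might be applied as sketched below. This is an assumption for illustration only: the label key `interlock.network`, the network name `mynet`, and the image name `myorg/myapp` are not confirmed by this thread and may differ from what the test image actually expects.

```shell
# Create an overlay network (assumed name: mynet).
docker network create -d overlay mynet

# Run the app container attached to that network, with a label
# (hypothetical key) telling Interlock which network to use.
docker run -d --net mynet \
    --label interlock.network=mynet \
    myorg/myapp
```

With the label in place, Interlock can look up the container's IP on the named overlay network instead of relying on a globally published port.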
Thanks Evan. What is left to do at this stage?
@robbydooo disconnecting the proxy containers from the networks when there are no more containers. This currently prevents removing a network, because the proxy containers are always attached.
We also noticed the published-ports detail the other day, and this is good news for us. But in UCP the primary way of deployment is via docker-compose, and if the proxy containers will prevent
@Tebro you can still use the publish port option.
@Tebro and as I said, this is experimental, as @robbydooo was asking for early access. It's not finished but will include the disconnect from the networks.
PR: #110
Currently, containers must expose ports in order for Interlock to find the node a container is running on. This change adds overlay network support, allowing Interlock to "attach" to the container's network and use the overlay instead of requiring a globally published port.