
Overlay Network Support #88

Closed
ehazlett opened this issue Mar 9, 2016 · 13 comments

Comments

@ehazlett
Owner

ehazlett commented Mar 9, 2016

Currently it is required to expose ports in order for Interlock to find the node that the container is using. This would add overlay network support to "attach" to the container network and use the overlay instead of requiring a globally published port.
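A rough sketch of the difference, assuming Docker 1.9+ multi-host networking (the image name here is illustrative, not from this issue; these commands depend on a running Swarm cluster):

```shell
# Today: the container must globally publish a port so that
# Interlock can discover which node it is running on.
docker run -d -p 8080:8080 example/app

# Proposed: create an overlay network and attach the app to it.
# Interlock would join the same network and reach the container
# by its network-scoped IP, with no published port required.
docker network create -d overlay app
docker run -d --net app example/app
```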

@robbydooo

Responding on this ticket instead of #82.

So this actually works for us as-is: it is now registering the private IP addresses that UCP uses to communicate, which works just as well as the overlay. It would be perfect if we didn't have to publish a dynamic port on the public IP. Is there a way of doing that?

In terms of suggestions, adding a new option like:

  • "interlock.network=networkname"

That sounds to me like the easiest way to explicitly define the network to communicate over.

@ehazlett
Owner Author

ehazlett commented Mar 9, 2016

To help me understand the use case: if you are putting them behind a load balancer, do you really need dedicated ports?

Adding network support is a great suggestion. It also gives us a good path for defaults and deprecation in the future. 👍

@robbydooo

No need for dedicated ports; I misunderstood your fix at first. Ideally, though, I wouldn't have to open those random ports to the world for this to work, and could just expose the ports instead.

The use case for me is the most basic one, really: I need a load balancer for our web traffic on ports 80 and 443 across a UCP/Swarm cluster.

On docker cloud I use this:
https://github.com/tutumcloud/haproxy

Works exactly how I need interlock to.

@ehazlett
Owner Author

FYI I'm working on this. The networking on Docker Cloud is different from Swarm so things work differently.

I should have something to test very soon.

@robbydooo

Thanks Evan,

Shout if you need me to test anything.

@etoews
Contributor

etoews commented Mar 11, 2016

Dedicated ports should be doable though. With overlay networking, every container gets its own IP address so there's no contention over ports.

e.g. containers named service_app_1, service_app_2, and service_app_3 that are connected to the same network can all listen on their own port 8000, so your nginx.conf can look like

upstream service {
  server service_app_1:8000;
  server service_app_2:8000;
  server service_app_3:8000;
}

if you want.
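For context, a fuller (hypothetical) nginx.conf built around that upstream might look like the following; the container names resolve on the overlay network itself:

```nginx
events {}

http {
    upstream service {
        # each container has its own IP on the overlay network,
        # so all three can listen on the same port
        server service_app_1:8000;
        server service_app_2:8000;
        server service_app_3:8000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://service;
        }
    }
}
```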

@ehazlett
Owner Author

@robbydooo There is an image available for testing overlay support. It's not finished, but you can test it if you want. You will need to add a label to tell Interlock which network it should expect the container to be in, i.e. (--label interlock.network=app). The image is available at ehazlett/interlock:overlay.

There are still a few things to iron out, but it's available for testing if you like.
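Putting the instructions from the comment above together (the interlock.network label is from the comment; the network and app image names are hypothetical, and these commands assume a running Swarm cluster):

```shell
# create the overlay network the application containers will join
docker network create -d overlay app

# label each application container so Interlock knows which
# network to attach to
docker run -d --net app --label interlock.network=app example/web
```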

@robbydooo

Thanks Evan, what is left to do at this stage?

@ehazlett
Owner Author

@robbydooo Disconnecting the proxy containers from the networks when there are no more containers. This currently prevents removing a network, because the proxy containers are always attached.

@Tebro

Tebro commented Mar 24, 2016

We also noticed the published-ports detail the other day, so this is good news for us. But in UCP the primary way of deploying is via docker-compose, and if the proxy containers prevent docker-compose down from working because the network is still attached, that is kind of a show stopper for us at this time.

@ehazlett
Owner Author

@Tebro you can still use the publish port option.

@ehazlett
Owner Author

@Tebro And as I said, this is experimental; @robbydooo was asking for early access. It's not finished, but it will include the disconnect from the networks.

@ehazlett
Owner Author

PR: #110


5 participants
@ehazlett @etoews @robbydooo @Tebro and others