Compose cannot recreate a container on Swarm if the container has a host port mapping #2866

Closed
jpetazzo opened this issue Feb 9, 2016 · 10 comments · Fixed by #2894

Comments

@jpetazzo

jpetazzo commented Feb 9, 2016

What I have

  • 5-node cluster
  • Compose 1.6.0
  • Swarm 1.1.0
  • Engine 1.10.0

What I do

With the following docker-compose.yml:

www:
  image: nginx
  ports:
    - "8888:80"

I run:

docker-compose up -d
docker-compose up -d --force-recreate

What I expect to see

Container is created, then recreated.

What I see instead

Container is created, but when trying to recreate it, I see:

Recreating repro_www_1
ERROR: unable to find a node that satisfies container==a0e31139c706d24a739b88034e1f0b266eb049b6b304580c3b29bf4535d1c958

(Where a0e3... is the ID of the existing container.)

Cause

Compose provides a placement constraint to force the new container to be on the same host as the old one. But when there is a port mapping, this fails, even if the container has been stopped first, because Swarm doesn't deal with port mappings the same way as Engine does (a stopped container is still considered to use the port).

I've attached the output of docker inspect (taken before recreating the container) and the verbose output of docker-compose up -d.

inspect.log.txt
compose.log.txt
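
To illustrate the cause outside of Compose, here is a minimal sketch against a classic Swarm endpoint, assuming the scheduler's affinity-via-environment syntax; the container names are illustrative:

docker run -d --name repro_www_1 -p 8888:80 nginx
docker stop repro_www_1
# The container is stopped, but the Swarm scheduler still counts port 8888
# as used on that node.
docker run -d --name repro_www_1_new -p 8888:80 \
    -e "affinity:container==repro_www_1" nginx
# Expected to fail like the Compose output above: the only node that
# satisfies the affinity is the one where 8888 is still considered taken.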

@dnephin

dnephin commented Feb 9, 2016

We discussed this a bit, and the only solution we have right now is to not set the container== constraint if the container doesn't have volumes to copy over. That way at least we reduce the problem to "only containers that use both a host port and shared volumes", but I don't know of a way to fix it for everything.

@jpetazzo

jpetazzo commented Feb 9, 2016

I have one use case where I need both placement constraint and port mapping: my load balancers are using volumes for dynamic reconfiguration.

Maybe Compose could do an indirection (roughly sketched at the end of this comment), i.e.:

  • identify the node on which the container is running
  • destroy the old container
  • start the new container with affinity

(A similar plan is to create a dummy container to be used as an anchor for constraints; I think this is what was done in early Compose version to manage volumes.)

We might also want to punt this to Swarm and ask them to change the way host port mappings are handled, but that might cause other breakage on their end!
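
For concreteness, a rough sketch of that indirection with the plain docker CLI, assuming a classic Swarm endpoint (the .Node field in docker inspect and the constraint:node== syntax are the classic Swarm mechanisms; names are illustrative):

# 1. Identify the node the old container is running on.
NODE=$(docker inspect -f '{{ .Node.Name }}' repro_www_1)
# 2. Destroy the old container, which frees the host port on that node.
docker rm -f repro_www_1
# 3. Start the new container pinned to the same node.
docker run -d --name repro_www_1 -p 8888:80 \
    -e "constraint:node==${NODE}" nginx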

@aanand

aanand commented Feb 9, 2016

I find it a bit odd that Swarm considers a stopped container to be using the port. That pretty much breaks Compose's recreate logic, along with what seems like a generally reasonable use case outside of Compose (stop container A, start container A', remove container A).

@dnephin

dnephin commented Feb 9, 2016

It is a bit unexpected at first; I added this note to the Swarm docs a while ago to make it clear: https://docs.docker.com/swarm/scheduler/filter/#configure-the-available-filters:03285bdc704a53de2afbdd4c6c57e10d

I think it's necessary, though (at least with the current architecture). If you checked constraints when starting the container instead of when creating it, you'd end up with a ton of race conditions: multiple containers get created on a node, one of them starts, and the rest then refuse to start because the constraints are no longer valid (the started container stole the resources). It would also be an issue with restart policies.

@dnephin

dnephin commented Feb 9, 2016

Maybe Compose could do an indirection

That might require Compose to be a lot more aware of Swarm.

It is possible that we could destroy the old container before creating the new one. The problem with that approach is that if there is an error after we remove the original container, we completely lose track of the volumes that need to be applied to the new container (which is really bad).

True, we could also re-introduce the intermediate container that we used before rename existed. Copy the volumes onto the intermediate (which doesn't have the port binding), then onto the final container. I'd really like to avoid re-introducing the intermediate container.

What if you were to use named volumes instead of the unnamed container volumes? If we did #2866 (comment) and you were able to use named volumes, I think it might work.

@dnephin dnephin added this to the 1.6.1 milestone Feb 11, 2016
@dnephin dnephin self-assigned this Feb 11, 2016
@dnephin dnephin removed this from the 1.6.1 milestone Feb 18, 2016
@dnephin dnephin reopened this Feb 18, 2016
@dnephin

dnephin commented Feb 18, 2016

We've implemented the discussed partial solution in #2894.

This is still an issue, but only when using container volumes with exposed or host ports.

Using named volumes or no volumes should work correctly.
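
As a minimal sketch of the named-volume variant, assuming the version 2 compose file format (the volume name and mount path are illustrative):

version: '2'
services:
  www:
    image: nginx
    ports:
      - "8888:80"
    volumes:
      # A named volume survives the recreate on its own, so Compose no longer
      # needs the container== affinity to copy data from the old container.
      - www_conf:/etc/nginx/conf.d
volumes:
  www_conf: {}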

@jpetazzo

Thank you!

@rmelick

rmelick commented Jun 3, 2016

@dnephin, could you take a quick read through #3453? From what I understand, using volumes_from would allow things to work correctly, but that doesn't seem to be the case.

@stale

stale bot commented Oct 10, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Oct 10, 2019
@stale

stale bot commented Oct 17, 2019

This issue has been automatically closed because it has not had recent activity during the stale period.

@stale stale bot closed this as completed Oct 17, 2019