Cannot start container xxxxx: container yyyyy not found, impossible to mount its volumes #1090
Comments
Dependent containers will be automatically co-scheduled once #972 is merged. |
My container wasn't linked to any others, but the layers were not on the same hosts. Thanks. |
@frntn it should be fixed if you use the latest version of swarm and compose: swarm has the built-in https://github.com/docker/swarm/tree/master/scheduler/filter#dependency-filter and #972 was merged. Can you retry? Containers using volumes-from (I assume it's your case) will end up on the same node. |
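(As a minimal sketch of what the dependency filter keys on, assuming a 2015-era v1 compose file; the service names here are made up, not taken from the thread. With a volumes_from entry like the one below, swarm's dependency filter is expected to schedule app on the same node as data.)

data:
  image: busybox
  volumes:
    - /data

app:
  image: ekino/haproxy:base
  volumes_from:
    - data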
Hello @vieux and thanks for your reply. I have retried and the problem remains.

Start without compose

When I am using the docker run command all is ok:

docker run \
  --name loadbalancer \
  -p 443 \
  -e constraint:env==integ \
  -e constraint:type==dmz \
  -v /data/etc/haproxy/haproxy.cfg:/etc/haproxy/haproxy.cfg \
  ekino/haproxy:base

Start with compose

But when using the equivalent with docker compose as below:
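(The compose file itself is not reproduced in this extract; purely as a hypothetical illustration, a v1 compose equivalent of the docker run command above might look like the following, with the service name being an assumption:)

loadbalancer:
  image: ekino/haproxy:base
  ports:
    - "443"
  environment:
    - "constraint:env==integ"
    - "constraint:type==dmz"
  volumes:
    - /data/etc/haproxy/haproxy.cfg:/etc/haproxy/haproxy.cfg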
Then I get the following error message:

Cannot start container c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024: container cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4 not found, impossible to mount its volumes

Analyze

In the swarm log file I see:

~: grep -B 1 c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024 swarm.log
time="2015-04-09T19:12:53+02:00" level=info msg="HTTP request received" method=POST uri="/v1.14/containers/create"
time="2015-04-09T19:12:53+02:00" level=info msg="HTTP request received" method=GET uri="/v1.14/containers/c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024/json"
time="2015-04-09T19:12:53+02:00" level=info msg="HTTP request received" method=POST uri="/v1.14/containers/c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024/start"
time="2015-04-09T19:12:53+02:00" level=debug msg="Proxy request" method=POST url=http://x.x.x.x:2375/v1.14/containers/c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024/start
Cannot start container c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024: container cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4 not found, impossible to mount its volumes

And when checking on the nodes I see the two layers are not on the same node:

~: for i in $(seq 1 4); do echo "==> node$i" ; ssh node$i "sudo find /var/lib/docker -name c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024 -or -name cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4"; echo; done
==> node1
==> node2
/var/lib/docker/aufs/diff/cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4
/var/lib/docker/aufs/mnt/cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4
/var/lib/docker/aufs/layers/cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4
/var/lib/docker/execdriver/native/cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4
/var/lib/docker/containers/cd61f82f02acd777e8fb3b4d348b77d50a5a156692ae77229e34395470a586a4
==> node3
==> node4
/var/lib/docker/aufs/diff/c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024
/var/lib/docker/aufs/mnt/c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024
/var/lib/docker/aufs/layers/c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024
/var/lib/docker/containers/c5d1a59cd4e8b62ed707a7a1b8f313f4f264de0196432d3d417cea9d3914e024

And my swarm state is:
Side note : Before retrying I had removed all the containers with the |
Digging into it right now. Think I have a lead. |
I have tcpdumped the HTTP requests between:

(The details for the run command and the content of the yaml are available in my comment above)
|
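(A sketch of how such a capture could be done, assuming the Docker Remote API endpoints listen on TCP port 2375; the interface, port and host names are placeholders, not taken from the thread:)

# between the compose client and the swarm manager
sudo tcpdump -i any -A -s 0 'tcp port 2375'

# between the swarm manager and a given node
sudo tcpdump -i eth0 -A -s 0 'tcp port 2375 and host node1'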
Ah, this sounds like an issue that would be resolved by #874 |
Indeed it seems so. It is a disruptive change and it may not be that trivial... For now I will go back to my shell scripts, and keep an eye on this issue before further compose integration in my projects. Thanks everyone ! |
Workaround

As I now understand the issue, I have managed to apply this very simple workaround: kill and remove all the containers, then bring everything up again:

docker-compose -p integ kill -s SIGKILL
docker-compose -p integ rm --force
docker-compose -p integ up -d

Now everything's working great \o/ Thanks ! :) I keep this issue open as this is just a workaround. |
#1349 is now merged. |
Any update? Was this fixed? |
Closing since there was no response (and I believe it to be fixed). |
Context

I have:
- a docker client set up to talk to my local swarm manager
- a swarm manager set up to talk to 2 remote nodes, node1 and node2
- a docker daemon bound to a specific ip:port
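(A sketch of what this wiring might look like, assuming 2015-era standalone swarm; the addresses, ports and discovery method are assumptions, not taken from the issue:)

# on node1 and node2: docker daemon listening on a TCP endpoint
docker -d -H tcp://0.0.0.0:2375

# on the manager host: swarm manager pointing at the two nodes
swarm manage -H tcp://0.0.0.0:4000 nodes://node1:2375,node2:2375

# on the workstation: point the docker client (and compose) at the swarm manager
export DOCKER_HOST=tcp://swarm-manager:4000
docker ps
docker-compose up -d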
What I do

When using my local docker client I can manage my remote containers via swarm without any problem. But then I have started using compose...

What I get
Everything is fine at the first docker-compose up -d run, but then on almost every re-run I get this kind of error:

Searching around, it turns out the first sha1 "xxxxx" is on node1 while the sha1 "yyyyy" is on node2 (!)

What I expected
Well... I'd like every layer to be created on the same node.
What I think

I'll dig into it later but I think it's an issue with the constraint:key==value environment variable.
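(One way to check that hypothesis, sketched here as an assumption rather than something done in the thread: inspect the environment of the container that compose created through swarm and see whether the constraint entries are present; the container name follows compose's project_service_index convention and is a guess.)

docker inspect --format '{{ .Config.Env }}' integ_loadbalancer_1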