Interlock removing network config of nginx container. #149
So this happens when you use Interlock in overlay mode. Basically, it checks to see whether any containers are bound to that network and removes the proxy so the network can be removed if desired. This looks like a bug, as this shouldn't be happening unless you use the
May or may not be related, but I know that Jason Wilder's nginx-based proxy project has a similar-looking issue.
Just to be clear: does this mean nginx (and HAProxy) is not really feasible when using an overlay network?
No, they are completely fine to use with overlay. I haven't been able to
@ehazlett I can verify that this issue does happen. My setup is:
So, when the whole shebang starts up (about 8 services), nginx is unresponsive.
OK, so then I scale the nginx service down to 0 and back up to 1, and we check the network setup again. Then we have:
And as expected, nginx responds with the default landing page (not my web entry point, sadly :( ). OK, so then I scale the web service back to 0 and then back to 1, Interlock does its thing, aaand nginx is dead again. Inspect networking again: everything is gone except loopback.
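In case it helps anyone scripting that inspection step: below is a minimal sketch using the Docker Go client (github.com/docker/docker/client; exact package names vary across client versions). The container name proxy_nginx_1 is taken from the inspect output later in this thread; everything else is illustrative.

// check_networks.go: print which networks the proxy container is attached to.
// When the bug triggers, the Networks map is empty ("Networks": {}) and the
// container is left with only its loopback interface.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}

	info, err := cli.ContainerInspect(context.Background(), "proxy_nginx_1")
	if err != nil {
		log.Fatal(err)
	}

	if len(info.NetworkSettings.Networks) == 0 {
		fmt.Println("proxy has no networks attached!")
	}
	for name, ep := range info.NetworkSettings.Networks {
		fmt.Printf("network %s: ip=%s\n", name, ep.IPAddress)
	}
}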
OK, thanks. Can you share the compose file you are using? I'm trying to
Overlay network created with
Had to clean up the compose file a bit to protect the innocent and the guilty alike; it should all be consistent still, though (I hope). Also, nothing pertinent was removed. Hope it helps. I'm going to switch to HAProxy quickly and see if the same thing happens.
I did similar steps to @demaniak. I created an external overlay network
So, a different subnet. That shouldn't affect it, should it?
I was wondering how you were routing between subnets, since the external network is on a /16. As long as the network has a referenced container attached, the proxy container won't be removed from it. However, if there are no more referenced containers attached to the network, it will remove the proxy container from it. The proxy container is intended to be a "black box", in that Interlock assumes full management over it.
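To make that behavior concrete, here is a minimal sketch of the decision, assuming the Docker Go client (github.com/docker/docker/client). This is an illustration, not Interlock's actual code; treating an "interlock.hostname" label as the reference marker is an assumption, and the names code_toolkit-net and proxy_nginx_1 come from elsewhere in this thread.

// A sketch of "disconnect the proxy when no referenced containers remain
// on the network". Hypothetical helper, not Interlock's implementation.
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// maybeDisconnectProxy disconnects proxyID from networkID unless some other
// container on the network satisfies isReferenced.
func maybeDisconnectProxy(ctx context.Context, cli *client.Client, networkID, proxyID string,
	isReferenced func(types.ContainerJSON) bool) error {
	net, err := cli.NetworkInspect(ctx, networkID, types.NetworkInspectOptions{})
	if err != nil {
		return err
	}
	for id := range net.Containers {
		if id == proxyID {
			continue // the proxy itself does not count as a reference
		}
		info, err := cli.ContainerInspect(ctx, id)
		if err != nil {
			return err
		}
		if isReferenced(info) {
			return nil // a referenced container is still attached; keep the proxy
		}
	}
	log.Printf("disconnecting proxy %s from network %s", proxyID, networkID)
	return cli.NetworkDisconnect(ctx, networkID, proxyID, false)
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	// Resolve the proxy's full ID, since the network's container map is keyed by ID.
	proxy, err := cli.ContainerInspect(ctx, "proxy_nginx_1")
	if err != nil {
		log.Fatal(err)
	}
	// Treating any container with an "interlock.hostname" label as referenced
	// is an assumption made for this sketch.
	ref := func(c types.ContainerJSON) bool {
		_, ok := c.Config.Labels["interlock.hostname"]
		return ok
	}
	if err := maybeDisconnectProxy(ctx, cli, "code_toolkit-net", proxy.ID, ref); err != nil {
		log.Fatal(err)
	}
}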
Same problem here. I tried to use v1.1.1 without overlay support (I don't set any interlock.network label) in a Swarm cluster, and when the Interlock container restarts the proxy container, all network interfaces are dropped, so the proxy is unreachable from outside. Tell me if I can help you with some more info.
I've also tried the same with Docker 1.11.1 and wrote up reproducible steps here: https://github.com/yongshin/docker-node-app-swarm-nginx
This is what I see in my docker-compose logs:
Hi guys, I have the very same problem: whenever Interlock reloads nginx, it drops all of its networks. Info:

version: "2"
services:
interlock:
image: ehazlett/interlock:1.1.3
command: -D run -c /etc/interlock/config.toml
container_name: interlock
ports:
- 8080
environment:
INTERLOCK_CONFIG: |
ListenAddr = ":8080"
DockerURL = "172.28.128.200:2376"
TLSCACert = "/etc/docker/ca.pem"
TLSCert = "/etc/docker/server.pem"
TLSKey = "/etc/docker/server-key.pem"
[[Extensions]]
Name = "nginx"
ConfigPath = "/etc/nginx/nginx.conf"
PidPath = "/var/run/nginx.pid"
TemplatePath = ""
MaxConn = 1024
Port = 80
NginxPlusEnabled = false
volumes:
- /etc/docker:/etc/docker
- nginx:/etc/nginx
nginx:
image: nginx:latest
entrypoint: nginx
command: -g "daemon off;" -c /etc/nginx/nginx.conf
ports:
- "80:80"
labels:
- "interlock.ext.name=nginx"
links:
- interlock:interlock
volumes:
- nginx:/etc/nginx
volumes:
nginx:
    driver: local

Swarm contains 2 nodes (virtual boxes) created by docker-machine, with the generic driver:
Interlock logs:
Running containers:
Docker inspect of nginx:

[
{
"Id": "45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314",
"Created": "2016-05-27T12:18:15.354680295Z",
"Path": "nginx",
"Args": [
"-g",
"daemon off;",
"-c",
"/etc/nginx/nginx.conf"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 32405,
"ExitCode": 0,
"Error": "",
"StartedAt": "2016-05-27T14:24:49.478530388Z",
"FinishedAt": "2016-05-27T14:24:46.532584224Z"
},
"Image": "sha256:b1fcb97bc5f6effb44ba0b5d60bf927e540dbdcfe091b1b6cd72f0081a12207c",
"ResolvConfPath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/hostname",
"HostsPath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/hosts",
"LogPath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314-json.log",
"Name": "/proxy_nginx_1",
"RestartCount": 0,
"Driver": "aufs",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"proxy_nginx:/etc/nginx:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "proxy_default",
"PortBindings": {
"80/tcp": [
{
"HostIp": "",
"HostPort": "80"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"StorageOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"BlkioIOps": 0,
"BlkioBps": 0,
"SandboxSize": 0
},
"GraphDriver": {
"Name": "aufs",
"Data": null
},
"Mounts": [
{
"Name": "proxy_nginx",
"Source": "/var/lib/docker/volumes/proxy_nginx/_data",
"Destination": "/etc/nginx",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "45435e365e89",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.11.0-1~jessie"
],
"Cmd": [
"-g",
"daemon off;",
"-c",
"/etc/nginx/nginx.conf"
],
"Image": "nginx:latest",
"Volumes": {
"/etc/nginx": {}
},
"WorkingDir": "",
"Entrypoint": [
"nginx"
],
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "c3814c416b84461eb48defdac2294eebd2243f9df646e285e5271399d07c289c",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "proxy",
"com.docker.compose.service": "nginx",
"com.docker.compose.version": "1.7.1",
"com.docker.swarm.constraints": "[\"node==ubuntu14\"]",
"com.docker.swarm.id": "bc0861ad423c490353b82028e9701b3c28c1f8d980a9cd5c522a02468508d099",
"interlock.ext.name": "nginx"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f2ade20636bd2072457ba420dbf1199fc3fb1bc361b876a6343f12dcc22086c7",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"443/tcp": null,
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "80"
}
]
},
"SandboxKey": "/var/run/docker/netns/f2ade20636bd",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
}
]

Reconnecting to the default network:
solves the problem, but then again it needs to be done after every nginx reload :(
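Until a fix lands, that manual reconnect can be automated. Below is a minimal sketch, assuming the Docker Go client (github.com/docker/docker/client; types.EventsOptions may be named differently in other client versions), that watches for the proxy container's start events and reconnects it to the compose network. proxy_nginx_1 and proxy_default are taken from the inspect output above; the rest is illustrative.

// reconnect.go: automate the "reconnect after every nginx reload" workaround.
// Not an Interlock feature; just a small watcher around the Docker events API.
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Only watch start events for the proxy container.
	f := filters.NewArgs()
	f.Add("container", "proxy_nginx_1")
	f.Add("event", "start")

	msgs, errs := cli.Events(ctx, types.EventsOptions{Filters: f})
	for {
		select {
		case m := <-msgs:
			log.Printf("proxy %s started; reconnecting to proxy_default", m.Actor.ID)
			if err := cli.NetworkConnect(ctx, "proxy_default", m.Actor.ID, &network.EndpointSettings{}); err != nil {
				log.Printf("reconnect failed (may already be connected): %v", err)
			}
		case err := <-errs:
			log.Fatal(err)
		}
	}
}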
OK, I've updated and created a test image (
Looks like it works! At least on my swarm setup. Big thanks!
I verified that it works on my swarm setup as well.
Awesome, thanks for replying!
Great! It works on my swarm cluster as well. How long do you think until you release 1.2? I will be running the test image on my production cluster until then. Is that risky?
It shouldn't be too long before the release. As long as you don't pull anything, the image won't change on your local system, and you should be good to test. Thanks!
Fixed in 401dbd9
The master image doesn't seem to have these changes.
I think this may be a duplicate of #88.
Here are the network settings of my nginx container:
Networks is empty... so it doesn't work with my overlay network, and I have to keep reconnecting it to the network.
Then I see this entry in the Interlock logs:
interlock_1 | DEBU[0001] disconnecting proxy container from network: id=LONG_STRING_ID net=code_toolkit-net ext=lb
Why is it doing this? It's annoying. Am I doing this incorrectly, or is there an option in Interlock to add the proxy to a certain network every time it reloads it?
This is my docker-compose...
Once again, this could be a duplicate of #88, but I just wanted to be sure.