Help: No connection after Server reboots #504
Comments
This is unfortunately expected because of how Docker handles networking. If you find a solution to this, please do let me know though. The workaround is to have everything in the same docker-compose.yml and …
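To illustrate the underlying cause, here is a quick check (a sketch; `transmission` is just an example container name sharing gluetun's network): Docker stores the shared network namespace as a reference to gluetun's container ID rather than its service name, so a re-created gluetun leaves that reference dangling.

```sh
# Sketch: inspect how Docker recorded the shared network namespace of a
# connected container ("transmission" is an example name, adjust as needed).
docker inspect --format '{{ .HostConfig.NetworkMode }}' transmission
# Typically prints something like:
#   container:3f9a...   <- pinned to gluetun's container ID, not its service name
```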
I could not get the command to run on boot with cron, but I found a way that makes sure the connected containers have internet access. The way I did this is to add health checks to those two containers plus another container called autoheal that restarts them if they become unhealthy. I found this website with a lot of helpful Docker config files to base my compose file off of: https://www.gitmemory.com/issue/htpcBeginner/docker-traefik/35/742486145

With this method, if the server reboots, the unhealthy containers get restarted. Unfortunately it does not work with watchtower: when watchtower updates gluetun it has to re-create it with a different container ID, and the dependent containers can no longer find gluetun unless they are re-created as well. I just turned off updates for gluetun so I do not have this issue. Hope this helps someone!

Here is my updated config file:
```yaml
---
version: "2.1"

networks:
  web:
    external: true
  internal:
    external: false

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    volumes:
      - ~/docker-services/gluetun:/gluetun
    restart: unless-stopped
    environment:
      - VPNSP=windscribe
      - REGION=US East
      - OPENVPN_USER=****
      - OPENVPN_PASSWORD=*****
      - TZ=America/New_York
    cap_add:
      - NET_ADMIN
    ports:
      - 9091:9091
      - 9117:9117
      - 49153:49153
      - 49153:49153/udp
      - 7000:8000/tcp
    networks:
      - internal
      - web
    labels:
      # com.centurylinklabs.watchtower.depends-on: jackett,transmission
      com.centurylinklabs.watchtower.enable: false

  transmission:
    image: linuxserver/transmission:latest
    container_name: transmission
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - TRANSMISSION_WEB_HOME=/combustion-release/ #optional
    volumes:
      - ~/docker-services/transmission/config:/config
    network_mode: "service:gluetun"
    depends_on:
      - gluetun
    restart: always
    labels:
      - autoheal=true
    healthcheck:
      test: ["CMD", "curl", "http://ifconfig.io"] # HTTP Control Server running on Gluetun
      interval: 30s
      timeout: 2s
      retries: 1

  jackett:
    image: linuxserver/jackett:latest
    container_name: jackett
    restart: always
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - RUN_OPTS=run options here #optional
    volumes:
      - ~/docker-services/jackett/config:/config
      - /mnt/seagate/Media/torrents/completed:/downloads
    network_mode: "service:gluetun"
    depends_on:
      - gluetun
    healthcheck:
      test: ["CMD", "curl", "http://ifconfig.io"] # HTTP Control Server running on Gluetun
      interval: 30s
      timeout: 2s
      retries: 1
    labels:
      - autoheal=true

  autoheal:
    container_name: autoheal
    image: willfarrell/autoheal:latest
    restart: always
    environment:
      - TZ=America/New_York
      - AUTOHEAL_START_PERIOD=45
      - AUTOHEAL_INTERVAL=30
      # - AUTOHEAL_CONTAINER_LABEL=all
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
```
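As a quick sanity check after a reboot (a sketch using the container names from the compose file above), you can verify connectivity from inside a connected container and list anything currently failing its healthcheck:

```sh
# Check external connectivity from inside a container sharing gluetun's network
# (curl is available there since the healthcheck above already uses it).
docker exec transmission curl -s https://ifconfig.io

# List containers that are currently failing their healthcheck.
docker ps --filter "health=unhealthy"
```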
Another solution, which does not require a program with access to the docker.sock, is to define a healthcheck for each connected container. For example, in docker-compose.yml:

```yaml
healthcheck:
  test: nslookup github.com || kill 1
  interval: 5s
  timeout: 3s
  retries: 1
  start_period: 10s # it usually takes 10s for the gluetun VPN to be connected
```

I haven't tested it yet, but feel free to try it out 😉

EDIT: For now that seems to kill it but it doesn't restart, strange.
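A minimal sketch of how that healthcheck could be combined with a restart policy (service names taken from the compose file above); note that whether `kill 1` actually terminates the container depends on how the image's PID 1 handles the signal, which may explain the EDIT above:

```yaml
# Sketch only: kill the container when DNS through the VPN fails and rely on
# the restart policy to bring it back up.
services:
  transmission:
    image: linuxserver/transmission:latest
    network_mode: "service:gluetun"
    restart: unless-stopped   # restarts the container after it exits
    healthcheck:
      # nslookup runs inside gluetun's network namespace; on failure, signal
      # PID 1 so the container exits. Some init processes ignore SIGTERM.
      test: nslookup github.com || kill 1
      interval: 5s
      timeout: 3s
      retries: 1
      start_period: 10s
```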
@TJJP I made this: https://github.com/qdm12/deunhealth to restart unhealthy containers that have the corresponding label set. It's quite similar to autoheal.

There is also more to come in the coming days which should especially fit gluetun's use case.

I'll document it in the Wiki 😉
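As a rough sketch of how deunhealth could slot into the compose file above (the exact label name below is an assumption and should be checked against the deunhealth README):

```yaml
# Sketch: deunhealth watches Docker events and restarts labelled containers
# when they become unhealthy. The label name is assumed, not verified here.
services:
  deunhealth:
    image: qmcgaw/deunhealth
    container_name: deunhealth
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  transmission:
    # ...existing transmission configuration from above...
    labels:
      - deunhealth.restart.on.unhealthy=true
```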
Thank you! It is really cool. I tried it out and it works! Is there a way to have it re-create the container instead of just stopping it? Maybe there could be a label that, if a container has it, would force-recreate the container. My biggest problem with the …
Thanks! |
It should restart the container, strange. Maybe this is due to your restart policy being …
Ah interesting, I didn't know that the container IDs change when gluetun gets re-created. I'll also do some testing; that's definitely useful to know for this case.
deunhealth does restart the container normally, whether my server restarts or the container becomes unhealthy. That part works great 😜 I have my restart policy set to "always". My problem is when watchtower pulls down an update and attempts to restart gluetun (with a new Docker container ID) as well as the other attached containers, which fails because they can no longer find gluetun after the ID change. It seems that when a container is started with its network set to another container's name, Docker pins the network to that container's specific ID rather than the name. That ID is only updated if each dependent container is re-created, but watchtower does not re-create them.
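As a sketch of the manual workaround this implies (service names from the compose file above), the dependent containers can be re-created after gluetun is re-created so that they pick up the new container ID:

```sh
# After gluetun has been pulled/re-created, re-create the containers that use
# network_mode: "service:gluetun" so they point at the new container ID.
docker-compose up -d --force-recreate transmission jackett
```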
Ah I get it. So restarting gluetun isn't the same as pulling a newer image and then re-creating it. I'll see what I can do; there might be a way to get the new gluetun network config and patch the other containers with the new hostname, maybe even without a restart. But clearly the TODO I mentioned (trigger restarts from a restart) won't actually help here.
Is this urgent?: No
Host OS (approximate answer is fine too): Ubuntu 18.04.5 LTS
CPU arch or device name: (GNU/Linux 5.4.0-74-generic x86_64)
What VPN provider are you using: Windscribe
What is the version of the program:
OpenVPN 2.5 version: 2.5.2
Unbound version: 1.13.0
IPtables version: v1.8.6
What's the problem 🤔
I am not sure if this is the right place to ask, but after I restart my server, every container attached to Gluetun can no longer be accessed; I have to manually restart each container and then it works again. Since it affects every container, it seems to be a problem with Gluetun. I have tried adding depends-on with a condition of service_healthy, but that has not changed anything. When I run curl ifconfig.io it just says "curl: (6) Could not resolve host: ifconfig.io". One of the services, Transmission, keeps printing an error like this after restart: "Couldn't connect socket 86, port 15430 (errno 99 - Address not available) (/home/buildozer/aports/community/transmission/src/transmission-3.00/libtransmission/net.c:339)"
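For reference, a sketch of the depends-on form mentioned above (compose file version "2.1" syntax); the condition only has an effect if the gluetun service defines a healthcheck:

```yaml
# Sketch of the depends_on/service_healthy form referred to above.
services:
  transmission:
    network_mode: "service:gluetun"
    depends_on:
      gluetun:
        condition: service_healthy
```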
Share your logs... (careful to remove any tokens, for example)
What are you using to run your container?: Docker Compose
Please also share your configuration file: