Help: No connection after Server reboots #504

Closed
TJJP opened this issue Jun 23, 2021 · 8 comments

TJJP commented Jun 23, 2021

Is this urgent?: No

Host OS (approximate answer is fine too): Ubuntu 18.04.5 LTS

CPU arch or device name: (GNU/Linux 5.4.0-74-generic x86_64)

What VPN provider are you using: Windscribe

What is the version of the program: OpenVPN version 2.5.2, Unbound version 1.13.0, IPtables version v1.8.6

Running version latest built on 2020-03-13T01:30:06Z (commit d0f678c)

What's the problem 🤔

I am not sure if this is the right place to ask this question, but after I restart my server, every container attached to Gluetun can no longer be accessed; I have to manually restart each container and then it works again. Since it affects every container, it seems to be a problem with Gluetun. I have tried adding depends_on with a condition of service_healthy, but that has not changed anything. When I run curl ifconfig.io it just says "curl: (6) Could not resolve host: ifconfig.io". One of the services, Transmission, keeps printing an error like this after the restart: "Couldn't connect socket 86, port 15430 (errno 99 - Address not available) (/home/buildozer/aports/community/transmission/src/transmission-3.00/libtransmission/net.c:339)"

Share your logs... (careful to remove any tokens)



What are you using to run your container?: Docker Compose

Please also share your configuration file:

  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    volumes:
      - ~/docker-services/gluetun:/gluetun
    restart: unless-stopped
    environment:
      - VPNSP=windscribe
      - REGION=US East
      - OPENVPN_USER=############
      - OPENVPN_PASSWORD=##########
      - TZ=America/New_York
    cap_add:
       - NET_ADMIN
    ports:
      - 9091:9091
      - 9117:9117
      - 49153:49153
      - 49153:49153/udp
    networks:
      - internal
      - web
    labels:
      com.centurylinklabs.watchtower.depends-on: jackett,transmission

  transmission:
    image: linuxserver/transmission
    container_name: transmission
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - TRANSMISSION_WEB_HOME=/combustion-release/ #optional
    volumes:
      - ~/docker-services/transmission/config:/config
      - /mnt/seagate/Media/torrents:/mnt/media/torrents
    network_mode: "service:gluetun" 
    # depends_on:
      # gluetun:
        # condition: service_healthy
#    ports:
#      - 9091:9091
#      - 49153:49153
#      - 49153:49153/udp
    restart: unless-stopped

  jackett:
    image: linuxserver/jackett
    container_name: jackett
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - RUN_OPTS=run options here #optional
    volumes:
      - ~/docker-services/jackett/config:/config
      - /mnt/seagate/Media/torrents/completed:/downloads
    network_mode: "service:gluetun" 
    # depends_on:
      # gluetun:
        # condition: service_healthy
#    ports:
#      - 9117:9117
    restart: unless-stopped

qdm12 commented Jun 24, 2021

This is expected unfortunately, because of how Docker handles networking. If you find a solution to this, please please let me know though.

The workaround is to have everything in the same docker-compose.yml and run docker-compose down && docker-compose up -d to restart EVERYTHING. Or have some shell script if your containers are split up in multiple docker-compose.yml files.
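
For example, a boot-time script along these lines could do it (a sketch only; the directory names below are hypothetical placeholders for wherever your compose files live):

#!/bin/sh
# Restart the gluetun stack first, then the stacks of containers attached to it.
cd ~/docker-services/gluetun-stack && docker-compose down && docker-compose up -d
cd ~/docker-services/media-stack && docker-compose down && docker-compose up -d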

qdm12 closed this as completed Jun 24, 2021

TJJP commented Jul 12, 2021

I could not get the command to run on boot-up with cron, but I found a way that makes sure the connected containers have internet access. I added health checks to those two containers plus another container called autoheal that restarts them if they become unhealthy. I found this website with a lot of helpful Docker config files to base my compose file on: https://www.gitmemory.com/issue/htpcBeginner/docker-traefik/35/742486145

With this method, the containers get restarted if the server reboots, but it unfortunately does not work with watchtower: when watchtower updates gluetun it has to recreate it with a different container ID, so the dependent containers can no longer find gluetun unless they are recreated as well. I just turned off updates for gluetun so I do not have this issue. Hope this helps someone!

Here is my updated config file:

---
version: "2.1"
networks:
  web:
    external: true
  internal:
    external: false

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    volumes:
      - ~/docker-services/gluetun:/gluetun
    restart: unless-stopped
    environment:
      - VPNSP=windscribe
      - REGION=US East
      - OPENVPN_USER=****
      - OPENVPN_PASSWORD=*****
      - TZ=America/New_York
    cap_add:
       - NET_ADMIN
    ports:
      - 9091:9091
      - 9117:9117
      - 49153:49153
      - 49153:49153/udp
      - 7000:8000/tcp 
    networks:
      - internal
      - web
    labels:
      # com.centurylinklabs.watchtower.depends-on: jackett,transmission
      com.centurylinklabs.watchtower.enable: "false"
    
  transmission:
    image: linuxserver/transmission:latest
    container_name: transmission
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - TRANSMISSION_WEB_HOME=/combustion-release/ #optional
    volumes:
      - ~/docker-services/transmission/config:/config
    network_mode: "service:gluetun" 
    depends_on:
      - gluetun
    restart: always
    labels:
      - autoheal=true
    healthcheck:
      test: ["CMD", "curl", "http://ifconfig.io"] # checks internet access through the gluetun tunnel
      interval: 30s
      timeout: 2s
      retries: 1

  jackett:
    image: linuxserver/jackett:latest
    container_name: jackett
    restart: always
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - RUN_OPTS=run options here #optional
    volumes:
      - ~/docker-services/jackett/config:/config
      - /mnt/seagate/Media/torrents/completed:/downloads
    network_mode: "service:gluetun" 
    depends_on:
      - gluetun
    healthcheck:
      test: ["CMD", "curl", "http://ifconfig.io"] # checks internet access through the gluetun tunnel
      interval: 30s
      timeout: 2s
      retries: 1
    labels:
      - autoheal=true
  
  autoheal:
    container_name: autoheal
    image: willfarrell/autoheal:latest
    restart: always
    environment:
      - TZ=America/New_York
      - AUTOHEAL_START_PERIOD=45
      - AUTOHEAL_INTERVAL=30
      # - AUTOHEAL_CONTAINER_LABEL=all
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro


qdm12 commented Jul 20, 2021

Another solution not requiring a program to access the docker.sock (⚠️ it scares the hell out of me to let anything access it 😄):

Define for each connected container the healthcheck nslookup github.com || kill 1, which means: kill process ID 1 (the parent process of any container) if the nslookup fails. That will force the container to exit and restart (also have restart: always of course).

For example for docker-compose.yml:

healthcheck:
  test: nslookup github.com || kill 1
  interval: 5s
  timeout: 3s
  retries: 1
  start_period: 10s # it usually takes 10s for gluetun vpn to be connected

I haven't tested it yet, but feel free to try it out 😉

EDIT: For now that seems to kill it but it doesn't restart, strange.


qdm12 commented Aug 5, 2021

@TJJP I made this: https://github.com/qdm12/deunhealth to restart unhealthy containers that have the deunhealth.restart.on.unhealthy=true label 😉

It's quite similar to willfarrell/autoheal but it:

  • is based on scratch so there is no OS, reducing the attack surface
  • works without network
  • is coded in Go and uses the official Docker client libraries, which are also in Go
  • streams Docker events, so there is no check period; it automagically detects unhealthy containers at the same time as the Docker daemon does

There is also more to come in the coming days which would especially fit gluetun's use case, such as:

a trigger mechanism such that a container restart triggers other restarts.

I'll document it in the Wiki 😉
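
For reference, a minimal compose sketch of how it could be wired up (the qmcgaw/deunhealth image name and the docker.sock mount are assumptions; only the deunhealth.restart.on.unhealthy=true label comes from the description above):

  deunhealth:
    image: qmcgaw/deunhealth
    container_name: deunhealth
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  transmission:
    # ...same configuration as in the compose file above...
    labels:
      - deunhealth.restart.on.unhealthy=true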


TJJP commented Aug 6, 2021

Thank you! It is really cool. I tried it out and it works! Is there a way to have it recreate the container instead of just stopping it? Maybe there could be a label that, if a container has it, would force-recreate the container. My biggest problem with willfarrell/autoheal was that if gluetun was updated, the container would get a different container ID, and autoheal and deunhealth could not start the attached containers because Docker could not find gluetun. Example:

2021/08/06 01:10:54 ERROR failed restarting container: Error response from daemon: Cannot restart container transmission: No such container: 7acaa8ecb27f5f09f19bcbc2acef1bee74ced02b0a33c25fd4bed712183906e2

Thanks!


qdm12 commented Aug 6, 2021

Is there a way to have it recreate the container instead of just stopping it

It should restart the container, strange. Maybe this is due to your restart policy being restart: "no"? I'll do more testing over the weekend.

if gluetun was updated, the container would get a different container id and autoheal and deunhealth could not start it because the docker could not find it

Ah interesting, I didn't know they changed IDs when gluetun is recreated. I'll also do some testing; that's definitely useful for that trigger mechanism such that a container restart triggers other restarts. I'll get back with fixes 😉


TJJP commented Aug 7, 2021

It should restart the container, strange. Maybe this is due to your restart policy being restart: "no"? I'll do more testing over the weekend.

deunhealth does restart the container normally, whether my server restarts or the container gets unhealthy. That part works great 😜 I have my restart policy set to "always". My problem is when watchtower pulls down an update and attempts to restart gluetun (with a new Docker ID) as well as the other attached containers (which fails, as they can not find gluetun anymore because of the changed Docker ID). It seems that when a container is started with its network set to another container's name, Docker resolves the network to a specific Docker ID instead of just the name. That ID is only updated if each dependent container is recreated, but watchtower only restarts them and does not recreate them.
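
A quick way to see that binding (container names taken from the compose file above):

docker inspect -f '{{.HostConfig.NetworkMode}}' transmission
# typically prints container:<long gluetun container ID> rather than a name,
# which is why the attached container breaks once gluetun is recreated under a new ID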


qdm12 commented Aug 9, 2021

Ah, I get it. So restarting gluetun isn't the same as pulling a newer image and then re-creating it. I'll see what I can do; there might be a way to get the new gluetun network config and patch other containers with the new hostname, maybe even without a restart. But clearly the TODO I mentioned (trigger restarts from a restart) won't actually help here.
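
In the meantime, a manual workaround might be to force-recreate the attached containers whenever gluetun itself has been recreated, so compose resolves the new container ID (a sketch, assuming everything lives in the same docker-compose.yml as above):

docker-compose up -d --force-recreate transmission jackett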
