Bug: Connected containers crash randomly, can't restart #407

Closed
kromsam opened this issue Mar 16, 2021 · 4 comments
kromsam commented Mar 16, 2021

Is this urgent?: Kinda

Host OS: OMV5 on Armbian

device name: RockPro64

What VPN provider are you using: PIA

What are you using to run your container?: Docker Compose

What is the version of the program?

Running version latest built on 2021-03-15T02:16:55Z (commit de82d4e)

What's the problem?

I use gluetun for multiple containers: Lidarr, Sonarr, Radarr, Bazarr, Jackett, Transmission and Soulseek. Sometimes, though, one or more of these containers crashes randomly, and in a strange way. Normally, when containers crash, I just run docker-compose up -d and everything is up and running again.

In this case, however, I just get [container] is up to date and nothing happens. So I restart the containers manually, but this happens:

[REDACTED]:~$ docker restart lidarr sonarr bazarr transmission
Error response from daemon: Cannot restart container lidarr: No such container: 4e1e287b9ea6492083d8e76067014a9d298a69b87900c766647d549d0b84ded9
Error response from daemon: Cannot restart container sonarr: No such container: 4e1e287b9ea6492083d8e76067014a9d298a69b87900c766647d549d0b84ded9
Error response from daemon: Cannot restart container bazarr: No such container: 4e1e287b9ea6492083d8e76067014a9d298a69b87900c766647d549d0b84ded9
Error response from daemon: Cannot restart container transmission: No such container: 4e1e287b9ea6492083d8e76067014a9d298a69b87900c766647d549d0b84ded9

Then I have to manually remove the containers and run docker-compose up -d again.
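
Concretely, the manual recovery looks something like this (a sketch using the container names from the compose file below; docker rm -f force-removes the stale containers so compose can recreate them against the current vpn container):

# Force-remove the stale containers, then let compose recreate them.
docker rm -f lidarr sonarr radarr bazarr jackett transmission soulseek_web
docker-compose up -d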

This problem only occurs with containers that are connected through the gluetun container. Before I restart them, the containers show as 'up' on the command line and I can check their logs, but they have no network connection. Part of the problem seems to lie there.
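
One way to see what the connected containers still point at (a diagnostic sketch; for containers started with network_mode: "service:vpn", Docker stores the network mode as container:<full ID>):

# Print which container ID transmission's network namespace is bound to...
docker inspect -f '{{.HostConfig.NetworkMode}}' transmission
# ...and compare it against the current vpn container's ID.
docker inspect -f '{{.Id}}' vpn

If the first command prints an ID that the second one doesn't match, the reference is stale.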

Relevant parts from my compose file:

# VPN
  vpn:
    container_name: vpn
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    restart: always
    network_mode: bridge
    volumes:
      - ${APPDATA}/OpenVPN:/gluetun
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - VPNSP=${VPNSP}
      - USER=${VPNUSER}
      - PASSWORD=${VPNPASS}
      - REGION=${VPNREGION}
      - PORT_FORWARDING=on
    ports:
      - 9091:9091 # Transmission
      - 5000:5000 # Soulseek
      - 5001:5001 # Soulseek
      - 9117:9117 # Jackett
      - 6767:6767 # Bazarr
      - 8686:8686 # Lidarr
      - 7878:7878 # Radarr
      - 8989:8989 # Sonarr

# Transmission
  transmission:
    container_name: transmission
    image: linuxserver/transmission
    restart: always
    network_mode: "service:vpn"
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ${APPDATA}/Transmission:/config
      - ${DOWNLOADS}:/downloads
      - ${WATCHED}:/watch

# Soulseek
  soulseek_web:
    container_name: soulseek_web
    image: slskd/slskd
    restart: always
    network_mode: "service:vpn"
    environment:
      - SLSKD_SLSK_USERNAME=${SLSK_USER}
      - SLSKD_SLSK_PASSWORD=${SLSK_PW}
      - SLSKD_SLSK_LISTEN_PORT=${SLSK_PORT}
    volumes:
      - ${APPDATA}/Soulseek:/root
      - ${DOWNLOADS}/Soulseek:/var/slskd/downloads
      - ${MEDIA}:/var/slskd/shared

# Jackett
  jackett:
    container_name: jackett
    image: linuxserver/jackett
    restart: always
    network_mode: "service:vpn"
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ${APPDATA}/Jackett:/config
      - ${DOWNLOADS}:/downloads

# Bazarr
  bazarr:
    container_name: bazarr
    image: linuxserver/bazarr
    restart: always
    network_mode: "service:vpn"
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ${APPDATA}/Bazarr:/config
      - ${FILMS}:/movies
      - ${SERIES}:/tv

# Lidarr
  lidarr:
    container_name: lidarr
    image: linuxserver/lidarr
    restart: always
    network_mode: "service:vpn"
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ${APPDATA}/Lidarr:/config
      - ${DOWNLOADS}:/downloads
      - ${MUZIEK}:/music
      - ${MEDIA}:/media

# Radarr
  radarr:
    container_name: radarr
    image: linuxserver/radarr
    restart: always
    network_mode: "service:vpn"
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ${APPDATA}/Radarr:/config
      - ${DOWNLOADS}:/downloads
      - ${FILMS}:/movies

# Sonarr
  sonarr:
    container_name: sonarr
    image: linuxserver/sonarr
    restart: always
    network_mode: "service:vpn"
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ${APPDATA}/Sonarr:/config
      - ${DOWNLOADS}/downloads:/downloads
      - ${SERIES}:/tv

Share your logs...


2021/03/16 19:00:11 INFO OpenVPN version: 2.4.10
2021/03/16 19:00:11 INFO Unbound version: 1.10.1
2021/03/16 19:00:11 INFO IPtables version: v1.8.4
2021/03/16 19:00:11 WARN configuration: You are using the old environment variable USER, please consider changing it to OPENVPN_USER
2021/03/16 19:00:11 WARN configuration: You are using the old environment variable PASSWORD, please consider changing it to OPENVPN_PASSWORD
2021/03/16 19:00:11 INFO Settings summary below:
|--OpenVPN:
   |--Verbosity level: 1
   |--Run as root: enabled
   |--Provider:
      |--Private Internet Access settings:
         |--Network protocol: udp
         |--Regions: switzerland
         |--Encryption preset: strong
         |--Custom port: 0
         |--Port forwarding:
            |--File path: /tmp/gluetun/forwarded_port
|--DNS:
   |--Plaintext address: 1.1.1.1
   |--DNS over TLS:
      |--Unbound:
          |--DNS over TLS providers:
              |--cloudflare
          |--Listening port: 53
          |--Access control:
              |--Allowed:
                  |--0.0.0.0/0
                  |--::/0
          |--Caching: enabled
          |--IPv4 resolution: enabled
          |--IPv6 resolution: disabled
          |--Verbosity level: 1/5
          |--Verbosity details level: 0/4
          |--Validation log level: 0/2
          |--Blocked hostnames:
          |--Blocked IP addresses:
              |--127.0.0.1/8
              |--10.0.0.0/8
              |--172.16.0.0/12
              |--192.168.0.0/16
              |--169.254.0.0/16
              |--::1/128
              |--fc00::/7
              |--fe80::/10
              |--::ffff:0:0/96
          |--Allowed hostnames:
      |--Block malicious: enabled
      |--Update: every 24h0m0s
|--Firewall:
|--System:
   |--Process user ID: 1000
   |--Process group ID: 100
   |--Timezone: [REDACTED]
|--HTTP control server:
   |--Listening port: 8000
   |--Logging: enabled
|--Public IP getter:
   |--Fetch period: 12h0m0s
   |--IP file: /tmp/gluetun/ip
|--Github version information: enabled
2021/03/16 19:00:12 INFO storage: merging by most recent 7350 hardcoded servers and 7350 servers read from /gluetun/servers.json
2021/03/16 19:00:12 INFO routing: default route found: interface eth0, gateway 172.17.0.1
2021/03/16 19:00:12 INFO routing: local subnet found: 172.17.0.0/16
2021/03/16 19:00:12 INFO routing: default route found: interface eth0, gateway 172.17.0.1
2021/03/16 19:00:12 INFO routing: adding route for 0.0.0.0/0
2021/03/16 19:00:12 INFO firewall: firewall disabled, only updating allowed subnets internal list
2021/03/16 19:00:12 INFO routing: default route found: interface eth0, gateway 172.17.0.1
2021/03/16 19:00:12 INFO openvpn configurator: checking for device /dev/net/tun
2021/03/16 19:00:12 WARN TUN device is not available: open /dev/net/tun: no such file or directory
2021/03/16 19:00:12 INFO openvpn configurator: creating /dev/net/tun
2021/03/16 19:00:12 INFO firewall: enabling...
2021/03/16 19:00:12 INFO firewall: enabled successfully
2021/03/16 19:00:12 INFO http server: listening on 0.0.0.0:8000
2021/03/16 19:00:12 INFO healthcheck: listening on 127.0.0.1:9999
2021/03/16 19:00:12 INFO dns over tls: using plaintext DNS at address 1.1.1.1
2021/03/16 19:00:12 INFO firewall: setting VPN connection through firewall...
2021/03/16 19:00:12 INFO openvpn configurator: starting openvpn
2021/03/16 19:00:12 INFO openvpn: OpenVPN 2.4.10 aarch64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Jan  4 2021
2021/03/16 19:00:12 INFO openvpn: library versions: OpenSSL 1.1.1j  16 Feb 2021, LZO 2.10
2021/03/16 19:00:12 INFO openvpn: CRL: loaded 1 CRLs from file [[INLINE]]
2021/03/16 19:00:12 INFO openvpn: TCP/UDP: Preserving recently used remote address: [AF_INET]212.102.37.239:1197
2021/03/16 19:00:12 INFO openvpn: UDP link local: (not bound)
2021/03/16 19:00:12 INFO openvpn: UDP link remote: [AF_INET][REDACTED]:1197
2021/03/16 19:00:12 INFO openvpn: [zurich402] Peer Connection Initiated with [AF_INET][REDACTED]:1197
2021/03/16 19:00:14 INFO openvpn: OpenVPN ROUTE6: OpenVPN needs a gateway parameter for a --route-ipv6 option and no default was specified by either --route-ipv6-gateway or --ifconfig-ipv6 options
2021/03/16 19:00:14 INFO openvpn: OpenVPN ROUTE: failed to parse/resolve route for host/network: 2000::/3
2021/03/16 19:00:14 INFO openvpn: TUN/TAP device tun0 opened
2021/03/16 19:00:14 INFO openvpn: /sbin/ip link set dev tun0 up mtu 1500
2021/03/16 19:00:14 INFO openvpn: /sbin/ip addr add dev tun0 [REDACTED]/24 broadcast [REDACTED]
2021/03/16 19:00:14 WARN openvpn: OpenVPN was configured to add an IPv6 route over tun0. However, no IPv6 has been configured for this interface, therefore the route installation may fail or may not work as expected.
2021/03/16 19:00:14 INFO openvpn: Initialization Sequence Completed
2021/03/16 19:00:14 INFO VPN routing IP address: [REDACTED]
2021/03/16 19:00:14 INFO dns over tls: downloading DNS over TLS cryptographic files
2021/03/16 19:00:14 INFO healthcheck: healthy!
2021/03/16 19:00:17 INFO dns over tls: downloading hostnames and IP block lists
2021/03/16 19:00:18 INFO dns over tls: init module 0: validator
2021/03/16 19:00:18 INFO dns over tls: init module 1: iterator
2021/03/16 19:00:19 INFO dns over tls: start of service (unbound 1.10.1).
2021/03/16 19:00:19 INFO dns over tls: generate keytag query _ta-4a5c-4f66. NULL IN
2021/03/16 19:00:19 INFO dns over tls: generate keytag query _ta-4a5c-4f66. NULL IN
2021/03/16 19:00:19 INFO dns over tls: ready
2021/03/16 19:00:20 INFO You are running on the bleeding edge of latest!
2021/03/16 19:00:20 INFO VPN gateway IP address: [REDACTED]
2021/03/16 19:00:20 INFO port forwarding: Found persistent forwarded port data for port 44658
2021/03/16 19:00:20 INFO port forwarding: Forwarded port data expires in 33 days
2021/03/16 19:00:20 INFO port forwarding: Port forwarded is 44658 expiring in 33 days
2021/03/16 19:00:20 INFO port forwarding: Writing port to /tmp/gluetun/forwarded_port
2021/03/16 19:00:20 INFO firewall: setting allowed input port 44658 through interface tun0...
2021/03/16 19:00:22 INFO ip getter: Public IP address is [REDACTED] (Switzerland, Zurich, Zürich)


qdm12 commented Apr 11, 2021

Hi there, sorry for the long delay; it seems I lost track of this issue.

Did you manage to make it work in the end?

If not:

  1. Which container has the ID 4e1e287b9ea6492083d8e76067014a9d298a69b87900c766647d549d0b84ded9 (find it with docker ps)?
  2. Does gluetun exit, and do the other containers then stop working?

As a workaround, note that you can use docker-compose up -d --force-recreate to remove all the containers and restart them.
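
For question 1, something like the following should show whether that ID still exists anywhere; a sketch, noting that docker ps truncates IDs by default, so --no-trunc is needed to match the full 64-character ID:

# List all containers (running and stopped) with full IDs and search
# for the ID from the error message.
docker ps -a --no-trunc | grep 4e1e287b9ea6492083d8e76067014a9d298a69b87900c766647d549d0b84ded9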


qdm12 commented Apr 19, 2021

Closing due to inactivity; feel free to comment back and I'll re-open the issue.

qdm12 closed this as completed Apr 19, 2021

panomitrius commented Apr 22, 2022

I'd like to re-open this issue @qdm12. I have the exact same problem. I can answer your questions as they apply to my setup.

> Hi there, sorry for the long delay; it seems I lost track of this issue.
>
> Did you manage to make it work in the end?
>
> If not:
>
> 1. Which container has the ID `4e1e287b9ea6492083d8e76067014a9d298a69b87900c766647d549d0b84ded9` (find it with `docker ps`)?

There is no such container; the error references a container ID that doesn't exist, and that is longer than the truncated IDs docker ps prints.

> 2. Does gluetun exit, and do the other containers then stop working?

Gluetun doesn't exit for me. But this seems to happen, at least some of the time, when gluetun is updated (I update all containers regularly with Watchtower).

I've reported more details here, but this is apparently the right place to post the issue.
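
One stopgap might be to keep gluetun's container ID stable by excluding it from automatic updates; Watchtower has a standard per-container opt-out label for this. A sketch, untested in my setup (environment variables omitted for brevity):

# In compose, the exclusion label would go under the vpn service:
#   labels:
#     - com.centurylinklabs.watchtower.enable=false
# Equivalent one-off docker run:
docker run -d --name vpn --cap-add=NET_ADMIN \
  --label com.centurylinklabs.watchtower.enable=false \
  qmcgaw/gluetun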


qdm12 commented Apr 23, 2022

@panomitrius This is a well-known Docker bug. See #641 and the issues from https://github.com/qdm12/deunhealth
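
For context: containers started with network_mode: "service:vpn" store gluetun's container ID, so when gluetun is recreated with a new ID they keep pointing at the old one, hence the No such container errors. deunhealth watches Docker health events and restarts labeled containers when they turn unhealthy; a minimal sketch based on its README:

# Run deunhealth with access to the Docker socket so it can receive
# health events and restart containers.
docker run -d --name deunhealth \
  -v /var/run/docker.sock:/var/run/docker.sock \
  qmcgaw/deunhealth

# Then label each connected container so deunhealth picks it up;
# in compose:
#   labels:
#     - deunhealth.restart.on.unhealthy=true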
