
Feature request: Self update to avoid Docker restarts #433

Open
qdm12 opened this issue Apr 20, 2021 · 22 comments

@qdm12
Owner

qdm12 commented Apr 20, 2021

What's the feature? 🧐

  • Self-update the program, probably via a wrapper entrypoint program that downloads and runs it (rough sketch at the end of this comment)
  • Eventually update Alpine packages too
  • Only do that for the :latest image
  • Add an HTTP route to trigger an update
  • Record the Alpine dependencies in the main program (Alpine version and program version) and have the entrypoint updater read them
  • Add crypto verification (hash + signature)

Optional extra information 🚀

  • Maybe do it as a separate project so it can be used elsewhere
  • This would solve the hassle of restarting connected containers: we really don't want this container to ever go down, because its network stack is used by others.
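
For illustration only, a very rough shell sketch of what the wrapper entrypoint could look like; every URL, path and name here is a hypothetical placeholder, not a design decision:

```sh
#!/bin/sh
# Hypothetical wrapper entrypoint sketch. All URLs and paths are
# illustrative placeholders, not an actual design.
set -e

BINARY_URL="https://example.com/gluetun-latest"        # placeholder
SHA256_URL="https://example.com/gluetun-latest.sha256" # placeholder

update() {
  wget -q -O /tmp/gluetun.new "$BINARY_URL"
  wget -q -O /tmp/gluetun.sha256 "$SHA256_URL"
  # Hash verification; a real design would also verify a signature.
  # Assumes the .sha256 file contains only the hex digest.
  echo "$(cat /tmp/gluetun.sha256)  /tmp/gluetun.new" | sha256sum -c -
  mv /tmp/gluetun.new /gluetun
  chmod +x /gluetun
}

update
while true; do
  /gluetun "$@" || true  # run the real program as a child process
  update                 # child exited (e.g. on an update trigger): refresh the binary
done
```

Keeping the wrapper running as PID 1 and re-running the child means the container itself never stops, so connected containers keep their network stack.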
@qdm12 qdm12 self-assigned this Apr 20, 2021
@CallMeTerdFerguson

Is this planned to be a configurable option we can turn off, or better yet, a separate build we can avoid altogether? While the convenience of not having to restart dependent containers is certainly attractive, I have serious concerns about the software stack in my VPN layer changing itself without my intervention, or potentially even my knowledge. The VPN is one of the few containers I specifically DON'T let watchtower auto-upgrade to latest; I pin it to a specific version instead, because I want to review and control any changes to my security and privacy controls. It's not my intent to start any kind of holy war, but building mutable containers also seems to go against generally accepted container best practices.

@qdm12
Owner Author

qdm12 commented Apr 27, 2021

Totally agree, this will obviously be optional 👍 Note that it will only be active for :latest images, not for tagged images. It will also be disabled by default, as I tend to break the :latest image every now and then myself. But some people (me included, I guess) don't like pulling and restarting all the connected containers, so this would be a good alternative. It could also be triggered manually through the HTTP control server.
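
For illustration, a manual trigger could eventually look something like this; the route is entirely hypothetical, nothing is implemented yet:

```sh
# Hypothetical: trigger a self-update through gluetun's HTTP control
# server (default port 8000). The /v1/updater/run route is invented
# here for illustration and does not exist at the time of writing.
curl -X PUT http://localhost:8000/v1/updater/run
```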

@Altycoder

Is this still planned? My linked qbittorrent container goes down whenever I update gluetun (like every day or two!) and can't be brought back to life via watchtower; I have to log into my server via ssh and remove and recreate it. Then sonarr, radarr etc. complain that qbittorrent is missing, so I have to restart them too.

Not having so many knock-on effects from recreating gluetun would be very worthwhile IMHO!

@qdm12
Owner Author

qdm12 commented Jul 26, 2021

Yes, still planned, but 0% progress for now. It might be canceled if it gets solved another way (see below).

  1. Careful auto-updating latest with watchtower, I tend to break it from time to time 😄
  2. Are sonarr, radarr, etc. using gluetun as their network stack? If not, why do they need to be restarted?
  3. What's your host OS?

I'm working on another container, deunhealth, to restart unhealthy containers (similar to https://github.com/willfarrell/docker-auto-healing but faster and smaller, streaming events from the Docker socket), with gluetun-connected containers as the target use case. I'm also thinking of having it inject a tiny program (e.g. /tmp/connectiontest) into containers labeled for it, to run a DNS lookup to github.com to verify they have connectivity, and restart them if they don't. That would solve the connected-containers problem even when they have no healthcheck, are based on scratch, or the end user is too lazy to configure a custom connectivity healthcheck. What do you think?
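
The event-streaming part can be sketched from the shell with the Docker CLI (deunhealth itself uses the Go client libraries, so this is only a minimal illustration of the idea, not its implementation):

```sh
# Stream health_status events from the Docker daemon and restart any
# container that turns unhealthy; no polling interval involved.
docker events --filter event=health_status \
  --format '{{.Actor.Attributes.name}} {{.Status}}' |
while read -r name status; do
  case "$status" in
    *unhealthy*) docker restart "$name" ;;
  esac
done
```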

@CallMeTerdFerguson

I'm not who you asked, but since I'm in this issue as being against the mutable-container approach, I thought I would add that I personally love that idea. It solves the problem you're trying to resolve in this issue without making it harder to trace what the container's runtime is. Additionally, it would be generally useful even outside this scenario. Love the idea.

@Altycoder

> Yes, still planned, but 0% progress for now. It might be canceled if it gets solved another way (see below).
>
>   1. Careful auto-updating latest with watchtower, I tend to break it from time to time 😄
>   2. Are sonarr, radarr, etc. using gluetun as their network stack? If not, why do they need to be restarted?
>   3. What's your host OS?

  1. Not sure what you mean by this?
  2. Sonarr and Radarr use qbittorrent as their download client, so when it goes down they raise an error which doesn't reset when qbittorrent comes back up; it's a manual intervention.
  3. I'm using Arch, but with the LTS kernel, as my host OS.

@qdm12
Owner Author

qdm12 commented Jul 26, 2021

  1. As in, qmcgaw/gluetun:latest can break features from time to time due to me coding like a madman 😄 You can use release image tags like v3.20.0 and update from release tag to release tag (every 2-3 weeks); those are meant to be more stable. Although I'm happy to have you use latest, since it gives me quicker feedback on my mistakes!
  2. OK, you might want to create an issue on their repo then. A program should keep retrying the connection after a failure (with an increasing backoff period). That's what's done everywhere in gluetun, for example; it doesn't just "give up" if it fails 😉 But then this feature makes sense too in that niche situation.
  3. Cool, so a shell script is an option. For now you could have a shell script running periodically with cron which updates gluetun, then restarts qbittorrent, then sonarr + Radarr; see the rough sketch below. Let me know if you want help coding it. And obviously disable watchtower for gluetun.
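
Something along these lines; the compose project path and service names are placeholders for whatever your setup uses:

```sh
#!/bin/sh
# Hypothetical cron script: update gluetun, then bring the containers
# using its network stack back up. Paths and names are placeholders.
set -e
cd /path/to/your/compose/project  # placeholder

docker-compose pull gluetun
docker-compose up -d gluetun                       # recreate gluetun with the new image
docker-compose up -d --force-recreate qbittorrent  # re-attach to the new gluetun
docker restart sonarr radarr                       # clear their download client errors
```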

@Altycoder

  1. Ah right, OK, gotcha. What's the risk in jumping a few versions, say every 1-2 months, though?
  2. Yes, might do that, but I'd be surprised if someone hasn't already, as they've been around for a while now.
  3. No problem, I can do that. I've got one docker compose stack with ~20 services and I mainly use portainer day to day to restart/recreate etc. I only ssh into my server to use the docker command line when I need to (although with Arch there are usually 2-3 kernel updates/week, so I'm ssh'ing in pretty regularly anyway).

@qdm12
Owner Author

qdm12 commented Jul 26, 2021

@agrider

Sorry I missed your comment. Thanks! I'm slowly working on it, and it should be out relatively soon; I'll ping back here once it's there. Personally, I don't like containers accessing the Docker socket / injecting binaries into other containers / restarting other containers (paranoia, I guess), but I would also use it... convenience > security, I guess... 😄

@Altycoder

  1. Jumping every release image tag is a good idea imo. You can subscribe to the repo, click on Custom and subscribe to releases only, to get emailed when there is a new release. There are almost always bug fixes, and you might miss out on fancy new features 😄 I keep the program backwards-compatible across all the v3.x.x releases, so you should not have to worry about breaking changes until there is a v4.x.x release (not planned anytime soon).
  2. Oh man, you upgrade your kernel? I do that once a year max 🤣 You might want to split your docker compose stack, perhaps? For example, I have a gluetun stack with gluetun and deluge only, and it's easy to update and restart both without having to restart the world.

@CallMeTerdFerguson

@qdm12 It's definitely a balancing act, to be sure. I'd recommend checking out docker-socket-proxy if you end up going down the deunhealth path. It gives a good balance between convenience and security when mounting the Docker socket; a rough example follows.
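
Roughly like this, using Tecnativa's docker-socket-proxy image; the environment variables are as I recall them from its README, so double-check there:

```sh
# Expose a filtered Docker API allowing only what a restarter needs:
# listing containers and POSTing restarts.
docker run -d --name docker-socket-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e CONTAINERS=1 \
  -e POST=1 \
  -e ALLOW_RESTARTS=1 \
  tecnativa/docker-socket-proxy

# A tool like deunhealth would then point at tcp://docker-socket-proxy:2375
# instead of mounting the raw socket itself.
```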

@qdm12
Owner Author

qdm12 commented Jul 26, 2021

Ha yes, I even developed something similar in Go a few years ago. But the problem is that you then need to trust that proxy 😆 Although it does get useful if you have many containers using the Docker socket. I think signed releases and/or building it yourself (with docker build https://github.com/qdm12/deunhealth.git#v0.1.0) is the best bet safety-wise.

@kubax

kubax commented Jul 26, 2021

Just a side note: autoheal only restarts containers, it does not recreate them. So after watchtower updates gluetun, the other containers need to be recreated instead of just restarted.

If you are building in that direction, that would be awesome!!

@qdm12
Owner Author

qdm12 commented Jul 26, 2021

@kubax Are you sure? They don't need to be recreated in my experience. Why would they? Although they might need to be stopped, removed and started instead of just restarted; that's a fair point. See the commands below for the distinction.
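
Concretely, with qbittorrent as an example name:

```sh
# Plain restart: keeps the same container, which may still reference the
# old gluetun container's network namespace after gluetun was recreated.
docker restart qbittorrent

# Stop + remove + start fresh: the new container resolves gluetun again.
docker rm -f qbittorrent
docker-compose up -d qbittorrent  # or your original docker run command
```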

@kubax

kubax commented Jul 26, 2021

In my experience a restart results in them not finding the attached network anymore. Stopping, removing and starting might also work...

@qdm12
Owner Author

qdm12 commented Aug 5, 2021

> In my experience a restart results in them not finding the attached network anymore. Stopping, removing and starting might also work...

Actually I tested that, and it seems to work when you do a docker restart connectedContainer. I'll test it more; maybe it's slightly different in corner cases.

@agrider I made this: https://github.com/qdm12/deunhealth to restart unhealthy containers that have the deunhealth.restart.on.unhealthy=true label 😉

It's quite similar to willfarrell/autoheal, but it:

  • is based on scratch, so there is no OS, reducing the attack surface
  • works without network
  • is coded in Go and uses the official Docker client libraries, which are also in Go
  • streams events, so there is no check period: it automagically detects unhealthy containers at the same time as the Docker daemon does

There is also more to come in the coming days, which would especially fit gluetun's use case, such as:

a trigger mechanism such that a container restart triggers other restarts
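
Assuming the image is published as qmcgaw/deunhealth (mirroring qmcgaw/gluetun; check the repo's README to confirm), usage could look like this; the healthcheck command assumes nslookup is available inside the image, and the qbittorrent image is just an example:

```sh
# Run deunhealth itself; it needs the Docker socket to stream events.
docker run -d --name deunhealth \
  -v /var/run/docker.sock:/var/run/docker.sock \
  qmcgaw/deunhealth

# Opt a connected container in with the label, plus a connectivity
# healthcheck so "unhealthy" means "lost the tunnel".
docker run -d --name qbittorrent \
  --network container:gluetun \
  --label deunhealth.restart.on.unhealthy=true \
  --health-cmd "nslookup github.com || exit 1" \
  --health-interval 60s \
  lscr.io/linuxserver/qbittorrent
```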

@nhubert

nhubert commented Jan 28, 2022

Hey @qdm12, any progress on that one? I am having the same issue where watchtower updates gluetun and then deluge stops working, even after a restart. I get a "No such container blabla" error message. Like @kubax, I need to recreate the containers in order to get the network connection back up.

I am thinking of excluding gluetun and deluge from watchtower. Do you have any suggestion on the current best approach for this issue?

Thank you

@qdm12
Owner Author

qdm12 commented Jan 28, 2022

I believe you can update connected containers; it's just that updating gluetun disconnects them. I personally just do docker-compose down && docker-compose pull && docker-compose up -d on my gluetun stack. I'm still working on deunhealth, but I had other priorities here and there, so it was paused; I'll get to it again soon.

@nhubert

nhubert commented Jan 28, 2022

This issue conversation captures the whole problem: qdm12/deunhealth#11

Thank you @qdm12, it's not urgent. Thanks for the time you're putting into maintaining this.

@weirlive

Any progress here? Can I just skip updating gluetun?

@qdm12
Owner Author

qdm12 commented Sep 12, 2022

Slowly working on it (snail speed). But yes, you can skip updating gluetun. I personally only update it manually every month or so, together with the associated docker images in its stack.

@seth100

seth100 commented Jun 22, 2023

I look forward to having this feature! Thanks

@codemistake

+1
