Bug: Issue restarting containers using network in other stack #11
I second that... That's exactly my problem. I have disabled updates for gluetun to stop my containers from dangling without a network. If that is fixable, I would be very glad!
That's really strange. So the container can no longer be found with its container ID?! I'll do some more testing. Meanwhile, I'm almost done with a cascaded restart feature, which should restart containers labeled for it when a certain container (like gluetun) starts.
Ah got it. It's because the container ID it was relying on (gluetun) disappeared. Ugh, that's also going to be problematic for my cascaded restart feature... I think the (connected) container config needs to be patched somehow, before being restarted 🤔
OK, so after some research... There is no way to know what the 'vpn' container was, since we only have its ID and it no longer exists (the name is not accessible). I guess it could stop the connected containers, but it wouldn't be able to start them again, so that's a bit pointless, sadly. Now, for my cascaded restart feature, the idea is that you would put a label on the 'connected' containers indicating the container name of the 'vpn' container. That way, this is feasible. Writing out how it should work (also for myself); a rough sketch of the idea follows below.
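To make the idea concrete, here is a minimal compose sketch of what such labeling could look like. The label name `deunhealth.restart.after.container` is purely hypothetical (the feature had not shipped at the time of this comment); only `network_mode: "service:gluetun"` and the image names reflect the kind of setup discussed in this thread.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    # Shares gluetun's network stack; this is what breaks when gluetun
    # is recreated, because the reference is resolved to a container ID.
    network_mode: "service:gluetun"
    labels:
      # Hypothetical label (illustration only): names the 'vpn' container
      # so this one can be restarted once that container comes back up.
      - deunhealth.restart.after.container=gluetun
```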
I have bits and pieces of it ready, I just need to wire everything up and try it out, but it should work fine.
So... this previous suggestion (the label-based one, solution B below) is one option, and I also came up with other solutions; here is a comparison.
Solutions comparison
Now, what solution do you prefer 😄? I'm leaning towards B.
Personally I lean towards B as well. It involves more up-front config with labels, but it allows for more explicitness about what is connected, forcing the user to make that link. Solution A, auto-monitoring and logging container information, isn't a terrific solution to me. Solution C, dropping the context of containers, seems like too much effort, and could cause issues if someone has multiple stacks with overlapping configured names over a cluster... bad practice, but it could cause a headache for someone down the line.
I pick B. I was elected to lead, not to read! (SCNR) Labels would be perfectly fine for me. It also sounds like a little less work on your side, with the labels implementation.
Another vote for option B.
+1 for option B, and do you know when it will be released?
+1 for option B
I'm working on it right now! Hopefully we will have something today 😉 EDIT (2021-12-06): still working on it, it's a bit more convoluted than I expected, code-spaghetti-wise, but it's getting there!
Note: if the 'network container' (aka the VPN) goes down and doesn't restart, there is no way to properly restart the connected containers, since the label won't be anywhere, unfortunately. I will make the program log a warning if this happens.
I'm not sure if I got this right: you are not able to restart the "child" containers if the VPN container killed itself and did not restart, right? But if the container is updated and restarts without errors, that is still possible to fix with the intended patch?
In my case, I just need to recreate the containers attached to the network container when it is recreated by watchtower. The network container is always up and running, but the other containers are orphans and cannot be restarted.
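For readers less familiar with this setup, a minimal sketch of it (service and image names are just examples): the connected container shares the network container's stack via `network_mode`, and, as described earlier in this thread, that reference ends up pointing at the network container's ID, so recreating it (for example via watchtower) leaves the connected container pointing at an ID that no longer exists.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun   # the 'network container' updated by watchtower
    cap_add:
      - NET_ADMIN

  transmission:
    image: lscr.io/linuxserver/transmission:latest
    # This ends up referencing gluetun's container ID; once watchtower
    # recreates gluetun, that ID is gone and transmission cannot start
    # again until it is recreated too.
    network_mode: "service:gluetun"
```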
Any ETA?
Has this been implemented yet? |
A little late to the party here, but I definitely also prefer option B, and I'm very excited about this feature. (Yes, my gluetun container got updated by watchtower last night and now the whole stack is down 😄)
Hello all, good news: I'm working on this again. Sorry for the immense delay in getting back to it.
Should this already be working in a current version combined with using deunhealth? |
Any update? :) |
I guess Quentin hasn't had time to implement this yet. The deunhealth log states 0 containers monitored, despite tagging several containers with the label.
I turn my mini-PC media server off every evening, so I've been able to use a shell script as a workaround.
@STRAYKR Is your deun container in the same yml as gluetun? That was my issue. The logs showed "Monitoring 0 containers" when I added the label to gluetun but deun was in its own yml. When I moved deun into the same compose yml as gluetun and qbittorrent, deun registered the labels and started monitoring the containers. I'm thinking, in my case, the issue might've been that deun couldn't reach gluetun because it wasn't on the same network.
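For anyone else hitting "Monitoring 0 containers", here is a minimal sketch of such a single combined compose file, assuming the standard deunhealth setup (Docker socket mounted, and the `deunhealth.restart.on.unhealthy` label on the containers to watch). Service names and the healthcheck are illustrative, and the healthcheck assumes curl exists in the image.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    labels:
      # deunhealth restarts this container when its healthcheck reports unhealthy.
      - deunhealth.restart.on.unhealthy=true
    healthcheck:
      # Illustrative healthcheck: starts failing once gluetun's network stack is gone.
      test: ["CMD", "curl", "-f", "https://example.com"]
      interval: 60s
      timeout: 10s
      retries: 3

  deunhealth:
    image: qmcgaw/deunhealth
    # deunhealth watches containers through the Docker API.
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```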
Hello guys.

```
2023/12/30 19:07:39 INFO container qbittorrent (image lscr.io/linuxserver/qbittorrent:latest) is unhealthy, restarting it...
2023/12/30 19:18:51 INFO container transmission (image lscr.io/linuxserver/transmission:latest) is unhealthy, restarting it...
```
Hi @NaturallyAsh, sorry for the delayed response. Yes, all config for deun and gluetun is in the same yml docker compose file; I only have the one docker compose file.
Hi guys,
I'm trying to configure everything to be automated for updates and availability using Watchtower and deunhealth. I was doing testing to see what would happen if gluetun got an update (as you know, it breaks things connected to it when it restarts). I get the following errors when stopping/restarting gluetun:
I believe that the gluetun container is the one that's referenced by that hash, so it disappears and deunhealth doesn't know how to handle it.
I don't think it's worth noting, but I am using portainer for stack management. Here are my config files of what I'm trying to do: