Gluetun will not start after latest image update #2375
@qdm12 is more or less the only maintainer of this project and works on it in his free time.
Same issue here with the same error on Synology:

2024-07-28T16:51:15Z ERROR no iptables supported found: errors encountered are: iptables-nft: iptables v1.8.10 (nf_tables): Could not fetch rule set generation id: Invalid argument (exit status 4); iptables: iptables v1.8.10 (nf_tables): Could not fetch rule set generation id: Invalid argument (exit status 4)
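The error above means neither iptables flavour inside the container could talk to the host kernel. As a rough host-side check, you can probe which backend, if any, works. This is a sketch, not Gluetun's code; the wrapper names (`iptables-nft`, `iptables-legacy`) are the common distro names and may differ on NAS firmware:

```shell
#!/bin/sh
# Probe each iptables flavour and report the first one that can list rules.
# The command names below are assumptions based on common distros.
found=""
for cmd in iptables-nft iptables-legacy iptables; do
    if command -v "$cmd" >/dev/null 2>&1 && "$cmd" -L -n >/dev/null 2>&1; then
        found="$cmd"
        break
    fi
done
if [ -n "$found" ]; then
    echo "working backend: $found"
else
    echo "no working iptables backend found"
fi
```

If every candidate fails with "Invalid argument" like in the log above, the problem is the host kernel rather than the container.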
Also facing the same issue on Synology + docker-compose.
Exactly the same issue on Synology.
As far as I remember, this was related to an old kernel that Synology doesn't update. I had this in May too. Maybe that helps 👌
Same on my Synology. Thank you for noting the version that is working so I could roll back! Back to working for me as well.
Rolling back to 2285 does work. This is an easy change in the yaml file, but does anyone know of a way to change the version in use for containers set up within Container Manager on a Synology box? This has always bothered me, and it would help others who need to roll back until there's a fix in place.
Can you not set the pull name to include the :version tag in Container Manager, like you could when it was just Docker? It's been a while since I used it, but I thought you could always add :latest to anything to get the latest (or it assumes that if omitted), and if you need to roll back, you replace "latest" with the version tag number.
The tag gets assigned when the container is created by running the image. There doesn't seem to be a way to change it afterward. How did you do it when it was Docker? It was a problem back then too.
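For compose-based setups, the rollback discussed above is just a matter of pinning an explicit tag instead of the implicit :latest. A minimal sketch of the relevant part of a compose file (the pr-2285 tag is the one commenters here reported working; the rest of a real Gluetun service definition is omitted):

```yaml
services:
  gluetun:
    # Pin to a known-good tag so docker-compose / Container Manager
    # pulls exactly this image instead of whatever :latest points at.
    image: qmcgaw/gluetun:pr-2285
    cap_add:
      - NET_ADMIN
```

After editing, recreating the container (`docker-compose up -d`) pulls the pinned tag.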
Same issue on my QNAP NAS; pr-2285 works though.
Same issue on Linux Mint, resolved with v3.38.0.
Synology user here. Same issue as everyone else. pr-2285 works. DSM 7.2.1-69057 Update 5, if that helps.
I could be missing context, but this looks like a regression of the fix to this bug from May: #2256
Specifically, this commit: ddbfdc9. But I'm just a tourist in this code; no idea what else is up.
My God... I've just spent the last 3+ hours trying to figure this out, as I've been having the same exact issue starting today. I found a
Solved in 26705f5 |
Closed issues are NOT monitored, so commenting here is likely to not be seen. This is an automated comment set up because @qdm12 is the sole maintainer of this project.
For additional context, now that I've had my breakfast after fixing this 😄... After v3.38.0, I upgraded Alpine from 3.18 to 3.19... which has been quite troublesome, because... But wait, it's not over. It also turns out... Again, sorry for the turbulent latest image since v3.38.0; it's partly my fault, but I am really also upset with Alpine messing up their iptables. It's not always easy to think about all the corner cases on everyone's kernel 😄 Finally...
Please don't use
Thanks for the quick fix! |
Fix confirmed. Thanks for the quick fix and the detailed explanation of what the cause was! |
Thank you for digging into this and helping out edge cases like us!
Is this urgent?
Yes
Host OS
DSM 7.2.1 (Synology)
CPU arch
x86_64
VPN service provider
AirVPN
What are you using to run the container
Other
What is the version of Gluetun
Cannot tell as it won't start, but logs state "latest"
What's the problem 🤔
After upgrading today (7/28), the container will not start. Rebuilding the project from within Container Manager fails with the error "Failed to start. Container for service gluetun is unhealthy." Container Manager shows the gluetun container, but it is grayed out, and attempting to start it from within Container Manager does nothing.
The container cannot be reset or deleted from within Container Manager, but it can be deleted from Portainer.
Share your logs (at least 10 lines)
Share your configuration