[k8s] Lost DB during update #3714
Comments
K8s is not officially supported, and most likely you are not pointing to the same volume after the update.
This is not a K8s issue; K8s is just how I run it. After the update I was greeted by the "Let's set your admin password" flow.
You can post the container logs to help troubleshooting.
Unfortunately I no longer have the logs, as I killed the pod to force the update. That confirms that my storage config should be fine.
Notes that may be helpful:
I cannot reproduce this with docker:
This leads me to believe that there is something else going on.
Another note: how did you upgrade the app's version exactly?
With k8s, if you kill a pod (that was created by a Deployment), a new pod gets created as a replacement. So to upgrade, I simply killed the old pod. Storage is handled by k8s, and as long as the storage lives outside the container (i.e. a mounted volume), k8s makes sure the old pod is down before the new one takes over the storage.
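For reference, the setup described above assumes the data directory is backed by a PersistentVolumeClaim rather than the container filesystem. A minimal sketch (the PVC name and labels are illustrative; `/app/data` is Uptime Kuma's documented data directory):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
spec:
  replicas: 1
  strategy:
    type: Recreate          # old pod must release the volume before the new one starts
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          volumeMounts:
            - name: data
              mountPath: /app/data          # the SQLite DB lives here
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-data     # illustrative PVC name
```

With `strategy: Recreate`, deleting the pod (or bumping the image tag) replaces it while the same PVC is re-mounted, so the database survives the restart as long as the mount path is unchanged.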
This is a fair question, but this is not the case, and the test I made confirms it is not what is happening. Killing my current pod does trigger the same process I described: it pulls the latest image (which is still the same version) and starts again.
Some directions:
Here are the sizes already:
You should be using https://github.com/louislam/uptime-kuma/releases/tag/1.23.0. In your case, you should:
Thanks @louislam, this does explain my issue, and thanks to your explanation I could restore all my previous data. After adding the trailing ❤️
🛡️ Security Policy
Description
There are similar issues, but they are closed, and I just ran into the problem.
Similar issue: #2778
I would have preferred the app to stop and ask rather than reset everything.
This also leads to a dead end for the user since:
Deprecated: Since a lot of features were added and this backup feature is a bit unmaintained, it cannot generate or restore a complete backup.
👟 Reproduction steps
Stop old version
Start new version
👀 Expected behavior
The DB is not wiped
😓 Actual Behavior
The DB is wiped/reset
🐻 Uptime-Kuma Version
1.23.1
💻 Operating System and Arch
K8S
🌐 Browser
brave
🐋 Docker Version
K8S 1.27.2
🟩 NodeJS Version
n/a
📝 Relevant log output
No response