Enable backups for existing cluster #182
Is there any way to enable backups for an existing cluster, or do I need to recreate the cluster with the `backupSchedule` spec? At least the README is unclear on this point.
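For reference, a minimal sketch of what adding scheduled backups to an existing MysqlCluster manifest could look like. The names and values below are assumptions, and the field names (`backupSchedule`, `backupSecretName`, `backupURL`) are taken from the operator's documentation and may differ between operator versions, so check the README of the release you run:

```yaml
# Illustrative excerpt of an existing MysqlCluster manifest with the
# backup-related fields added; re-apply it with kubectl apply -f cluster.yaml.
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster                      # hypothetical cluster name
spec:
  replicas: 2
  secretName: my-cluster-secret         # existing cluster credentials secret
  backupSchedule: "0 0 3 * * *"         # six fields: every day at 03:00:00
  backupSecretName: my-backup-secret    # credentials for the backup bucket
  backupURL: gs://my-bucket/backups/    # assumed field name and bucket URL
```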
@AMecea I am not sure, but I think scheduled backups are still not working, right? I am using the latest operator and confirmed that the backup itself works, but scheduling in the cluster does not seem to work because the cron trigger is doing nothing:
Indeed, this seems to be a bug; this part is not tested enough. I will fix this issue ASAP. Thanks for reporting it!
OK, one more thing: how are backups supposed to work? I just followed all the instructions and set up a 5-minute schedule for testing:
The schedule is formed from six fields (so for every 5 minutes you should have something like `0 */5 * * * *`). And make sure that you are using a version of the operator that contains the fix.
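For illustration, here is how the six fields line up, assuming the seconds field comes first as in common six-field cron implementations; the `backupSchedule` field name is the one discussed in this thread:

```yaml
# Six-field cron expression: second minute hour day-of-month month day-of-week
# "0 */5 * * * *" fires at second 0 of every 5th minute.
backupSchedule: "0 */5 * * * *"
```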
OK, updated the schedule; now waiting. Do I need to upgrade the clusters as well? I upgraded the Helm chart, which triggered a successful recreation of the operator, but the cluster itself stayed untouched.
Just the operator should be updated; the cluster is fine. Maybe update the schedule of the cluster if needed.
It seems like it is not yet working :(

cluster.yaml:

cluster-backup.yaml:
The secrets are also created as described in the README. A bit more debugging:

Misconfiguration or bug?
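For context, a minimal sketch of the kind of backup-credentials Secret the README refers to; the name and data keys shown here are illustrative (an S3-style backend is assumed), and the exact keys depend on the storage provider and operator version:

```yaml
# Hypothetical Secret referenced by backupSecretName; key names are
# illustrative, use the ones listed in the operator's README for your backend.
apiVersion: v1
kind: Secret
metadata:
  name: my-backup-secret
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<access-key-id>"
  AWS_SECRET_ACCESS_KEY: "<secret-access-key>"
```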
The config is fine. Can you show me some logs from the operator container related to this? Do recurrent backups work for a newly created cluster?
Also, while testing on my end I found another bug: the cleanup does not delete the right backups, it's deleting the newly created backups instead of the old ones. So don't rely on the cleanup for now. I will reopen this issue until this is solved.
Just found out that after the upgrade the whole setup is broken: Orchestrator keeps throwing error messages like:
Also the cluster lost its connection to the operator / orchestrator with:
while I never changed any passwords or secrets...
I know this issue. To fix it you can scale the cluster to 0 (replicas), then back to normal; this will re-update the orchestrator credentials. Please let me know if this works or if you encounter any problems.
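A sketch of that workaround, assuming the cluster is defined by a MysqlCluster manifest; the resource name and replica count are hypothetical, and only the relevant fields are shown:

```yaml
# Step 1: set replicas to 0 in the existing cluster manifest and re-apply it
# (for example with kubectl apply -f cluster.yaml) to scale the cluster down.
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster        # hypothetical name
spec:
  replicas: 0
---
# Step 2: once the pods are gone, restore the original replica count and apply
# again; the operator then re-registers the cluster and its credentials with
# the orchestrator.
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2             # original replica count (assumed)
```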
OK, thank you. I already re-installed the mysql-operator, but now I get the following error during cluster init 🙈

I deleted the whole namespace, including volumes, secrets and config maps.