Backup issue when continuously inserting rows. #256
Hi @goelsatyam, thanks for reporting this! Can you please give me the logs of the init container by running:
Also, the logs of the xtrabackup tool from the pod where the backup was taken would be useful:
Note: make sure to strip sensitive data from the logs if needed. Thanks in advance!
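The exact commands were stripped from this thread. A generic way to pull init-container logs with kubectl is sketched below; the pod and container names are placeholders to fill in from your own cluster:

```
# List the pods and find the one that is failing (e.g. stuck in Init:1/2)
kubectl get pods

# Inspect the pod to see the names of its init containers
kubectl describe pod <pod-name>

# Fetch the logs of a specific init container
kubectl logs <pod-name> -c <init-container-name>
```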
Logs of init container:
I think those logs are not from the first failure; they are the logs after an earlier failure that left some files behind, so they do not reflect the real issue. Maybe the operator should run a cleanup before doing a backup restore (but this is a different issue).
I think the problem is in taking the backup. When I took the backup without running any queries in parallel, the backup size was 0.35GB for 2GB of data and 36GB for 200GB of data. In both cases I was able to successfully restore the cluster from the backup files of both the 2GB and the 200GB data.
That may be where the problem is. Do you still have the logs of the container where the backup was taken, so I can examine them?
Logs of the container where the backup was taken:
Create rclone.conf file.
@AMecea The table in which I was inserting the rows didn't have a primary key.
@AMecea I am getting the same error when I try it with a table that has a primary key.
Do you have a script to reproduce this? I need to reproduce it to inspect the logs closely and to tweak some parameters to find a solution. Also, I want to write an e2e test for this case.
You can use this Python script:
I ran this script from two different shells at the same time.
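The original script is not preserved in this thread. A minimal sketch of an insert-load generator in the same spirit is shown below; the table name, the single `data` column, and the payload size are assumptions, and the output is meant to be piped into the cluster (e.g. via the `mysql` client) from two shells at once:

```python
import random
import string


def insert_statements(table, n_rows, payload_len=256):
    """Generate INSERT statements that simulate a continuous write load.

    The schema (one text column named `data`) is an assumption for
    illustration; adapt it to the table used in the real cluster.
    """
    stmts = []
    for _ in range(n_rows):
        payload = "".join(random.choices(string.ascii_letters, k=payload_len))
        stmts.append(f"INSERT INTO {table} (data) VALUES ('{payload}');")
    return stmts


if __name__ == "__main__":
    # Run this from two shells in parallel while the backup is in
    # progress to reproduce the heavy write load described above.
    for stmt in insert_statements("t1", 1000):
        print(stmt)
```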
Thanks for the script, I ran it and I was able to reproduce the error. The logs that I was looking for are those:
Because there are too many writes and the InnoDB log files are too small, the data in the log files is flushed before xtrabackup finishes copying the files. What you can do is increase the InnoDB log file size. This is a valid bug: the backup should fail, but instead it succeeds.
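One way to apply that advice, assuming the mysql-operator's `MysqlCluster` resource exposes custom MySQL settings through a `mysqlConf` field (check the CRD for your operator version), is a cluster spec along these lines:

```
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-cluster-secret
  mysqlConf:
    # Larger redo logs give xtrabackup more headroom under heavy writes.
    innodb-log-file-size: "1G"
```

The resource name, secret name, and size value here are illustrative; the size should be tuned to the actual write rate.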
@AMecea Where can I see the xtrabackup logs?
The Xtrabackup logs are in the sidecar container of the node from where the backup was taken, and you can see them by running:
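The command itself was not preserved in this thread; fetching a sidecar container's logs follows the usual kubectl pattern (pod and container names are placeholders):

```
kubectl logs <pod-name> -c <sidecar-container-name>
```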
If I get an unexpectedly high number of insert requests and the backup ends up corrupted, can the operator detect this and mark the backup as failed?
Yes, we can do that. PR #260 will fix the issue of corrupt backups being reported with a success status.
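The actual fix lives in the linked PR, but the underlying check can be illustrated: xtrabackup prints `completed OK!` as the final line of its log on success, so a log missing that marker should be treated as a failed backup. A minimal sketch of such a validation (the function name is hypothetical, not the operator's API):

```python
def backup_log_ok(log_text: str) -> bool:
    """Heuristic success check for an xtrabackup log.

    xtrabackup ends its log with a line containing 'completed OK!' when
    the backup finished successfully; anything else means the backup
    should not be marked as succeeded.
    """
    lines = [line for line in log_text.splitlines() if line.strip()]
    return bool(lines) and lines[-1].rstrip().endswith("completed OK!")
```

A backup controller could run such a check on the tool's output before setting the backup resource's status.
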
While running insert queries against the MySQL cluster, I tried to take a backup in parallel. For 2GB of data the backup size was 43MB, and for 200GB of data the backup size was 33MB. And when I tried to restore the cluster from the backup file, it got stuck with STATUS: Init:1/2.