BR: Restart backup when backup job/pod unexpected failed by k8s #4895
Conversation
…pingcap/tidb-operator into support-snapshot-backup-restart
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Codecov Report
Additional details and impacted files
@@ Coverage Diff @@
## master #4895 +/- ##
==========================================
+ Coverage 59.45% 67.91% +8.46%
==========================================
Files 227 231 +4
Lines 25828 29188 +3360
==========================================
+ Hits 15355 19824 +4469
+ Misses 9014 7845 -1169
- Partials 1459 1519 +60
I will refine the code later.
/test pull-e2e-kind pull-e2e-kind-across-kubernetes pull-e2e-kind-tikv-scale-simultaneously pull-e2e-kind-tngm
LGTM
LGTM too, but I am still a newbie with the operator.
@Ehco1996: Thanks for your review. The bot only counts approvals from reviewers and higher roles in the list, but you're still welcome to leave your comments.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
Thanks
/test pull-e2e-kind-across-kubernetes pull-e2e-kind-tikv-scale-simultaneously |
/test pull-e2e-kind-across-kubernetes |
…cap#4895)

* init code for test
* just clean before backup data
* delete test code
* import pingcap/errors
* add check version
* remove test code
* add running status check
* add restart condition to clarify logic
* fix status update
* fix ut
* init code
* update crd reference
* fix miss update retry count
* add retry limit as constant
* init runnable code
* refine main controller logic
* add some note
* address some comments
* init e2e test code
* add e2e env to extend backup time
* add e2e env for test
* fix complie
* just test kill pod
* refine logic
* use pkill to kill pod
* fix reconcile
* add kill pod log
* add more log
* add more log
* try kill pod only
* wait and kill running backup pod
* add wait for pod failed
* fix wait pod running
* use killall backup to kill pod
* use pkill -9 backup
* kill pod until pod is failed
* add ps to debug
* connect commands by semicolon
* kill pod by signal 15
* use panic simulate kill pod
* test all kill pod test
* remove useless log
* add original reason of job or pod failure
* rename BackupRetryFailed to BackupRetryTheFailed
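Many of the e2e commits above iterate on the same test technique: forcibly killing the running backup process so that Kubernetes marks the pod (and its Job) as failed, which exercises the new retry path. A minimal sketch of that technique — the namespace and pod name below are hypothetical placeholders, not values from this PR:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Kill the `backup` process inside the running backup pod with SIGKILL,
	// mirroring the `pkill -9 backup` approach the commits above settle on.
	// Namespace and pod name are hypothetical placeholders.
	cmd := exec.Command("kubectl", "exec", "-n", "backup-test", "demo-backup-pod", "--",
		"pkill", "-9", "backup")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("failed to kill backup process: %v: %s", err, out)
	}
}
```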
* feat: support tiflash backup and restore during volume snapshot (#4812)
* feat: calc the backup size from snapshot storage usage (#4819)
* fix backup failed when pod was auto restarted by k8s (#4883)
* init code for test
* just clean before backup data
* delete test code
* import pingcap/errors
* add check version
* remove test code
* add running status check
* add restart condition to clarify logic
* fix status update
* fix ut
* br: ensure pvc names sequential for ebs restore (#4888)
* BR: Restart backup when backup job/pod unexpected failed by k8s (#4895)
* init code for test
* just clean before backup data
* delete test code
* import pingcap/errors
* add check version
* remove test code
* add running status check
* add restart condition to clarify logic
* fix status update
* fix ut
* init code
* update crd reference
* fix miss update retry count
* add retry limit as constant
* init runnable code
* refine main controller logic
* add some note
* address some comments
* init e2e test code
* add e2e env to extend backup time
* add e2e env for test
* fix complie
* just test kill pod
* refine logic
* use pkill to kill pod
* fix reconcile
* add kill pod log
* add more log
* add more log
* try kill pod only
* wait and kill running backup pod
* add wait for pod failed
* fix wait pod running
* use killall backup to kill pod
* use pkill -9 backup
* kill pod until pod is failed
* add ps to debug
* connect commands by semicolon
* kill pod by signal 15
* use panic simulate kill pod
* test all kill pod test
* remove useless log
* add original reason of job or pod failure
* rename BackupRetryFailed to BackupRetryTheFailed
* BR: Auto truncate log backup in backup schedule (#4904)
* init schedule log backup code
* add run log backup code
* update api
* refine some nodes
* refine cacluate logic
* add ut
* fix make check
* add log backup test
* refine code
* fix notes
* refine function names
* fix conflict
* fix: add a new check for encryption during the volume snapshot restore (#4914)
* br: volume-snapshot may lead to a panic when there is no block change between two snapshot (#4922)
* br: refine BackoffRetryPolicy time format (#4925)
* refine BackoffRetryPolicy time format
* fix some ut

---------

Co-authored-by: fengou1 <85682690+fengou1@users.noreply.github.com>
Co-authored-by: WangLe1321 <wangle1321@163.com>
What problem does this PR solve?
Closes #4805
What is changed and how does it work?
Add a check for failed pods/jobs in the reconcile loop, and record a retry mark in the Backup CR; see the sketch below.
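A minimal sketch of that idea, not the actual tidb-operator implementation: during reconcile, detect a Job that Kubernetes marked as failed and record a retry mark so a later pass can recreate it. `RetryRecord`, `maxRetryTimes`, and `reconcileRetry` here are hypothetical stand-ins for the fields and logic this PR adds around the Backup CR status.

```go
package main

import (
	"fmt"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
)

// RetryRecord is a hypothetical retry mark kept in the Backup CR status.
type RetryRecord struct {
	RetryNum       int
	DetectFailedAt time.Time
	Reason         string
}

const maxRetryTimes = 2 // hypothetical retry limit

// jobFailed reports whether the backup Job carries a Failed condition,
// along with the reason Kubernetes recorded for the failure.
func jobFailed(job *batchv1.Job) (bool, string) {
	for _, c := range job.Status.Conditions {
		if c.Type == batchv1.JobFailed && c.Status == corev1.ConditionTrue {
			return true, c.Reason
		}
	}
	return false, ""
}

// reconcileRetry sketches the reconcile-side check: if the Job failed
// unexpectedly and the retry budget is not exhausted, append a retry mark;
// otherwise surface a terminal error.
func reconcileRetry(job *batchv1.Job, records []RetryRecord) ([]RetryRecord, error) {
	failed, reason := jobFailed(job)
	if !failed {
		return records, nil // nothing to do, the job is still healthy
	}
	if len(records) >= maxRetryTimes {
		return records, fmt.Errorf("backup job failed and retry limit %d is exhausted", maxRetryTimes)
	}
	return append(records, RetryRecord{
		RetryNum:       len(records) + 1,
		DetectFailedAt: time.Now(),
		Reason:         reason, // original reason of job/pod failure, as one commit above records
	}), nil
}

func main() {
	// Simulate a Job that Kubernetes failed (e.g. the backup pod was killed).
	job := &batchv1.Job{Status: batchv1.JobStatus{Conditions: []batchv1.JobCondition{
		{Type: batchv1.JobFailed, Status: corev1.ConditionTrue, Reason: "BackoffLimitExceeded"},
	}}}
	records, err := reconcileRetry(job, nil)
	fmt.Println(records, err)
}
```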
Code changes
Tests
Manual test for exceeding maxRetryTimes:
Exponential backoff settings:
Manual test for timeout:
Exponential backoff settings:
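The concrete backoff values used in these manual tests were not captured above. As a rough sketch of the exponential shape such a policy takes — the function and field names below are illustrative, not the PR's exact BackoffRetryPolicy API — the delay doubles on each retry until the retry limit is reached:

```go
package main

import (
	"fmt"
	"time"
)

// retryDelay doubles the delay for each retry attempt:
// minRetryDuration, 2*minRetryDuration, 4*minRetryDuration, ...
// minRetryDuration and maxRetryTimes mirror the kind of knobs an
// exponential backoff policy exposes; the names here are hypothetical.
func retryDelay(minRetryDuration time.Duration, retryNum int) time.Duration {
	return minRetryDuration << uint(retryNum-1)
}

func main() {
	const maxRetryTimes = 3
	for i := 1; i <= maxRetryTimes; i++ {
		fmt.Printf("retry %d after %s\n", i, retryDelay(5*time.Minute, i))
	}
	// Output: retry 1 after 5m0s, retry 2 after 10m0s, retry 3 after 20m0s
}
```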
Side effects
Related changes
Release Notes
Please refer to Release Notes Language Style Guide before writing the release note.