feat: calc the backup size from snapshot storage usage #4819
Conversation
[REVIEW NOTIFICATION] This pull request has been approved.
Codecov Report
Additional details and impacted files:

@@ Coverage Diff @@
## master #4819 +/- ##
==========================================
+ Coverage 59.93% 61.29% +1.35%
==========================================
Files 224 230 +6
Lines 25393 28701 +3308
==========================================
+ Hits 15220 17591 +2371
- Misses 8727 9582 +855
- Partials 1446 1528 +82
This pull request has been accepted and is ready to merge. Commit hash: e963c3d
* feat: support tiflash backup and restore during volume snapshot (#4812)
* feat: calc the backup size from snapshot storage usage (#4819)
* fix backup failed when pod was auto restarted by k8s (#4883)
  * init code for test
  * just clean before backup data
  * delete test code
  * import pingcap/errors
  * add check version
  * remove test code
  * add running status check
  * add restart condition to clarify logic
  * fix status update
  * fix ut
* br: ensure pvc names sequential for ebs restore (#4888)
* BR: Restart backup when backup job/pod unexpected failed by k8s (#4895)
  * init code for test
  * just clean before backup data
  * delete test code
  * import pingcap/errors
  * add check version
  * remove test code
  * add running status check
  * add restart condition to clarify logic
  * fix status update
  * fix ut
  * init code
  * update crd reference
  * fix miss update retry count
  * add retry limit as constant
  * init runnable code
  * refine main controller logic
  * add some note
  * address some comments
  * init e2e test code
  * add e2e env to extend backup time
  * add e2e env for test
  * fix complie
  * just test kill pod
  * refine logic
  * use pkill to kill pod
  * fix reconcile
  * add kill pod log
  * add more log
  * add more log
  * try kill pod only
  * wait and kill running backup pod
  * add wait for pod failed
  * fix wait pod running
  * use killall backup to kill pod
  * use pkill -9 backup
  * kill pod until pod is failed
  * add ps to debug
  * connect commands by semicolon
  * kill pod by signal 15
  * use panic simulate kill pod
  * test all kill pod test
  * remove useless log
  * add original reason of job or pod failure
  * rename BackupRetryFailed to BackupRetryTheFailed
* BR: Auto truncate log backup in backup schedule (#4904)
  * init schedule log backup code
  * add run log backup code
  * update api
  * refine some nodes
  * refine cacluate logic
  * add ut
  * fix make check
  * add log backup test
  * refine code
  * fix notes
  * refine function names
  * fix conflict
* fix: add a new check for encryption during the volume snapshot restore (#4914)
* br: volume-snapshot may lead to a panic when there is no block change between two snapshot (#4922)
* br: refine BackoffRetryPolicy time format (#4925)
  * refine BackoffRetryPolicy time format
  * fix some ut

---------
Co-authored-by: fengou1 <85682690+fengou1@users.noreply.github.com>
Co-authored-by: WangLe1321 <wangle1321@163.com>
What problem does this PR solve?
A volume-snapshot backup currently reports the total volume size, not the actual size of the backup snapshot.
Closes #4849
What is changed and how does it work?
* Full backup snapshot size
* Incremental snapshot backup size
* Snapshot deletion
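The items above boil down to a simple calculation: a snapshot's storage usage is the number of allocated blocks times the block size. For a full backup the block count would come from listing all blocks of the snapshot; for an incremental backup, from listing only the blocks changed since the previous snapshot (on AWS, the EBS direct APIs ListSnapshotBlocks and ListChangedBlocks provide these counts). The sketch below shows just the size arithmetic; the function name and the 512 KiB default block size are illustrative assumptions, not the operator's actual code.

```go
package main

import "fmt"

// calcBackupSize returns a snapshot's storage usage in bytes, given a block
// count and the block size. For a full backup, numBlocks would be the total
// number of allocated snapshot blocks; for an incremental backup, only the
// blocks changed since the parent snapshot.
func calcBackupSize(numBlocks, blockSizeBytes int64) int64 {
	return numBlocks * blockSizeBytes
}

func main() {
	// EBS snapshot blocks are 512 KiB by default (assumption for this sketch).
	const blockSize = 512 * 1024
	full := calcBackupSize(36864, blockSize) // all allocated blocks -> ~18 GiB
	incr := calcBackupSize(2048, blockSize)  // changed blocks only -> 1 GiB
	fmt.Println(full, incr)
}
```

With 512 KiB blocks, 36864 allocated blocks yield 19327352832 bytes (18 GiB), which matches the order of magnitude seen in the manual tests below.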
Code changes
Tests
Snapshot Deletion
For the first backup, check the disk usage; it is almost the same as the backup size, 18 GB.
For the second backup, before GC happened, the disk usage was almost the same as the backup size, 56 GB.
2. Delete bk-108
[root@kvm-dev snapshot]# kubectl delete bk bk-100 -ntidb-cluster
backup.pingcap.com "bk-100" deleted
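The size checks above compare a byte count against `df`-style disk usage reported in gigabytes. A minimal conversion helper makes that comparison mechanical; `readableSize` is a hypothetical name for illustration, assuming sizes round to whole GiB.

```go
package main

import "fmt"

// readableSize formats a byte count as a rounded gigabyte string, so a
// computed snapshot size can be compared against disk usage figures like
// the 18 GB and 56 GB observed in the tests above.
func readableSize(bytes int64) string {
	const gib = 1 << 30 // 1 GiB in bytes
	return fmt.Sprintf("%.0f GB", float64(bytes)/float64(gib))
}

func main() {
	fmt.Println(readableSize(19327352832)) // the ~18 GiB full backup
	fmt.Println(readableSize(60129542144)) // the ~56 GiB second backup
}
```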
Backup ongoing with backup deletion
Scenario#1
[ec2-user@ip-192-168-12-238 ~]$ tiup bench tpcc -H a8ba05c1226e74f7ab3d1cd88053e0bc-78ca7f679ece5bc9.elb.us-west-2.amazonaws.com -T 8 -P 4000 -D tpcc --warehouses 6000 prepare
Scenario#2
[ec2-user@ip-192-168-12-238 ~]$ tiup bench tpcc -H a8ba05c1226e74f7ab3d1cd88053e0bc-78ca7f679ece5bc9.elb.us-west-2.amazonaws.com -T 8 -P 4000 -D tpcc --warehouses 6000 prepare
Scenario#3
[ec2-user@ip-192-168-12-238 ~]$ tiup bench tpcc -H a8ba05c1226e74f7ab3d1cd88053e0bc-78ca7f679ece5bc9.elb.us-west-2.amazonaws.com -T 8 -P 4000 -D tpcc --warehouses 6000 prepare
Notice:
Refer
Side effects
Related changes
Release Notes
Please refer to the Release Notes Language Style Guide before writing the release note.