feat: calc the backup size from snapshot storage usage #4819

Merged
15 commits merged into pingcap:master on Feb 3, 2023

Conversation


fengou1 commented Jan 3, 2023

What problem does this PR solve?

Volume-snapshot backup currently reports the total volume size, not the actual backup snapshot size.

  1. Use the EBS direct APIs to fetch snapshot storage usage (done)
  2. The volume-snapshot backup size then aligns with AWS snapshot fees

Closes #4849

What is changed and how does it work?

  • Full backup snapshot size

    • computed from the snapshot's allocated blocks: backupSize = number of blocks × block size
  • Incremental snapshot backup size

    • backupSize = sum(ListChangedBlocks(vol1preSnapshot, vol1CurSnapshot) + ListChangedBlocks(vol2preSnapshot, vol2CurSnapshot) + ...)
  • Delete snapshot

    • recalculate the next backup size and update it (see the sketch after this list)
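
Below is a minimal sketch of the per-volume size calculation described above; it is not the PR's actual implementation. It assumes the AWS SDK for Go v1 EBS direct APIs (ListSnapshotBlocks for the full backup size, ListChangedBlocks for the incremental size); the cluster-level backup size is then the sum of these per-volume results. Pagination is handled, but retries and snapshot-state checks are omitted, and the snapshot IDs in main are placeholders.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ebs"
)

// fullSnapshotSize approximates a full backup's size for one volume:
// number of allocated blocks in the snapshot × block size.
func fullSnapshotSize(api *ebs.EBS, snapshotID string) (int64, error) {
    var numBlocks, blockSize int64
    input := &ebs.ListSnapshotBlocksInput{SnapshotId: aws.String(snapshotID)}
    for {
        out, err := api.ListSnapshotBlocks(input)
        if err != nil {
            return 0, err
        }
        numBlocks += int64(len(out.Blocks))
        blockSize = aws.Int64Value(out.BlockSize)
        if out.NextToken == nil {
            break
        }
        input.NextToken = out.NextToken
    }
    return numBlocks * blockSize, nil
}

// incrementalSnapshotSize approximates an incremental backup's size for one
// volume: blocks changed between the previous and the current snapshot × block size.
func incrementalSnapshotSize(api *ebs.EBS, prevSnapshotID, curSnapshotID string) (int64, error) {
    var numBlocks, blockSize int64
    input := &ebs.ListChangedBlocksInput{
        FirstSnapshotId:  aws.String(prevSnapshotID),
        SecondSnapshotId: aws.String(curSnapshotID),
    }
    for {
        out, err := api.ListChangedBlocks(input)
        if err != nil {
            return 0, err
        }
        numBlocks += int64(len(out.ChangedBlocks))
        blockSize = aws.Int64Value(out.BlockSize)
        if out.NextToken == nil {
            break
        }
        input.NextToken = out.NextToken
    }
    return numBlocks * blockSize, nil
}

func main() {
    sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
    api := ebs.New(sess)
    // Placeholder snapshot IDs for illustration only.
    size, err := incrementalSnapshotSize(api, "snap-previous", "snap-current")
    if err != nil {
        panic(err)
    }
    fmt.Printf("backup size %d bytes\n", size)
}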

Code changes

  • Has Go code change
  • Has CI related scripts change

Tests

  • Unit test
  • E2E test
  • Manual test

Snapshot Deletion

  1. Take 10 snapshots

For the first backup, the disk usage is almost the same as the reported backup size of 18 GB.
For the second backup, taken before GC happened, the disk usage is almost the same as the reported backup size of 56 GB.
2. Delete bk-108

  3. Delete bk-101 and recalculate all backup sizes (this required a code change to recalculate every backup size)
  4. Delete the first backup bk-100
    [root@kvm-dev snapshot]# kubectl delete bk bk-100 -ntidb-cluster
    backup.pingcap.com "bk-100" deleted
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-102   full   volume-snapshot   Complete   s3://ebs-west-2/snap_102   241 GB       438522914072690691                      21h
bk-103   full   volume-snapshot   Complete   s3://ebs-west-2/snap_103   182 GB       438523004997337096                      21h
bk-104   full   volume-snapshot   Complete   s3://ebs-west-2/snap_104   105 GB       438523044646617096                      21h
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   138 GB       438523096354258949                      21h
bk-106   full   volume-snapshot   Complete   s3://ebs-west-2/snap_106   264 GB       438523194330841094                      21h
bk-108   full   volume-snapshot   Complete   s3://ebs-west-2/snap_108   52 GB        438523298755641346                      21h
bk-109   full   volume-snapshot   Complete   s3://ebs-west-2/snap_109   32 MB        438523729117708289                      20h
  5. Delete the last backup bk-109
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-102   full   volume-snapshot   Complete   s3://ebs-west-2/snap_102   241 GB       438522914072690691                      22h
bk-103   full   volume-snapshot   Complete   s3://ebs-west-2/snap_103   182 GB       438523004997337096                      21h
bk-104   full   volume-snapshot   Complete   s3://ebs-west-2/snap_104   105 GB       438523044646617096                      21h
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   138 GB       438523096354258949                      21h
bk-106   full   volume-snapshot   Complete   s3://ebs-west-2/snap_106   264 GB       438523194330841094                      21h
bk-108   full   volume-snapshot   Complete   s3://ebs-west-2/snap_108   52 GB        438523298755641346                      21h
  6. Delete multiple backups at once
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-103   full   volume-snapshot   Complete   s3://ebs-west-2/snap_103   389 GB       438523004997337096                      21h
bk-104   full   volume-snapshot   Complete   s3://ebs-west-2/snap_104   105 GB       438523044646617096                      21h
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   138 GB       438523096354258949                      21h
bk-106   full   volume-snapshot   Complete   s3://ebs-west-2/snap_106   264 GB       438523194330841094                      21h
[root@kvm-dev snapshot]# kubectl delete bk-103 bk-104 bk-106 -ntidb-cluster
error: the server doesn't have a resource type "bk-103"
[root@kvm-dev snapshot]# kubectl delete bk bk-103 bk-104 bk-106 -ntidb-cluster                                                                                                                                                           
backup.pingcap.com "bk-103" deleted
backup.pingcap.com "bk-104" deleted
backup.pingcap.com "bk-106" deleted
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   575 GB       438523096354258949                      21h
  7. Delete the only remaining backup
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   575 GB       438523096354258949                      21h
[root@kvm-dev snapshot]# kubectl delete bk bk-105 -ntidb-cluster
backup.pingcap.com "bk-105" deleted
[root@kvm-dev snapshot]# kubectl get bk -ntidb-clsuter
No resources found in tidb-clsuter namespace.
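
The size jumps seen above (for example, bk-103 going from 182 GB to 389 GB once bk-102 no longer exists, and bk-105 ending up at 575 GB after the multi-delete) come from re-attributing blocks when a snapshot leaves the chain. The helper below is only a rough sketch of that idea, not the PR's exact recalculation policy; fullSize and incrSize stand for the per-volume helpers in the earlier sketch.

// recalcVolumeBackupSizes recomputes, for one volume, the size attributed to
// each remaining snapshot (ordered oldest to newest): the oldest is charged all
// of its allocated blocks, every later one only the blocks changed since its
// remaining predecessor.
func recalcVolumeBackupSizes(
    snapshots []string,
    fullSize func(snapshotID string) (int64, error),
    incrSize func(prevSnapshotID, curSnapshotID string) (int64, error),
) (map[string]int64, error) {
    sizes := make(map[string]int64, len(snapshots))
    for i, snap := range snapshots {
        var (
            size int64
            err  error
        )
        if i == 0 {
            // No predecessor left: charge the full allocated size.
            size, err = fullSize(snap)
        } else {
            // Otherwise charge only the delta against the new predecessor.
            size, err = incrSize(snapshots[i-1], snap)
        }
        if err != nil {
            return nil, err
        }
        sizes[snap] = size
    }
    return sizes, nil
}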

Backup deletion while backups are ongoing
Scenario#1

  1. Take volume-snapshot backups of the cluster
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-100   full   volume-snapshot   Complete   s3://ebs-west-2/snap_100   18 GB        438559073751269377                      8m35s
bk-101   full   volume-snapshot   Complete   s3://ebs-west-2/snap_101   21 GB        438559114960044033                      5m41s
bk-102   full   volume-snapshot   Running    s3://ebs-west-2/snap_102                                                        3m52s
bk-103   full   volume-snapshot   Running    s3://ebs-west-2/snap_103                                                        2m42s
bk-104   full   volume-snapshot   Running    s3://ebs-west-2/snap_104 
  2. Launch tpcc
    [ec2-user@ip-192-168-12-238 ~]$ tiup bench tpcc -H a8ba05c1226e74f7ab3d1cd88053e0bc-78ca7f679ece5bc9.elb.us-west-2.amazonaws.com -T 8 -P 4000 -D tpcc --warehouses 6000 prepare
  3. Delete the base snapshot backup while later backups are still running, and wait until the next backup is in progress with a snapshot percentage > 0
[root@kvm-dev snapshot]# kubectl delete bk bk-101 -ntidb-cluster
backup.pingcap.com "bk-101" deleted
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-100   full   volume-snapshot   Complete   s3://ebs-west-2/snap_100   18 GB        438559073751269377                      9m20s
bk-102   full   volume-snapshot   Running    s3://ebs-west-2/snap_102                                                        4m37s
bk-103   full   volume-snapshot   Running    s3://ebs-west-2/snap_103                                                        3m27s
bk-104   full   volume-snapshot   Running    s3://ebs-west-2/snap_104                                                        2m12s
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-100   full   volume-snapshot   Complete   s3://ebs-west-2/snap_100   18 GB        438559073751269377                      9m57s
bk-102   full   volume-snapshot   Complete   s3://ebs-west-2/snap_102   61 GB        438559143298072582                      5m14s
bk-103   full   volume-snapshot   Running    s3://ebs-west-2/snap_103                                                        4m4s
bk-104   full   volume-snapshot   Running    s3://ebs-west-2/snap_104
  4. Launch more backups and wait for them to finish
  5. Stop tpcc
  6. Launch the last backup
  7. Wait for the last backup to complete
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-100   full   volume-snapshot   Complete   s3://ebs-west-2/snap_100   18 GB        438559073751269377                      16m
bk-102   full   volume-snapshot   Complete   s3://ebs-west-2/snap_102   61 GB        438559143298072582                      11m
bk-103   full   volume-snapshot   Complete   s3://ebs-west-2/snap_103   34 GB        438559161726533637                      10m
bk-104   full   volume-snapshot   Complete   s3://ebs-west-2/snap_104   27 GB        438559181872562177                      9m
  8. Launch a new backup, wait until its snapshot is in progress with a percentage > 0, and then delete the last backup
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-100   full   volume-snapshot   Complete   s3://ebs-west-2/snap_100   18 GB        438559073751269377                      24m
bk-102   full   volume-snapshot   Complete   s3://ebs-west-2/snap_102   61 GB        438559143298072582                      19m
bk-103   full   volume-snapshot   Complete   s3://ebs-west-2/snap_103   34 GB        438559161726533637                      18m
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   27 GB        438559430791397377                      78s

Scenario#2

  1. Launch tpcc, and then take some backups
    [ec2-user@ip-192-168-12-238 ~]$ tiup bench tpcc -H a8ba05c1226e74f7ab3d1cd88053e0bc-78ca7f679ece5bc9.elb.us-west-2.amazonaws.com -T 8 -P 4000 -D tpcc --warehouses 6000 prepare
  2. Stop tpcc and wait for all backups to complete
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-100   full   volume-snapshot   Complete   s3://ebs-west-2/snap_100   18 GB        438559073751269377                      46m
bk-102   full   volume-snapshot   Complete   s3://ebs-west-2/snap_102   61 GB        438559143298072582                      41m
bk-103   full   volume-snapshot   Complete   s3://ebs-west-2/snap_103   34 GB        438559161726533637                      40m
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   27 GB        438559430791397377                      23m
  3. Launch tpcc again, wait a few minutes, and then take snapshot backups. Wait until a snapshot is in progress with a percentage > 0, and then delete all previous backups.
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-100   full   volume-snapshot   Complete   s3://ebs-west-2/snap_100   18 GB        438559073751269377                      46m
bk-102   full   volume-snapshot   Complete   s3://ebs-west-2/snap_102   61 GB        438559143298072582                      41m
bk-103   full   volume-snapshot   Complete   s3://ebs-west-2/snap_103   34 GB        438559161726533637                      40m
bk-105   full   volume-snapshot   Complete   s3://ebs-west-2/snap_105   27 GB        438559430791397377                      23m
[root@kvm-dev snapshot]# vi backup.yaml 
[root@kvm-dev snapshot]# kubectl apply -f backup.yaml -ntidb-cluster
backup.pingcap.com/bk-106 created
[root@kvm-dev snapshot]# kubectl delete bk bk-100 bk-102 bk-103 bk-105 -ntidb-cluster
backup.pingcap.com "bk-100" deleted
backup.pingcap.com "bk-102" deleted
backup.pingcap.com "bk-103" deleted
backup.pingcap.com "bk-105" deleted
  4. Wait for the backup to finish and check the backup size
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-106   full   volume-snapshot   Complete   s3://ebs-west-2/snap_106   201 GB       438559871219269634                      3m40s

Scenario#3

  1. Use tpcc to load some data
    [ec2-user@ip-192-168-12-238 ~]$ tiup bench tpcc -H a8ba05c1226e74f7ab3d1cd88053e0bc-78ca7f679ece5bc9.elb.us-west-2.amazonaws.com -T 8 -P 4000 -D tpcc --warehouses 6000 prepare
  2. Take a snapshot backup of the cluster
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-106   full   volume-snapshot   Complete   s3://ebs-west-2/snap_106   201 GB       438559871219269634                      3m40s
  3. Delete the existing backup
[root@kvm-dev snapshot]# kubectl delete bk bk-106 -ntidb-cluster
backup.pingcap.com "bk-106" deleted
  4. Take a snapshot backup of the cluster again and wait for it to complete
[root@kvm-dev snapshot]# kubectl apply -f backup.yaml -ntidb-cluster
backup.pingcap.com/bk-107 created

[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-107   full   volume-snapshot   Complete   s3://ebs-west-2/snap_107   250 GB       438559993627410433                      5m44s

Notice:

  1. The backup is still incremental in Scenario#3
I0106 02:56:17.913201       9 backup.go:262] [2023/01/06 02:56:17.913 +00:00] [INFO] [backup_ebs.go:221] ["async snapshots finished."]
I0106 02:56:17.913302       9 backup.go:262] [2023/01/06 02:56:17.913 +00:00] [INFO] [progress.go:160] [progress] [step=backup] [progress=100.00%] [count="300 / 300"] [speed="6 p/s"] [elapsed=2m47s] [remaining=2m47s]
  2. The backup is complete
[root@kvm-dev snapshot]# kubectl get bk -ntidb-cluster
NAME     TYPE   MODE              STATUS     BACKUPPATH                 BACKUPSIZE   COMMITTS             LOGTRUNCATEUNTIL   AGE
bk-107   full   volume-snapshot   Complete   s3://ebs-west-2/snap_107   250 GB       438559993627410433                      5m44s
Logs
I0106 02:56:18.308034       9 util.go:527] the snapshot snap-0fbc3d27637e59fec created for volume vol-071caa1d6db4d1e02
I0106 02:56:18.308051       9 util.go:527] the snapshot snap-0531d91c2fab3c47f created for volume vol-0a73fe9974fd99bac
I0106 02:56:18.308057       9 util.go:527] the snapshot snap-07f53e86f0272245e created for volume vol-006bcb13d9088f24f
I0106 02:56:20.494551       9 util.go:634] full backup snapshot num block 156369, block size 524288
I0106 02:56:22.718710       9 util.go:634] full backup snapshot num block 158039, block size 524288
I0106 02:56:24.957606       9 util.go:634] full backup snapshot num block 161736, block size 524288
I0106 02:56:24.957628       9 util.go:596] backup size 249636585472 bytes
I0106 02:56:24.970369       9 backup_status_updater.go:110] Backup: [tidb-cluster/bk-107] updated successfully
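
For reference, the logged total matches the per-volume block counts times the block size: (156369 + 158039 + 161736) × 524288 bytes = 249,636,585,472 bytes, i.e. roughly 250 GB, which is the BACKUPSIZE reported for bk-107.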

Refer

  1. Delete an Amazon EBS snapshot
  2. Amazon Elastic Block Store volumes and snapshots
  3. How incremental snapshots work

Side effects

  • Breaking backward compatibility
  • Other side effects:

Related changes

  • Need to cherry-pick to the release branch
  • Need to update the documentation

Release Notes

Please refer to Release Notes Language Style Guide before writing the release note.



ti-chi-bot commented Jan 3, 2023

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • WangLe1321
  • WizardXiao

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.


codecov-commenter commented Jan 3, 2023

Codecov Report

Merging #4819 (c8f83d4) into master (0f9c997) will increase coverage by 1.35%.
The diff coverage is 0.77%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4819      +/-   ##
==========================================
+ Coverage   59.93%   61.29%   +1.35%     
==========================================
  Files         224      230       +6     
  Lines       25393    28701    +3308     
==========================================
+ Hits        15220    17591    +2371     
- Misses       8727     9582     +855     
- Partials     1446     1528      +82     
Flag       Coverage Δ
e2e        25.99% <2.04%> (?)
unittest   59.36% <0.40%> (-0.58%) ⬇️

Review threads (outdated, resolved) on:
  • cmd/backup-manager/app/backup/manager.go
  • cmd/backup-manager/app/clean/manager.go
  • cmd/backup-manager/app/util/util.go
fengou1 self-assigned this Jan 31, 2023

fengou1 commented Jan 31, 2023

/test pull-e2e-kind-tikv-scale-simultaneously


fengou1 commented Jan 31, 2023

/test pull-e2e-kind-across-kubernetes


fengou1 commented Jan 31, 2023

/test pull-e2e-kind


fengou1 commented Jan 31, 2023

/test pull-e2e-kind-br

1 similar comment

fengou1 commented Jan 31, 2023

/test pull-e2e-kind-br


fengou1 commented Feb 1, 2023

/test pull-e2e-kind-tikv-scale-simultaneously


fengou1 commented Feb 1, 2023

/test pull-e2e-kind-serial


fengou1 commented Feb 1, 2023

/merge

@ti-chi-bot

This pull request has been accepted and is ready to merge.

Commit hash: e963c3d


fengou1 commented Feb 2, 2023

/test pull-e2e-kind-tngm


fengou1 commented Feb 2, 2023

/test pull-e2e-kind-tikv-scale-simultaneously


fengou1 commented Feb 2, 2023

/test pull-e2e-kind-br


fengou1 commented Feb 2, 2023

/test pull-e2e-kind-across-kubernetes


fengou1 commented Feb 2, 2023

/test pull-e2e-kind

1 similar comment

fengou1 commented Feb 2, 2023

/test pull-e2e-kind


fengou1 commented Feb 2, 2023

/test pull-e2e-kind


fengou1 commented Feb 2, 2023

/merge


fengou1 commented Feb 2, 2023

/test pull-e2e-kind

2 similar comments

fengou1 commented Feb 2, 2023

/test pull-e2e-kind


fengou1 commented Feb 2, 2023

/test pull-e2e-kind

@ti-chi-bot

@fengou1: Your PR was out of date, I have automatically updated it for you.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.


fengou1 commented Feb 2, 2023

/test pull-e2e-kind-across-kubernetes


fengou1 commented Feb 2, 2023

/test pull-e2e-kind-serial


fengou1 commented Feb 2, 2023

/test pull-e2e-kind-basic


fengou1 commented Feb 2, 2023

/test pull-e2e-kind


fengou1 commented Feb 3, 2023

/test pull-e2e-kind-basic

2 similar comments

fengou1 commented Feb 3, 2023

/test pull-e2e-kind-basic


fengou1 commented Feb 3, 2023

/test pull-e2e-kind-basic

ti-chi-bot merged commit d50f2f3 into pingcap:master Feb 3, 2023
WizardXiao pushed a commit to WizardXiao/tidb-operator that referenced this pull request Mar 10, 2023
WizardXiao added a commit that referenced this pull request Mar 11, 2023
* feat: support tiflash backup and restore during volume snapshot (#4812)

* feat: calc the backup size from snapshot storage usage (#4819)

* fix backup failed when pod was auto restarted by k8s (#4883)

* init code for test

* just clean before backup data

* delete test code

* import pingcap/errors

* add check version

* remove test code

* add running status check

* add restart condition to clarify logic

* fix status update

* fix ut

* br: ensure pvc names sequential for ebs restore (#4888)

* BR: Restart backup when backup job/pod unexpected failed by k8s (#4895)

* init code for test

* just clean before backup data

* delete test code

* import pingcap/errors

* add check version

* remove test code

* add running status check

* add restart condition to clarify logic

* fix status update

* fix ut

* init code

* update crd reference

* fix miss update retry count

* add retry limit as constant

* init runnable code

* refine main controller logic

* add some note

* address some comments

* init e2e test code

* add e2e env to extend backup time

* add e2e env for test

* fix complie

* just test kill pod

* refine logic

* use pkill to kill pod

* fix reconcile

* add kill pod log

* add more log

* add more log

* try kill pod only

* wait and kill running backup pod

* add wait for pod failed

* fix wait pod running

* use killall backup to kill pod

* use pkill -9 backup

* kill pod until pod is failed

* add ps to debug

* connect commands by semicolon

* kill pod by signal 15

* use panic simulate kill pod

* test all kill pod test

* remove useless log

* add original reason of job or pod failure

* rename BackupRetryFailed to BackupRetryTheFailed

* BR: Auto truncate log backup in backup schedule (#4904)

* init schedule log backup code

* add run log backup code

* update api

* refine some nodes

* refine cacluate logic

* add ut

* fix make check

* add log backup test

* refine code

* fix notes

* refine function names

* fix conflict

* fix: add a new check for encryption during the volume snapshot restore (#4914)

* br: volume-snapshot may lead to a panic when there is no block change between two snapshot (#4922)

* br: refine BackoffRetryPolicy time format (#4925)

* refine BackoffRetryPolicy time format

* fix some ut

---------

Co-authored-by: fengou1 <85682690+fengou1@users.noreply.github.com>
Co-authored-by: WangLe1321 <wangle1321@163.com>

Successfully merging this pull request may close these issues.

backup and restore: volume snapshot backup shall show an accurate backup size