feat(files_trashbin): Refactor expire background job to support parallel run #52825
Conversation
Force-pushed from 2a3426f to f8acd91
What will happen with the remaining files in user3's trashbin?
How would that happen?
MySQL has gone away, a DB deadlock, storage issues, or the process being killed by other means such as the OOM killer. I know this indicates some kind of infrastructure error, but we still need to account for it.
Force-pushed from f8acd91 to 45a0834
All of those should be caught. Worst case, like the job being killed, fewer than 10 users would be skipped, and they will be handled by the next round of jobs.
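To make that worst case concrete, here is a minimal sketch of the two layers of protection, assuming a hypothetical expireUserTrash callback and an injected PSR logger (neither name comes from the PR itself): catchable errors are isolated per user, while a hard kill (e.g. by the OOM killer) cannot be caught at all and is instead bounded by the 10-user chunk, which the next round of jobs picks up.

<?php

use OCP\IUser;
use Psr\Log\LoggerInterface;

// Sketch only: per-user error isolation inside a chunk. A failure for one
// user (DB gone away, deadlock, storage error) is logged and skipped, so a
// single bad user cannot abort the whole chunk. A hard kill of the process
// is NOT caught here; its damage is limited by the chunk size instead.
function expireChunk(iterable $users, LoggerInterface $logger, callable $expireUserTrash): void {
	/** @var IUser $user */
	foreach ($users as $user) {
		try {
			// Hypothetical callback doing the actual trashbin expiry
			$expireUserTrash($user);
		} catch (\Throwable $e) {
			$logger->error(
				'Trash expiry failed for user ' . $user->getUID(),
				['exception' => $e],
			);
		}
	}
}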
Let's see...
Force-pushed from 61b30ed to ed57aa2
Force-pushed from ed57aa2 to 48e6568
Force-pushed from 2daee64 to ebe5e39
Force-pushed from f996f08 to 2b8c7c4
Force-pushed from 8ec0e54 to 87ff4a1
feat(files_trashbin): Refactor expire background job to support parallel run

Follow-up of #51600. The original PR introduced the possibility to continue an `ExpireTrash` job by saving the offset. This was to prevent having to start over the whole user list when the job crashed or was killed. But on big instances, one process is not enough to go through all the users in a timely manner; supporting parallel runs allows covering more ground faster. This PR introduces this possibility. We now store the offset right away to allow another parallel job to pick up the task at that point, and we arbitrarily cut the user list into chunks of 10 so as not to drastically overflow the 30-minute time limit.

Signed-off-by: Louis Chemineau <louis@chmn.me>
Force-pushed from 87ff4a1 to 9b5d118
/backport to stable32

/backport to stable31

/backport to stable30
The backport to stable31 failed:

# Switch to the target branch and update it
git checkout stable31
git pull origin stable31
# Create the new backport branch
git checkout -b backport/52825/stable31
# Cherry pick the change from the commit sha1 of the change against the default branch
# This might cause conflicts, resolve them
git cherry-pick 1d91e40f 9b5d1184
# Push the cherry pick commit to the remote repository and open a pull request
git push origin backport/52825/stable31

Error: Failed to create pull request: Validation Failed: {"resource":"PullRequest","code":"custom","message":"A pull request already exists for nextcloud:backport/52825/stable31."} - https://docs.github.com/rest/pulls/pulls#create-a-pull-request

Learn more about backports at https://docs.nextcloud.com/server/stable/go.php?to=developer-backports.
The backport to stable30 failed:

# Switch to the target branch and update it
git checkout stable30
git pull origin stable30
# Create the new backport branch
git checkout -b backport/52825/stable30
# Cherry pick the change from the commit sha1 of the change against the default branch
# This might cause conflicts, resolve them
git cherry-pick 1d91e40f 9b5d1184
# Push the cherry pick commit to the remote repository and open a pull request
git push origin backport/52825/stable30

Error: Failed to create pull request: Validation Failed: {"resource":"PullRequest","code":"custom","message":"A pull request already exists for nextcloud:backport/52825/stable30."} - https://docs.github.com/rest/pulls/pulls#create-a-pull-request

Learn more about backports at https://docs.nextcloud.com/server/stable/go.php?to=developer-backports.
Follow-up of #51600, which limited the `ExpireTrash` job to 30 minutes.

The original PR introduced the possibility to continue an `ExpireTrash` job by saving the offset. This was to prevent having to start over the whole user list when the job crashed or was killed. But on big instances, one process is not enough to go through all the users in a timely manner. Supporting parallel runs allows covering more ground faster.

This PR introduces this possibility. We now store the offset right away to allow another parallel job to pick up the task at that point. We arbitrarily cut the user list into chunks of 10 so as not to drastically overflow the 30-minute time limit.
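Below is a minimal sketch of the claim-then-process pattern the description outlines. This is not the actual implementation: the class shape, the files_trashbin/trashbin_expire_offset config key, and the use of IAppConfig typed getters and IUserManager::getSeenUsers() are assumptions made for illustration.

<?php

declare(strict_types=1);

use OCP\AppFramework\Utility\ITimeFactory;
use OCP\BackgroundJob\TimedJob;
use OCP\IAppConfig;
use OCP\IUserManager;

// Sketch only: a timed job that claims a chunk of users by advancing a
// shared offset *before* processing it, so parallel runs each work on a
// disjoint chunk and a crash loses at most one chunk of 10 users.
class ExpireTrashSketch extends TimedJob {
	private const USERS_PER_JOB = 10; // chunk size from the PR description
	private const OFFSET_KEY = 'trashbin_expire_offset'; // hypothetical key

	public function __construct(
		ITimeFactory $time,
		private IAppConfig $appConfig,
		private IUserManager $userManager,
	) {
		parent::__construct($time);
		$this->setInterval(30 * 60);
	}

	protected function run($argument): void {
		// Claim the chunk up front: read the current offset and store the
		// next one immediately, so another parallel job picks up the
		// following chunk even if this process is killed mid-way.
		$offset = $this->appConfig->getValueInt('files_trashbin', self::OFFSET_KEY, 0);
		$this->appConfig->setValueInt('files_trashbin', self::OFFSET_KEY, $offset + self::USERS_PER_JOB);

		$handled = 0;
		foreach ($this->userManager->getSeenUsers($offset) as $user) {
			if ($handled >= self::USERS_PER_JOB) {
				break;
			}
			// ... expire this user's trashbin here ...
			$handled++;
		}

		if ($handled < self::USERS_PER_JOB) {
			// Fewer users than a full chunk means we reached the end of
			// the user list; reset the offset for the next full pass.
			$this->appConfig->setValueInt('files_trashbin', self::OFFSET_KEY, 0);
		}
	}
}

Note that the read-then-write of the offset is not atomic in this sketch; two jobs starting at exactly the same moment could claim the same chunk. A real implementation would need to guard against that, for example with a conditional update, which this sketch deliberately leaves open.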