When a job fails because of a concurrent update error, it does not respect the max retries set by the job. The problem is that the `perform` method logic that handles retries is never called, because `runjob` in the controller that triggers jobs catches the expected exception and silences it (this is done deliberately, to avoid polluting the logs). So for now, this adds an extra check before the job is run, to make sure max retries are enforced once the limit is reached.
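The check described above can be sketched roughly as follows. This is a minimal, framework-free illustration of the guard, not the actual addon code; the names (`Job`, `retry`, `max_retries`, `FailedJobError`) mirror the queue_job addon but are assumptions here:

```python
class FailedJobError(Exception):
    """Raised when a job has exhausted its retries."""


class Job:
    """Stripped-down stand-in for a queue_job job record."""

    def __init__(self, max_retries=5):
        self.max_retries = max_retries
        self.retry = 0  # incremented each time the job is re-enqueued


def ensure_retries_not_exceeded(job):
    # The extra check this PR adds before the job runs: if the retry
    # counter already reached max_retries, fail the job instead of
    # silently re-enqueueing it forever.
    if job.max_retries and job.retry >= job.max_retries:
        raise FailedJobError("Max. retries (%d) reached" % job.max_retries)


job = Job(max_retries=3)
job.retry = 3  # simulate a job that has already been retried 3 times
try:
    ensure_retries_not_exceeded(job)
    print("job would run")
except FailedJobError as exc:
    print("stopped: %s" % exc)
```

With this guard in place, a job that keeps hitting concurrent update errors is marked failed once it reaches its retry limit, instead of looping indefinitely.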
Some context:
It looks like the code that is supposed to handle max retries is never called. But I am not sure what would be the right way to bubble the exception up, as there is some logic here: queue/queue_job/controllers/main.py, line 125 in e2c6bab.
Not having max retries enforced can be very problematic if your jobs can hit many concurrent updates. I had an issue where the same job record (yes, the job record itself, not some other record the job would update) was being updated by two job runners at the same time, so it would always fail and retry. It had over 400 retries, and the only way to stop it was to restart Odoo.
For example, without this fix we can end up in a situation like this: