[FIX] queue_job: max retry #622

Open
oerp-odoo wants to merge 1 commit into 15.0
Conversation


@oerp-odoo commented Jan 28, 2024

When a job fails because of a concurrent update error, it does not respect the max retries set on the job. The problem is that the `perform` method logic that handles retries is never called, because `runjob` in the controller that triggers jobs catches the expected exception and silences it (this is done deliberately, to not pollute the logs).

So for now, this adds an extra check before the job is run, to make sure max retries are enforced once the limit has been reached (a rough sketch of the idea is below).
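
A minimal sketch of the kind of guard being proposed; the helper name and the exact hook point are illustrative, not the actual diff:

```python
# Illustrative sketch only: the helper name and hook point are assumptions,
# not the exact code of this PR. The idea is to enforce max_retries before
# the job is performed, because the controller otherwise swallows
# RetryableJobError and the check inside Job.perform() is never reached.
from odoo.addons.queue_job.exception import FailedJobError


def _ensure_max_retries_not_reached(job):
    # A falsy max_retries means "retry forever" in queue_job, so only
    # guard when a limit is actually configured.
    if job.max_retries and job.retry >= job.max_retries:
        raise FailedJobError(
            "Max. retries (%d) reached: %s" % (job.max_retries, job.uuid)
        )
```

Calling something like this from `runjob` right before the job is performed turns a job that keeps hitting concurrent update errors into a failed job once the limit is reached, instead of it being requeued forever.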

Some context:

It looks like the code that is supposed to handle max retries is never called. But I am not sure what the right way to raise the exception up would be, as there is some logic here, in `except RetryableJobError as err:`, that explicitly does not want to re-raise that exception.
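
For context, this is roughly what the two pieces look like in queue_job 15.0 (paraphrased and abridged from memory of `queue_job/job.py` and `queue_job/controllers/main.py`, so the exact code may differ slightly):

```python
# Paraphrased/abridged from queue_job 15.0 for context; not this PR's diff.
from odoo.addons.queue_job.exception import FailedJobError, RetryableJobError


class Job:  # abridged: only the retry-related part of perform()
    def perform(self):
        self.retry += 1
        try:
            self.result = self.func(*tuple(self.args), **self.kwargs)
        except RetryableJobError as err:
            if err.ignore_retry:
                self.retry -= 1
                raise
            elif not self.max_retries:  # falsy means "retry forever"
                raise
            elif self.retry >= self.max_retries:
                # the only place where the job becomes a hard failure
                raise FailedJobError(
                    "Max. retries (%d) reached" % self.max_retries
                ) from err
            raise
        return self.result


# Meanwhile, runjob() in the controller wraps concurrent update errors
# (OperationalError) into RetryableJobError and then handles them roughly
# like this, without re-raising, so the branch above never runs for them:
#
#     except RetryableJobError as err:
#         retry_postpone(job, str(err), seconds=err.seconds)
#         _logger.debug("%s postponed", job)
#         # deliberately not raised again, to keep the logs clean
#         env.cr.rollback()
```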

Not having max retries enforced can be very problematic if your jobs can hit many concurrent updates. I had an issue where the same job record (yes, the job record itself, not some other record the job would update) was being updated by two job runners at the same time, so it would always fail and retry. It got to over 400 retries, and the only way to stop it was to restart Odoo.

For example, without this fix we can end up in a situation like this:

[screenshot: Selection_1050]

When a job fails because of a concurrent update error, it does not respect
the max retries set on the job. The problem is that the ``perform`` method
logic that handles retries is never called, because ``runjob`` in the
controller that triggers jobs catches the expected exception and silences
it (this is done deliberately, to not pollute the logs).

So for now, add an extra check before the job is run, to make sure max
retries are enforced once the limit has been reached.
@OCA-git-bot (Contributor) commented

Hi @guewen,
some modules you are maintaining are being modified, check this out!


github-actions bot commented Jun 2, 2024

There hasn't been any activity on this pull request in the past 4 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days.
If you want this PR to never become stale, please ask a PSC member to apply the "no stale" label.

@github-actions github-actions bot added the stale PR/Issue without recent activity, it'll be soon closed automatically. label Jun 2, 2024
@oerp-odoo (Author) commented

@guewen can you check this?

@amh-mw (Contributor) left a comment


Seems reasonable, but broad enough in scope that it should be covered by unit tests?

@github-actions github-actions bot removed the stale PR/Issue without recent activity, it'll be soon closed automatically. label Jun 9, 2024
github-actions bot commented Oct 13, 2024

There hasn't been any activity on this pull request in the past 4 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days.
If you want this PR to never become stale, please ask a PSC member to apply the "no stale" label.

@github-actions github-actions bot added the stale PR/Issue without recent activity, it'll be soon closed automatically. label Oct 13, 2024