
fix: Change retry logic to extract next run from traceback #111

Merged (1 commit) on Mar 15, 2023

Conversation

adinhodovic (Contributor)

`task_meta.result` is an Exception for me with Celery v5.2.5, and I can't figure out where `result.when` would come from. The retry logic simply does not work, because the result is an Exception and has no field indicating when the next run is. Accessing `when` just crashes the progress bar and returns a 500 error.

I still kept it in and check whether it's iterable, in case it works for others.

Here's a PR that works for me; if there are other solutions, let me know. But at the moment retries are causing 500 errors.
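To make the failure mode concrete, here is a minimal sketch of the kind of guard this is about (a hypothetical `retry_progress_info` helper, not the actual celery_progress code): only use `when` if the RETRY result actually carries it, and fall back to a generic message otherwise so the progress endpoint doesn't return a 500.

import datetime

def retry_progress_info(task_meta):
    # For RETRY states the stored result is typically the exception that
    # triggered the retry, so it may not have a .when attribute at all.
    result = task_meta.get("result")
    next_run = getattr(result, "when", None)  # present on celery.exceptions.Retry, absent on plain Exceptions
    if isinstance(next_run, datetime.datetime):
        return f"Retrying, next attempt at {next_run.isoformat()}"
    if isinstance(next_run, (int, float)):
        return f"Retrying in {next_run} seconds"
    return "Task is being retried"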

czue requested review from EJH2 and OmarWKH, Mar 14, 2023 11:47
czue (Owner) commented Mar 14, 2023

Thanks for the contribution!

I've never actually used the retry functionality. @EJH2 or @OmarWKH - do you have any thoughts on this?

Or @adinhodovic do you have a quick way to reproduce the scenario on the celery side so I can play with it?

adinhodovic (Contributor, Author) commented Mar 14, 2023

> Thanks for the contribution!
>
> I've never actually used the retry functionality. @EJH2 or @OmarWKH - do you have any thoughts on this?
>
> Or @adinhodovic do you have a quick way to reproduce the scenario on the celery side so I can play with it?

from celery import Celery, Task
from celery_progress.backend import ProgressRecorder

app = Celery("repro")  # stands in for your project's Celery application

class BaseTask(Task):
    autoretry_for = (Exception,)
    max_retries = 5
    retry_backoff = 300

@app.task(base=BaseTask, bind=True)
def my_test(self):
    progress_recorder = ProgressRecorder(self)
    progress_recorder.set_progress(0, 100, description="Starting")
    raise Exception("Test")  # triggers autoretry, task state becomes RETRY

We run retries on all our tasks, so this is core functionality for us; any task that is retrying causes an internal server error because the API crashes.

`task_meta.result` is an exception on retries, and accessing `when` on the result just fails.
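If it helps with reproducing, the state and stored result can be inspected from a shell once the worker has hit the exception (this assumes the repro task above and a result backend that stores retry state):

res = my_test.delay()        # queue the repro task and let the worker fail once
# ...after the worker schedules the retry...
print(res.state)             # expected: 'RETRY'
print(type(res.result))      # the plain Exception('Test'), not a Retry with .when
print(res.traceback)         # the traceback string stored alongside the result
# res.result.when  -> AttributeError, which is what bubbles up as the 500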

czue mentioned this pull request Mar 15, 2023
czue (Owner) commented Mar 15, 2023

Thanks again for reporting and for the repro steps. I played with it and made some tweaks to your original PR in #112.

Updates:

  • Always assume result is an exception for RETRY states
  • Remove the seconds-to-date conversions (I ran into some timezone issues with those)

If you can confirm it works for you I'll close this and merge that one.
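For reference, the approach described in the updates above could look roughly like this (a sketch of the idea, not the actual #112 diff): treat the RETRY result as an exception and skip any seconds-to-date conversion.

def retry_description(task_meta):
    # Assume the RETRY result is an exception; describe it instead of trying
    # to compute a "next run" datetime from it.
    result = task_meta.get("result")
    if isinstance(result, Exception):
        return f"Retrying: {result!r}"
    return "Task is being retried"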

czue merged commit 74e7725 into czue:master on Mar 15, 2023