
Conversation

@sentrivana sentrivana (Contributor) commented May 16, 2025

When using the RedBeatScheduler, we're sending an extra in-progress check-in at scheduler start. Since this is never followed by an ok or error check-in, the check-in is marked as timed out in Sentry.

We're patching the scheduler's maybe_due, which (as the name implies) might not actually end up executing the task. That is what happens here: maybe_due runs when the scheduler starts, but no task is scheduled. We never check whether maybe_due actually scheduled anything and always fire an in-progress check-in.

This change patches the scheduler's apply_async instead, which only runs when a task is actually dispatched.
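For illustration, here is a minimal sketch of the patching approach. It is a simplified assumption of how such a wrapper could look, not the integration's actual code: it assumes Celery's `celery.beat.Scheduler.apply_async` and Sentry's `sentry_sdk.crons.capture_checkin`, and uses `entry.name` as a stand-in for the monitor slug.

```python
# Simplified sketch only -- not the SDK's actual implementation.
from celery.beat import Scheduler

from sentry_sdk.crons import capture_checkin, MonitorStatus


def patch_scheduler_apply_async():
    original_apply_async = Scheduler.apply_async

    def sentry_apply_async(self, entry, *args, **kwargs):
        # apply_async only runs when beat actually dispatches the task, so the
        # in-progress check-in is no longer fired for the startup maybe_due call.
        # Using entry.name as the monitor slug is a placeholder; the real
        # integration derives the slug/config from the schedule entry and hands
        # the check-in id to the worker so it can send the closing ok/error.
        capture_checkin(
            monitor_slug=entry.name,
            status=MonitorStatus.IN_PROGRESS,
        )
        return original_apply_async(self, entry, *args, **kwargs)

    Scheduler.apply_async = sentry_apply_async
```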

Closes #4392

codecov bot commented May 16, 2025

❌ 28 Tests Failed:

Tests completed | Failed | Passed | Skipped
--------------- | ------ | ------ | -------
2000            | 28     | 1972   | 18
View the top 3 failed test(s) by shortest run time
tests.integrations.aws_lambda.test_aws_lambda::test_headers[no headers]
Stack Traces | 0s run time
.../integrations/aws_lambda/test_aws_lambda.py:70: in test_environment
    LocalLambdaStack.wait_for_stack()
.../integrations/aws_lambda/utils.py:221: in wait_for_stack
    raise TimeoutError(
E   TimeoutError: AWS SAM failed to start within 60 seconds. (Maybe Docker is not running?)
tests.integrations.aws_lambda.test_aws_lambda::test_timeout_error
Stack Traces | 0s run time
.../integrations/aws_lambda/test_aws_lambda.py:70: in test_environment
    LocalLambdaStack.wait_for_stack()
.../integrations/aws_lambda/utils.py:221: in wait_for_stack
    raise TimeoutError(
E   TimeoutError: AWS SAM failed to start within 60 seconds. (Maybe Docker is not running?)
tests.integrations.aws_lambda.test_aws_lambda::test_trace_continuation
Stack Traces | 0s run time
.../integrations/aws_lambda/test_aws_lambda.py:70: in test_environment
    LocalLambdaStack.wait_for_stack()
.../integrations/aws_lambda/utils.py:221: in wait_for_stack
    raise TimeoutError(
E   TimeoutError: AWS SAM failed to start within 60 seconds. (Maybe Docker is not running?)

To view more test analytics, go to the Test Analytics Dashboard

@sl0thentr0py sl0thentr0py (Member) left a comment

looks legit, don't know all underlying details

@antonpirker antonpirker (Contributor) left a comment

Looks good. Tested it with my example project and it does what it should (which is sending only one check-in each for in_progress and ok).

@sentrivana sentrivana merged commit 2f97cda into master May 19, 2025
134 of 137 checks passed
@sentrivana sentrivana deleted the ivana/fix-redbeat-extra-checkin branch May 19, 2025 08:53

Development

Successfully merging this pull request may close these issues.

CeleryIntegration with Redbeat scheduler seems to send 2 In Progress Check ins leading 1 to Timeout