feat: 2nd attempt to split save_event_transaction into 3 queues #78889
Conversation
Codecov Report
Attention: Patch coverage is ✅ All tests successful. No failed tests found.
Additional details and impacted files:
@@            Coverage Diff             @@
##           master   #78889       +/-   ##
==========================================
- Coverage   78.27%   78.25%    -0.02%
==========================================
  Files        7122     7125        +3
  Lines      313583   313629       +46
  Branches    51178    51189       +11
==========================================
- Hits       245452   245438       -14
- Misses      61712    61759       +47
- Partials     6419     6432       +13
Looks good to me. One suggestion I had: use server.conf to minimize the number of places this is configured.
src/sentry/queue/routers.py (Outdated)
if SiloMode.get_current_mode() == SiloMode.CONTROL:
    split_task_routes = {}
else:
    split_task_routes = settings.CELERY_SPLIT_QUEUE_TASK_ROUTES_REGION
If you wanted to avoid this, you could set settings.CELERY_SPLIT_QUEUE_TASK_ROUTES in server.py, in the existing conditional blocks based on the silo mode.
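A minimal sketch of that suggestion, assuming the per-silo conditional blocks already exist in server.py (the guard shown here is a hypothetical stand-in for whatever conditional the file actually uses):

# In src/sentry/conf/server.py, inside the existing per-silo blocks.
# The guard is illustrative; only CELERY_SPLIT_QUEUE_TASK_ROUTES_REGION
# is confirmed by the diff above.
if silo_mode == "CONTROL":
    CELERY_SPLIT_QUEUE_TASK_ROUTES: dict = {}
else:
    CELERY_SPLIT_QUEUE_TASK_ROUTES = CELERY_SPLIT_QUEUE_TASK_ROUTES_REGION

With the choice made once in settings, routers.py can read settings.CELERY_SPLIT_QUEUE_TASK_ROUTES unconditionally and no longer needs to ask for the silo mode itself.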
Good point, updated.
Do we have a place in the codebase where we can validate the config and prevent a pod from starting with an invalid config?
My mistake when I implemented the router was assuming Celery would instantiate it at Sentry initialization (https://github.com/getsentry/sentry/blob/master/src/sentry/celery.py#L127-L129). It turns out that happens lazily, so we would only see failures when we tried to send messages.
I think we should make Celery fail earlier if there is a bad configuration. As a follow-up, consider creating the queues during Celery initialization.
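A hedged sketch of such a fail-fast check, run once at Celery app setup; validate_split_queue_routes is hypothetical, and the task-to-queues config shape is an assumption:

from django.conf import settings
from django.core.exceptions import ImproperlyConfigured

def validate_split_queue_routes(app) -> None:
    # Hypothetical helper: call right after the Celery app is configured
    # (e.g. at the end of src/sentry/celery.py) so a pod with a bad
    # config refuses to start instead of failing on the first send.
    declared = {q.name for q in app.conf.task_queues or ()}
    routes = getattr(settings, "CELERY_SPLIT_QUEUE_TASK_ROUTES", None) or {}
    for task_name, queues in routes.items():  # assumed task -> [queue, ...] shape
        for queue_name in queues:
            if queue_name not in declared:
                raise ImproperlyConfigured(
                    f"Split route for {task_name} references undeclared queue {queue_name!r}"
                )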
Retry of #78868, this time with separate config for the region and control silos.
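For illustration, the per-silo config described above might look like the sketch below; only the CELERY_SPLIT_QUEUE_TASK_ROUTES_REGION name appears in the diff, and the task path, queue names, and value shape are assumptions:

# Illustrative only: exact task path, queue names, and value shape are
# assumptions. Region silos fan save_event_transaction out over three
# queues; the control silo gets no split routes.
CELERY_SPLIT_QUEUE_TASK_ROUTES_REGION = {
    "sentry.tasks.store.save_event_transaction": [
        "events.save_event_transaction_1",
        "events.save_event_transaction_2",
        "events.save_event_transaction_3",
    ],
}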