Merge the notifications and cleaning workers #8672
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository; please check the settings in the CodeRabbit UI.
Walkthrough: The pull request modifies the supervisord configuration in the utils container so that the notification and cleaning RQ workers run as a single combined worker.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
supervisord/utils.conf (1)
Lines 29-33: LGTM! Worker consolidation looks correct. The merger of notification and cleaning workers is implemented correctly. The configuration maintains proper process management settings while combining the queues.
However, consider adding appropriate logging prefixes or tags to distinguish between notification and cleaning tasks in the logs for easier debugging.
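For illustration only, here is a minimal sketch of what a merged worker entry could look like, assuming the workers are started through Django-RQ's rqworker management command. The program name, command and option values are placeholders, not the actual contents of supervisord/utils.conf:

; Hypothetical merged entry; names, paths and values are illustrative.
[program:rqworker-notifications-cleaning]
; A single rqworker process can listen on several queues, so both queue
; names go on the same command line.
command=python3 manage.py rqworker notifications cleaning
; With numprocs > 1, process_name must be unique per process; the expanded
; name also appears in supervisord's logs and helps tell workers apart.
numprocs=%(ENV_NUMPROCS)s
process_name=%(program_name)s-%(process_num)d
autorestart=true

Per-task log tags that distinguish notification work from cleaning work would live in the application's logging configuration rather than in supervisord itself.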
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
supervisord/utils.conf (1 hunks)
🔇 Additional comments (1)
supervisord/utils.conf (1)
Lines 29-33: Verify resource requirements for the combined worker.
While consolidating workers is efficient for low-traffic queues, please verify:
- The combined worker can handle peak loads when both queues are active
- The ENV_NUMPROCS setting is sufficient for the combined workload
✅ Verification successful
Based on the gathered information, I can now provide a final response:
The combined worker configuration appears safe to implement
The codebase analysis reveals that:
- Both notifications and cleaning queues are lightweight, primarily handling background cache cleanup tasks and notifications
- The cleaning tasks are scheduled with large timeouts (4h default) and run periodically
- There's no evidence of high-concurrency requirements or performance bottlenecks in either queue
The default ENV_NUMPROCS setting will be sufficient as:
- Cleaning tasks are mostly I/O bound (file operations)
- Notifications are typically low-latency events
- Neither queue shows a pattern of sustained high load
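As a usage note, assuming standard RQ behaviour rather than anything verified in this PR: a worker that is given several queues always checks them in the order they are listed, so putting notifications ahead of cleaning keeps notification latency low even while long-running cleaning jobs are queued. A minimal illustration with the standalone rq CLI and a placeholder Redis URL:

#!/bin/bash
# Sketch only: queue order is priority order for an RQ worker, so jobs in
# "notifications" are picked up before jobs in "cleaning" whenever both
# queues have work. The Redis URL is a placeholder.
rq worker notifications cleaning --url "redis://localhost:6379/0"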
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check historical queue metrics to verify workload assumptions
# Look for any existing monitoring or metrics related to these queues
rg -A 5 "notifications.*queue|cleaning.*queue" --type py
# Check for any performance-related comments or issues
rg -A 5 "performance.*worker|worker.*performance" --type py
Length of output: 3697
Script:
#!/bin/bash
# Let's check for the actual worker implementations and queue configurations
# Check worker implementations and configurations
ast-grep --pattern 'class DefaultWorker'
# Look for queue configurations and settings
rg -l "rqworker|RQ_QUEUES|QUEUES" --type py
# Check for any existing worker process configurations
rg "numprocs|NUM_WORKERS|WORKERS" supervisord/
# Look for actual queue definitions and their usage
rg "notifications.*queue|cleaning.*queue" -B 5 -A 5
Length of output: 7279
Script:
#!/bin/bash
# Let's check the settings and queue configurations
# Look for queue settings in base configuration
rg -A 10 "QUEUES.*=|RQ_QUEUES.*=" cvat/settings/
# Check for any rate limiting or concurrency settings
rg -A 5 "CONCURRENT|RATE_LIMIT" cvat/settings/
# Look for the actual implementation of notification and cleaning tasks
ast-grep --pattern 'def $_(notifications|cleaning)'
Length of output: 1075
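As an additional, hedged suggestion that is not part of the automated analysis above: after deployment, RQ's built-in monitoring command can confirm that a single worker is attached to both queues. The Redis URL below is a placeholder.

#!/bin/bash
# Sketch: list RQ queues and the workers attached to them, to confirm that
# one worker is consuming both the notifications and cleaning queues.
rq info --url "redis://localhost:6379/0"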
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@           Coverage Diff            @@
##           develop    #8672   +/-   ##
========================================
  Coverage    74.25%   74.25%
========================================
  Files          401      401
  Lines        43465    43465
  Branches      3950     3950
========================================
  Hits         32273    32273
  Misses       11192    11192
Force-pushed from 6bbdab6 to 6ff4b70 (Compare).
Quality Gate passed.
Motivation and context
The notifications queue is used very rarely (and not at all in the OSS version), so it doesn't make sense to keep a dedicated worker just for it. Combining the two workers in the utils container saves memory and startup time.
How has this been tested?
Checklist
[ ] I submit my changes into the develop branch
[ ] I have updated the documentation accordingly
[ ] I have added tests to cover my changes
[ ] I have linked related issues (see GitHub docs)
[ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)
License
[ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.