@mperham thanks for the answer. Is there any way to overcome the problem I described, apart from keeping an eye on things and making sure the config stays up to date?
Ruby version: 2.7.2
Rails version: 5.2
Sidekiq / Pro / Enterprise version(s): Sidekiq 6.0.7
I'm in charge of some Rails applications that have almost no code of their own; they rely on an external library (https://github.com/decidim/decidim) to handle everything.
Yesterday we got reports that the Redis DB in one of our applications was out of memory:

Redis::CommandError: OOM command not allowed when used memory > 'maxmemory'.
I tried to visit the Sidekiq Web dashboard to check what was being enqueued, but with Redis down I couldn't access it easily: the dashboard requires logging in first, and the login itself needs Redis. I upgraded Redis (we use Heroku and their addons) to have more memory, and finally managed to visit the Sidekiq Web dashboard. It turns out some queues were not being processed at all.
My original config/sidekiq.yml file listed each queue explicitly. Nothing fancy there, except that this old configuration file was outdated and was missing new queues we had added. Since these queues did not appear in the config file, they were never processed, so job data piled up until it filled our Redis store. Once I found out this was the problem, I fixed the config file with the correct queues and everything's working fine now. But this problem could happen again: since we rely on an external library for most of the changes, we missed the new queues and didn't update the config file. This could happen to any other team; people might forget to update the config.
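A minimal sketch of what such an outdated config/sidekiq.yml might look like (the queue names here are illustrative assumptions, not our real ones):

```yaml
# config/sidekiq.yml -- illustrative sketch, not the actual file.
# Only the queues listed here are processed; any queue the library
# adds later (e.g. "exports") would silently pile up in Redis.
:concurrency: 5
queues:
  - mailers
  - default
```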
(Actually, this sudden processing could cause other problems: a lot of emails going out at once, outdated notifications being sent...)
One way to fix this would be to allow a wildcard queue so that everything gets processed. What about something like this? (Note that I switched to queue prioritization as per https://github.com/mperham/sidekiq/wiki/Advanced-Options#queues)
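A sketch of the suggested file (queue names are illustrative; the ["*", 1] wildcard is the proposed syntax from this discussion, not a documented Sidekiq feature):

```yaml
# config/sidekiq.yml -- proposed sketch using prioritized queues.
queues:
  - [mailers, 2]
  - [default, 2]
  - ["*", 1]   # proposed: catch any queue not listed above, at priority 1
```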
I've looked at the config documentation and it doesn't mention that this is possible, and in truth I haven't tried it. The ["*", 1] part in the suggested file would catch any queue not listed explicitly and process it with a priority of 1, in this example. This would prevent the problem I ran into. If this is already possible, then maybe the documentation needs to be updated? I couldn't find it anywhere 😞
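In the meantime, one way to guard against forgetting a queue is a small check script that compares the queues named in sidekiq.yml against the queues the app actually uses. This is only a sketch: the inline YAML and the list of used queues are hard-coded assumptions here (in a real app the used list could come from scanning job classes for their `sidekiq_options queue:` settings).

```ruby
# Sketch: detect queues the app uses that are missing from sidekiq.yml.
require "yaml"

# Illustrative stand-in for File.read("config/sidekiq.yml").
sidekiq_yml = <<~YAML
  queues:
    - [mailers, 2]
    - [default, 1]
YAML

# Each entry is either "name" or [name, priority]; keep just the name.
configured = YAML.safe_load(sidekiq_yml)["queues"]
                 .map { |entry| Array(entry).first.to_s }

# Hard-coded for the sketch; would be derived from the job classes.
used = %w[mailers default exports]

missing = used - configured
puts "Queues missing from sidekiq.yml: #{missing.inspect}"
# => Queues missing from sidekiq.yml: ["exports"]
```

Running something like this in CI would have caught the unprocessed queue before it filled up Redis.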