Queue consumers #23540
Hi @luckyraul. Thank you for your report.
Please make sure that the issue is reproducible on a vanilla Magento instance by following the steps to reproduce. To deploy a vanilla Magento instance on our environment, please add a comment to the issue.
For more details, please review the Magento Contributor Assistant documentation. @luckyraul, do you confirm that you were able to reproduce the issue on a vanilla Magento instance by following the steps to reproduce?
Hello @luckyraul! Can you provide more detailed steps to reproduce it?
I don't do anything except run cron before and after a release. I've updated the issue description.
You need to share the `var` folder between releases.
You cannot share `var`.
@luckyraul why not? I'm using Capistrano with https://github.com/davidalger/capistrano-magento2 to deploy, and I assure you that you can.
@slackerzz: it's not really recommended to share the full `var` directory. The examples in the README of the repo you reference also share only specific individual files or directories within `var`.
Can you tell us a bit more about those `.pid` files in the `var` directory? I'm also running into the same problem as @luckyraul with our Capistrano-style deployments, and I'm wondering whether those `.pid` files can help resolve it, but I don't fully understand how they work yet. Thanks!
Hi @hostep, you're right, it's not ideal, but I'm not using files for caching; I use Redis. When a consumer is started for the first time, it creates a pid file in the Magento `var` directory. When you deploy a new release with a non-shared `var` directory, the new release has no pid file, so cron starts new consumers while the old ones keep running. Maybe it would be better to create the pid file in a location outside the release directory. Another solution is to stop consumers before deploying a new release; for example, on Magento Commerce Cloud all cron processes are killed during a deploy.
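To see whether consumers from an old release are still alive after a deploy, plain `ps`/`grep` is enough; a minimal sketch, assuming the consumers were started via `bin/magento queue:consumers:start` (as the cron runner does):

```shell
# Count queue consumer processes still running. The [q] bracket trick
# keeps grep from matching its own command line.
running="$(ps -eo pid,etime,args | grep -c '[q]ueue:consumers:start')" || true
echo "queue consumers currently running: $running"
```

The `etime` column is useful here: consumers started long before the latest deploy almost certainly belong to an old release.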
Out of curiosity: is it safe to kill these consumer processes?
I also noticed Magento 2.3.2 came with something called a poison pill feature, but I don't know whether that is relevant here, or whether we could use it somehow to tell the queues to stop and restart from a new release.
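For anyone who wants to experiment with the poison pill: its version appears to be stored in the `queue_poison_pill` table (an assumption based on the stock 2.3.2+ schema; the database name below is a placeholder). Bumping the version makes each consumer exit the next time it processes a message; note "next time", an idle consumer keeps waiting:

```shell
# Sketch: bump the poison pill version so consumers restart on their
# next message. Table/column names are assumptions; verify against your
# installation before relying on this.
if command -v mysql >/dev/null 2>&1; then
  mysql magento -e "UPDATE queue_poison_pill SET version = UUID();"
else
  echo "skipping: mysql client not found"
fi
bumped=1
```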
I was suggesting that we can think to submit a pull request to change the pid file location to
You're right @hostep, I didn't know about the PoisonPill; maybe that can resolve the last point from @luckyraul. My "personal" solution was to add `'cron_consumers_runner' => ['cron_run' => false]` to `app/etc/env.php` (so cron no longer spawns consumers) and run the consumers as systemd services instead.
I can also easily reproduce it by just manually creating an export job with a lot of products. After that, all other cron tasks get stuck and never run. So this happens not only when doing a new release. Small exports are fine, though.
I just came across this issue today and saw that tasks from old releases keep running in memory. This caused a memory problem and our dev server became unresponsive. I killed all the queue tasks and disabled them to avoid the memory problem.
The memory has been reclaimed, and there are no more issues after deployments.
Be aware that certain tasks executed in the backend will no longer work when you disable all the consumers; these are the asynchronous operations handled by the various consumers. Anyway, I've created a feature request for an easy-to-use action to stop running consumer processes, which we could then use during a deployment. If somebody has some ideas, feel free to contribute: magento/community-features#181. Until this feature is implemented, I'm currently solving it by reading the `.pid` files in `var/` and killing the corresponding processes during each deployment.
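That pid-file approach can be sketched as a small deploy hook; a minimal sketch, assuming the consumers write `<consumer>.pid` files into the release's `var/` directory (the path is a placeholder, and the layout may differ per Magento version):

```shell
# Sketch: stop leftover consumers by reading their .pid files before
# switching to a new release. RELEASE_VAR is a placeholder path.
RELEASE_VAR="${RELEASE_VAR:-/var/www/magento/current/var}"

for pid_file in "$RELEASE_VAR"/*.pid; do
  [ -e "$pid_file" ] || continue   # glob matched nothing
  pid="$(cat "$pid_file")"
  if kill -0 "$pid" 2>/dev/null; then
    echo "stopping consumer $pid ($pid_file)"
    kill "$pid"
  fi
  rm -f "$pid_file"
done
cleaned=1
echo "consumer cleanup done"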
There are two problems with the current PoisonPill implementation:
@giacmir indeed! See https://github.com/magento/architecture/pull/232/files?short_path=81c5aa0#diff-81c5aa0b55a519b20c0ffd8b3f57b21b for a proposal of new Magento functionality to handle those; especially "Problem 2" is relevant here.
Same problem here. Our server died because old consumers were still running and spiked the CPU load after we moved to a new release. Are consumers supposed to be running 24/7? `ps aux | grep consumers` shows 4 consumers running at all times.
@Zyles: yes, that's how it was designed. If you don't need the functionality provided by those consumers, you can disable all or some of them; see the docs. Magento 2.3.4 came with a new option, `consumers_wait_for_messages`, which lets consumers exit when there are no more messages instead of waiting indefinitely. Therefore I proposed a different idea for doing the check on pending messages before the processes start up; see here.
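Another documented knob in the same spirit is `--max-messages` on `queue:consumers:start`: the consumer exits after processing that many messages, so the next cron run respawns it from the current release. A hedged sketch (the consumer name is just an example, and the guard is only there so the snippet degrades gracefully outside a Magento root):

```shell
# Sketch: bound a consumer's lifetime with --max-messages so respawned
# instances always come from the currently deployed release.
MAGENTO_BIN="${MAGENTO_BIN:-bin/magento}"
if [ -x "$MAGENTO_BIN" ]; then
  "$MAGENTO_BIN" queue:consumers:start async.operations.all --max-messages=1000
else
  echo "skipping: $MAGENTO_BIN not found (run from the Magento root)"
fi
ran=1
```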
The solution by @slackerzz sounds very reasonable to me; those systemd services should be in the devdocs. :) Starting the consumers via cron also seems to have the side effect that the first cron run never returns.
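A sketch of such a systemd template unit (the unit name, user, and all paths are invented for illustration; slackerzz's actual units may differ):

```ini
; Example: /etc/systemd/system/magento-consumer@.service (name is an assumption)
[Unit]
Description=Magento queue consumer %i (example unit)
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/magento/current
; --max-messages bounds the lifetime so restarts pick up the new release
ExecStart=/usr/bin/php bin/magento queue:consumers:start %i --max-messages=1000
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Each consumer could then be enabled as, e.g., `systemctl enable --now magento-consumer@async.operations.all`, and a deploy hook could restart the loaded units so they run from the new release.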
Add integration test coverage.
✅ Confirmed by @engcom-Oscar. Issue available: @engcom-Oscar will be automatically unassigned; contributors/maintainers can claim this issue to continue work by reassigning the ticket to themselves.
Any news on this? We're also experiencing this issue. It's pretty annoying that there is no Magento-native way to stop the consumers.
@oneserv-heuser: there is a PR open for this: #31495. However, as mentioned by @giacmir above:
So even after the poison pill is triggered, old consumers can keep running until they pick up a new message. Due to this, we kill all consumers during each deploy, as explained a bit higher up.
@magento I am working on this |
@luckyraul @swnsma can you please provide steps to reproduce the issue in a local setup with Docker? I followed the steps below, but I cannot reproduce it: 1. Run `bin/magento queue:consumers:start product_action_attribute.update`. Note: when I close the terminal running the `queue:consumers:start` command, the consumer stops; I can see the consumer removed from queue management, but the message is not removed from the queue.
Hello @Kannakiraj123, this issue was solved in #31495; see that PR for details.
Preconditions (*)
When you deploy the next release, you still have queue consumers running from the old releases, and there is no way to stop them.
Steps to reproduce (*)
1. Run cron on the first release
2. Release a new version
3. Run cron on the second release
4. Release a new version
5. Run cron on the third release
Expected result (*)
Only processes from the new release are running.
Actual result (*)
Processes from old releases are still running.
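The deployment layout behind these steps can be sketched as a capistrano-style `releases/` + `current` structure (paths invented for illustration). Each release gets its own `var/`, which is why the new release's cron finds no `.pid` file and spawns fresh consumers while the old release's consumers keep running:

```shell
# Sketch of the release layout in which the issue appears.
DEPLOY_ROOT="$(mktemp -d)"
for release in 1 2 3; do
  # each "release a new version" step creates a fresh release dir with its own var/
  mkdir -p "$DEPLOY_ROOT/releases/$release/var"
  # ...and repoints the "current" symlink at it
  ln -sfn "$DEPLOY_ROOT/releases/$release" "$DEPLOY_ROOT/current"
done
echo "current release: $(readlink "$DEPLOY_ROOT/current")"
```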