After performing a horizon:terminate, new config is not picked up #213
Comments
You should have terminated Horizon before changing the .env files, to ensure no jobs were still running. |
I agree, but that doesn't answer the question. Only a queue:restart did the job and had the queues pick up my changed configuration. |
Well, it works for me: after changing the .env file and terminating Horizon, the daemon starts it again and the new configs are picked up correctly. Not sure why it's not working for you; you'll need to track it down a bit. |
hmm - fair enough - I'll try and investigate some more. |
@denjaland did you get any further with this? We're having similar issues where it seems like Horizon isn't actually quitting the workers. |
Hi @OwenMelbz - unfortunately haven't had the time to investigate. |
I guess this is related: while tinkering around, it seems that horizon:terminate does not really terminate the running queue workers. I wondered why newly dispatched jobs still got handled even though I did a terminate. After using Google, I landed here. Only after … |
I also had to add … |
Same here; am going to try adding the queue:restart again (I replaced it with horizon:terminate when moving to Horizon). |
I was experiencing the same problem and … |
For me, even … Had to restart Supervisor; seems to be working after the restart. |
I can second this issue. I now restart Supervisor on each deploy through Envoyer. This then moves all of Horizon over to the correct new code. Not sure if this is a bug then, since restarting Supervisor seems to resolve the issue they are having. Just for sanity's sake I do run … |
I have similar issues. The first few times things work fine: I get the Slack notification from Forge that the deployment is finished. However, after one or two deployments I only get the webhook from my Envoy script saying the deployment had finished, not the one from Forge. I updated my deployment like you described, restarting Supervisor and not running queue:restart, and it worked: I got the Forge completed notification. Even manually restarting the daemons or the queue was working. Will see how this approach fares. Saves having to do a server reboot after every X deployments. |
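For reference, a minimal sketch of such a deploy hook; the `horizon` program name, the project path, and passwordless sudo for `supervisorctl` are assumptions for illustration, not details from this thread:

```sh
#!/usr/bin/env bash
# Hypothetical last step of an Envoyer/Forge deployment hook.
set -e

cd /home/forge/example.com/current   # assumed path to the live release

# Restart the Supervisor program instead of relying on queue:restart,
# so the Horizon master process is booted fresh from the new release.
sudo supervisorctl restart horizon
```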
Can confirm that executing `php artisan horizon:purge` and `php artisan horizon:terminate` alone does not make Horizon use updated configuration files. During a deploy to production, we're automatically restarting Horizon using:

```sh
sudo supervisorctl stop all
php artisan horizon:purge
php artisan horizon:terminate
sudo supervisorctl start all
```

But locally, or without restarting the Supervisor program, a call to those commands alone isn't enough.

IMO the documentation about queue commands is getting a bit confusing, especially with the release of Horizon. Perhaps we can optimize that, or even add one single command to prevent this confusion? We'd first need some clarification on the advised commands to execute, though. Should we stop the Supervisor program, purge all jobs, then terminate Horizon … or just purge, terminate, restart … or …?

Goal would be to accept new jobs, but halt processing of any during deployment and resume (with the new codebase and/or database) after. Some additional advice about quickly restarting the queue in a local environment would be welcome too. |
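A rough sketch of that goal (keep accepting jobs, pause processing during the deploy, resume on the new code); the paths and the `horizon` Supervisor program name are assumptions, not something confirmed in this thread:

```sh
#!/usr/bin/env bash
# Hypothetical deploy sequence: jobs keep queueing, workers pause during the deploy.
set -e

cd /home/forge/example.com/current   # assumed project path

php artisan horizon:pause            # stop processing jobs, keep accepting them
php artisan migrate --force          # deploy steps run while the queue is paused
php artisan config:cache             # rebuild the cached config from the new .env
php artisan horizon:terminate        # ask the old master to exit gracefully
sudo supervisorctl restart horizon   # Supervisor starts Horizon from the new release
```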
Tested with Laravel 5.7.11 and Horizon 1.4.9, I cannot reproduce this unless I cache the config; then environment variable changes don't get picked up. But without caching the config, every time I issue a … |
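In other words, with a cached config the order matters: rebuild the cache before terminating, so the restarted master reads the new values. A minimal sketch, assuming a process monitor restarts Horizon after the terminate:

```sh
php artisan config:cache       # re-cache the config with the updated .env values
php artisan horizon:terminate  # restart the master so it loads the fresh cache
```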
Hi. After lots of testing, this seems to be rock solid in production; we've been running like this for almost a year: … |
We had the same issue and came across Supervisor/supervisor#152. Basically, we updated our Forge daemon to have the path as … |
@denjaland @sebastiaanluca are you still experiencing this problem? I genuinely believe this is a configuration or deployment step problem and not a problem with Horizon itself. |
@driesvints I’m pretty sure the issue they are having is the same as I had. It checks all the boxes. I don’t believe the issue is within Horizon |
@driesvints For now I would say it's indeed a deployment issue. Dug a little deeper and found the setup that might cause this. Somewhat similar to @georgeboot's issue, I believe.

For instance, in the case of Envoyer and/or zero-downtime deployments, the Supervisor job stays active with a previous release of the project, regardless of any pause, terminate, or purge commands being executed during deployment. A quick fix seems to be, as mentioned above, to not set the Supervisor job path to the current directory, but to the project's root directory, and to navigate to the current directory in the command itself. Another solution is to restart the Supervisor job after setting the release live, so it navigates to the new release and restarts Horizon.

For example, our new Horizon Supervisor .conf file:

```ini
[program:horizon]
process_name=%(program_name)s
directory=/var/www/project
command=/usr/bin/php current/artisan horizon
user=www-data
group=www-data
autostart=true
autorestart=true
stderr_logfile=/var/log/project-horizon.err.log
```

Previously:

```ini
process_name=%(program_name)s
directory=/var/www/project/current
command=/usr/bin/php artisan horizon
```

The latter stays active in the old release and doesn't point to any new release, simply because Supervisor once navigated to the current dir symlink, which was a previous release before our new deployment.

For completeness: we still pause Horizon before migrating and terminate + purge once set live. Will report any additional findings. Thanks for the support! |
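After editing the Supervisor .conf like that, Supervisor still has to re-read it before the new `directory`/`command` take effect; a short sketch, using the `horizon` program name from the example above:

```sh
sudo supervisorctl reread            # detect the edited horizon .conf
sudo supervisorctl update            # apply the change, restarting the program if needed
sudo supervisorctl restart horizon   # or restart explicitly as part of the deploy
```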
Quick follow-up: still an issue for me. I can pause, terminate, restart, etc., but the Supervisor job / Horizon won't pick up the new release. The process stays alive and keeps using the old directory. Anything I'm missing here? @georgeboot Can you confirm your setup works? In hindsight, regardless if Supervisor …

For example, what doesn't work:

…

Yet the process keeps running, and in the case of pause, it stays active on the dashboard. The last command is what is executed in the Laravel job (…).

Any of the following commands do work:

…

Edit: it was a simple user/permissions issue 😶 If your Supervisor process runs as user A, but you terminate Horizon as user B, it won't do anything and not even throw an error or warning. Perhaps we can implement http://php.net/manual/en/function.posix-get-last-error.php in the different Horizon jobs that call posix_kill. |
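A quick way to spot that mismatch is to compare the user owning the Horizon master process with the user issuing the terminate; a rough sketch, assuming the master was started via `artisan horizon`:

```sh
# User the (oldest) Horizon master process runs as.
ps -o user= -p "$(pgrep -of 'artisan horizon')"

# User performing the terminate; the two must match (or use sudo -u).
whoami
php artisan horizon:terminate
```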
So is it solved now for you?
Yes, absolutely. |
@sebastiaanluca can you try replacing the foreach loop in the terminate command with the following and copy/paste the output here while recreating the permission error?

```php
foreach (array_pluck($masters, 'pid') as $processId) {
    $this->info("Sending TERM Signal To Process: {$processId}");

    posix_kill($processId, SIGTERM);

    if ($error = posix_get_last_error()) {
        $this->error("POSIX error for Process: {$processId}: ".posix_strerror($error));
    }
}
```
|
@sebastiaanluca I sent in a PR for this: #485 |
I'm going to close this as this obviously is a configuration/deployment setup issue and not an issue with Horizon itself. This can best be explained in a blog post or a tutorial somewhere. If you feel that anything is missing from the docs for deploying feel free to send in a PR to https://laravel.com/docs/5.7/horizon#running-horizon |
Hi,

I was under the impression that by running `horizon:terminate`, changes to my .env file would have been picked up. Apparently though, they aren't, and I had to manually trigger `queue:restart`. Is that expected to be executed separately? Or is this a bug / missing feature in Horizon?

I noticed because I changed my database connection credentials, and they weren't picked up by the queue worker even though I did a rebuild of the config cache; my app was working, but the jobs were failing.
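For context, the workaround described above amounts to something like this, with `queue:restart` being the extra step that wasn't expected to be necessary:

```sh
php artisan config:cache        # rebuild the config cache after editing .env
php artisan horizon:terminate   # expected this alone to reload the new config
php artisan queue:restart       # in practice this was also needed
```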