After performing a horizon:terminate, new config is not picked up #213

Closed
denjaland opened this issue Oct 30, 2017 · 25 comments

@denjaland

denjaland commented Oct 30, 2017

Hi,

I was under the impression that running horizon:terminate would make changes to my .env file get picked up. Apparently it doesn't, and I had to manually trigger queue:restart.

Is that expected to be executed separately? Or is this a bug / missing feature in Horizon?

I noticed because I changed my database connection credentials and the queue worker didn't pick them up, even though I rebuilt the config cache. My app was working, but the jobs were failing.

@themsaid
Member

You should have terminated Horizon before changing the .env file to ensure no jobs were still being run.

@denjaland
Author

I agree, but that doesn't answer the question.
No matter how many times I execute horizon:terminate, the .env variables appear to stay cached, even for jobs created well after the update of the .env file.

Only a queue:restart did the job and had the queues pick up my changed configuration.

@themsaid
Member

Well, it works for me: after changing the .env file and terminating Horizon, the daemon starts it again and the new config is picked up correctly. Not sure why it's not working for you; you'll need to track it down a bit.

@denjaland
Author

Hmm, fair enough. I'll try to investigate some more.

@OwenMelbz

@denjaland did you get any further with this? We're having similar issues where it seems like Horizon isn't actually quitting the workers.

@denjaland
Author

Hi @OwenMelbz, unfortunately I haven't had the time to investigate.
Adding the explicit queue:restart currently solves the issue for us, so it's less urgent for us now. I'll see if I can look into it further, but I'm not expecting to get to it in the coming two weeks, unfortunately.

@pr4xx

pr4xx commented Nov 10, 2017

I guess this is related: while tinkering around, it seems that horizon:terminate does not really terminate the running queue workers. I wondered why newly dispatched jobs still got handled even though I did a terminate. After some googling, I landed here. Only after queue:restart did jobs stop being handled. Is this intended behavior?
My system: macOS, Valet, fresh Laravel and Horizon installation.

@barryvdh
Contributor

I also had to add queue:restart to my deploy scripts before changes were getting picked up.
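
For reference, a minimal sketch of that deploy step; the exact ordering here is an assumption, not an official recommendation:

php artisan horizon:terminate   # ask the Horizon master process to exit; Supervisor restarts it
php artisan queue:restart       # workaround: make sure any lingering workers reload the new config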

@tomcoonen

Same here, I'm going to try adding queue:restart again (I replaced it with horizon:terminate when moving to Horizon).

@mdurao

mdurao commented Jan 31, 2018

I was experiencing the same problem and queue:restart did the trick!
Should horizon:terminate do that?!

@iquad

iquad commented Feb 23, 2018

For me, even queue:restart did not work.

I had to restart Supervisor; it seems to be working after the restart.

@chrisvickers

I can second this issue. I now restart Supervisor on each deploy through Envoyer:

cd {{release}}
php artisan horizon:terminate
supervisorctl restart all

This then moves all of Horizon over to the correct new code. I'm not sure if this is a bug, then, since this seems to resolve the issue people are having.

Just for sanity's sake I run php artisan horizon:purge once a day to make sure I'm not leaving orphaned processes running and using up CPU.
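
If you want to automate that daily purge, here's a minimal sketch of a cron entry; the schedule, path, and user are illustrative assumptions, not taken from this thread:

# run daily at 03:00 as the deploy user; adjust the path to your release directory
0 3 * * * cd /home/forge/example.com/current && php artisan horizon:purge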

@lionslair

> supervisorctl restart all

I have similar issues. The first few times things work fine: I get the Slack notification from Forge that the deployment is finished. However, after one or two deployments I only get the webhook from my Envoy script saying the deployment finished, not the one from Forge.

I updated my script like yours to restart Supervisor instead of running queue:restart, and it worked. I got the Forge completed notification.

@task('daemons')
    cd {{ $current_release }}
    php artisan horizon:terminate
    supervisorctl restart all
    {{-- php artisan queue:restart --}}
@endtask

Even manually restarting the daemons or queue was working. Will see how this approach fares. It saves having to do a server reboot after every X deployments.

@sebastiaanluca

Can confirm that executing

php artisan horizon:purge
php artisan horizon:terminate

alone does not make Horizon use updated configuration files. During a deploy to production, we're automatically restarting Horizon using:

sudo supervisorctl stop all
php artisan horizon:purge
php artisan horizon:terminate
sudo supervisorctl start all

But locally or without restarting the Supervisor program, a call to php artisan queue:restart is required.

IMO the documentation about queue commands is getting a bit confusing, especially with the release of Horizon. Perhaps we can optimize that, or even add one single command to prevent this confusion? We'd first need some clarification on the advised commands to execute, though. Should we stop the Supervisor program, purge all jobs, then terminate Horizon … or just purge, terminate, restart … or …? The goal would be to keep accepting new jobs, but halt processing during deployment and resume (with the new codebase and/or database) afterwards. Some additional advice about quickly restarting the queue in a local environment would be welcome too.
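
One possible sequence for that goal, as a rough sketch only (horizon:pause is an existing command, but this ordering is a guess, not advice from the maintainers):

php artisan horizon:pause        # keep accepting new jobs, stop processing them
php artisan down                 # maintenance mode while code and database change
# ... deploy the new release, composer install, migrate ...
php artisan config:cache
php artisan horizon:terminate    # master exits; Supervisor boots Horizon in the new release
php artisan up                   # processing resumes against the new codebase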

@mfn
Contributor

mfn commented Oct 25, 2018

Tested with Laravel 5.7.11 and Horizon 1.4.9, I cannot reproduce this

unless

I cache the config. Then environment variable changes don't get picked up.

But without caching the config, every time I issue a horizon:terminate it terminates, Supervisor starts it again, and the new env and config are picked up. I see that jobs which were already in the queue get executed with the new values.
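
In other words, when the config is cached, the terminate alone isn't enough; a sketch of the order that should work under that assumption:

php artisan config:cache        # rebuild the cached config from the updated .env first
php artisan horizon:terminate   # then terminate, so the restarted master reads the fresh cache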

@driesvints driesvints added the bug label Oct 26, 2018
@peterlupu
Contributor

peterlupu commented Oct 29, 2018

> Can confirm that executing php artisan horizon:purge and php artisan horizon:terminate alone does not make Horizon use updated configuration files. […]

Hi.

After lots of testing, this seems to be rock solid in production; been running like this for almost a year:

// ... artisan down, composer install, migrate etc
php artisan config:cache
php artisan route:cache
php artisan horizon:purge
php artisan horizon:terminate
php artisan queue:restart
php artisan up

@georgeboot

We had the same issue and came across Supervisor/supervisor#152.

Basically, we updated our Forge daemon to use /home/forge/site.com as the path and changed the command to php current/artisan horizon.

@driesvints
Member

@denjaland @sebastiaanluca are you still experiencing this problem? I genuinely believe this is a configuration or deployment step problem and not a problem with Horizon itself.

@georgeboot

@driesvints I'm pretty sure the issue they are having is the same one I had. It checks all the boxes. I don't believe the issue is within Horizon.

@sebastiaanluca

@driesvints For now I would say it's indeed a deployment issue. Dug a little deeper and found the setup that might cause this. Somewhat similar to @georgeboot's issue, I believe.

For instance, in the case of Envoyer and/or zero-downtime deployments, the Supervisor job stays active with a previous release of the project, regardless of any pause, terminate, or purge commands being executed during deployment. A quick fix seems to be, as mentioned above, to not set the Supervisor job path to the current directory, but to the project's root directory, and to navigate to the current directory in the command itself. Another solution is to restart the Supervisor job after setting the release live, so it navigates to the new release and restarts Horizon.

For example our new Horizon Supervisor .conf file:

[program:horizon]

process_name=%(program_name)s
directory=/var/www/project
command=/usr/bin/php current/artisan horizon

user=www-data
group=www-data

autostart=true
autorestart=true

stderr_logfile=/var/log/project-horizon.err.log

Previously:

process_name=%(program_name)s
directory=/var/www/project/current
command=/usr/bin/php artisan horizon

The latter stays active in the old release and doesn't point to any new release, simply because Supervisor navigated into the current dir symlink once, when it still pointed to a previous release before our new deployment.

For completeness: we still pause Horizon before migrating and terminate + purge once set live. Will report any additional findings. Thanks for the support!

@sebastiaanluca

sebastiaanluca commented Jan 29, 2019

Quick follow-up: this is still an issue for me. I can pause, terminate, restart, etc., but the Supervisor job / Horizon won't pick up the new release. The process stays alive and keeps using the old directory.

Am I missing anything here? php artisan horizon:terminate should terminate the process and Supervisor should start a new one, right? If so, it would boot it again and navigate into the new current directory, which it doesn't seem to do. The only thing that works is restarting the process via Supervisor itself using sudo supervisorctl restart project.

@georgeboot Can you confirm your setup works? In hindsight, regardless of whether Supervisor cd's into the current dir first or the command itself does so, if the Supervisor process stays alive across releases, the current dir will always be the one the process started with. So if you start the job with release 5 active and symlinked, no matter how many more you deploy, the active process will keep using that release, as it only starts the process in that dir once.
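
One way to verify this (a sketch, Linux-only): check the working directory of the running Horizon master via /proc.

HORIZON_PID=$(pgrep -f "artisan horizon$" | head -n 1)   # PID of the Horizon master only
ls -l /proc/$HORIZON_PID/cwd                             # shows which release directory it is still sitting in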

For example, what doesn't work:

$ php artisan queue:restart
Broadcasting queue restart signal.
$ php artisan horizon:terminate
Sending TERM Signal To Process: 1281
$ php artisan horizon:pause
Sending USR2 Signal To Process: 1281
$ php -r "posix_kill(1281, SIGTERM);"

Yet the process keeps running, and in the case of pause, it stays active on the dashboard. The last command is what gets executed by the Laravel command (php artisan horizon:terminate). If it doesn't work via the CLI, it surely won't work in code. If anyone can confirm whether that last command works for them, we can rule out whether this is a bug or just something in my environment.

Any of the following commands do work:

$ sudo supervisorctl restart horizon
$ kill -SIGTERM 1281

Edit: it was a simple user/permissions issue 😶 If your Supervisor process runs as user A, but you terminate Horizon as user B, it won't do anything and won't even throw an error or warning. Perhaps we can use http://php.net/manual/en/function.posix-get-last-error.php in the different Horizon commands that call posix_kill()? Would've saved me a few days 😅
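
A quick check that would have surfaced this (a sketch assuming a Linux host; not from this thread): compare the owner of the Horizon master with the user running the terminate command, since a TERM sent by a different unprivileged user fails with EPERM and posix_kill() just returns false.

ps -eo user,pid,args | grep "[a]rtisan horizon"   # who owns the Horizon master?
whoami                                            # who is sending the signal?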

@mfn
Contributor

mfn commented Jan 29, 2019

> it was a simple user/permissions issue 😶

So is it solved now for you?

> php artisan horizon:terminate should terminate the process and Supervisor should start a new one, right?

Yes, absolutely.
This is the first thing to verify; if it doesn't work, the setup itself is wrong in some way.

@driesvints
Member

@sebastiaanluca can you try replacing the foreach loop in the terminate command with the following and copy/paste the output here while recreating the permission error?

        foreach (array_pluck($masters, 'pid') as $processId) {
            $this->info("Sending TERM Signal To Process: {$processId}");

            posix_kill($processId, SIGTERM);

            if ($error = posix_get_last_error()) {
                $this->error("POSIX error for Process: {$processId}: ".posix_strerror($error));
            }
        }

@driesvints
Member

@sebastiaanluca I sent in a PR for this: #485

@driesvints
Member

I'm going to close this, as it's clearly a configuration/deployment setup issue and not an issue with Horizon itself. This can best be explained in a blog post or a tutorial somewhere. If you feel that anything is missing from the docs for deploying, feel free to send in a PR to https://laravel.com/docs/5.7/horizon#running-horizon
