Getting Postgres Errors on killing development server after setting up Goodjob #692
Hmm. Looks like the Postgres database itself doesn't have any spare connections to give out. Could you try working through this SO answer? https://stackoverflow.com/a/13595817 I think it could either be:

I think if you can help me identify what is using all of the connections, I can help identify an improvement (it's also not good that your terminal is getting locked up).
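For anyone trying to track that down, here is a minimal sketch, assuming a `rails console` connected to the same Postgres database; the queries are plain `pg_stat_activity`, nothing GoodJob-specific:

```ruby
# Compare the connections in use against the server-side limit, then list who holds them.
conn = ActiveRecord::Base.connection

puts "max_connections: #{conn.select_value('SHOW max_connections')}"
puts "in use:          #{conn.select_value('SELECT count(*) FROM pg_stat_activity')}"

conn.select_all(<<~SQL).rows.each { |row| puts row.inspect }
  SELECT pid, application_name, state, left(query, 60) AS query
  FROM pg_stat_activity
  ORDER BY application_name, state
SQL
```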
I think the issue I have been having might be related. I am hosting the app and database on Heroku but sometimes run my local server (still against the Heroku DB, which has a 20-connection limit). When I try to shut down the local server, it gets stuck pretty much every time. For example, I drop all Postgres connections, run the local server, and shut it down as soon as it is up. Here is what I get:

Then I have to kill the terminal. It seems that the shutdown process exhausts all the connections, so when I start it again I sometimes get the too-many-connections error right from the start. (Same issue with a locally run Postgres.)
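As an aside, "dropping all Postgres connections" against a local database can be done with something like the sketch below (an illustration, not the exact command used above):

```ruby
# Terminate every other session on the current database, keeping our own connection.
# (On Heroku, `heroku pg:killall` does roughly the same from the CLI.)
ActiveRecord::Base.connection.execute(<<~SQL)
  SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
  WHERE datname = current_database()
    AND pid <> pg_backend_pid()
SQL
```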
@alokhmutov ooh, thank you! That could be related. Given that you're able to reproduce this in development, would you be able to try running with a new branch of GoodJob I just pushed? It should print a much better logger message that I'd love for you to share: `gem "good_job", github: "bensheldon/good_job", branch: "error_classes"`
@bensheldon oh, thank you! It really makes it much easier to debug... the problem is with Sentry! :) It has adapters for Delayed Job and the like, but transaction reporting fails because of a nil context. (I was wondering why GoodJob reported an error with get_transaction, but I couldn't find that method in GoodJob...) Apparently this error is what causes a loop, and in the process it might be generating new connections instead of closing them(?). I didn't realise Sentry could be an issue when I moved from DJ to GJ. If I turn Sentry off, it works. So thank you, and sorry if my issue might have been better addressed elsewhere!
@alokhmutov no worries! We're all learning here 😄 Could you share the previous code you were using? I'm also a Sentry user so I'm curious what is causing the trouble.
@bensheldon hopefully I'm not crowding the wrong issue. It turns out that transaction reporting context fails to be set only in async mode; external and inline work fine. Does async with Sentry work fine in your setup? I am wondering, then, if I missed something important at setup.
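For context, async, external, and inline are GoodJob's execution modes. A minimal sketch of how the mode is usually selected in the Rails config (an illustration, not the commenter's actual setup):

```ruby
# config/environments/development.rb -- illustrative sketch
Rails.application.configure do
  config.active_job.queue_adapter = :good_job

  # :async    runs jobs in-process, alongside the web server
  # :external expects a separate `good_job start` process
  # :inline   executes jobs immediately as they are enqueued
  config.good_job.execution_mode = :async
end
```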
@alokhmutov oh! I'm still using the older Raven reporter in my project. Let me try the newer gem and see if that helps me reproduce the problem. And no worries, this is a good issue for that.
hmm. Sentry appears to be working correctly for me. This is the configuration I'm using:

```ruby
config.good_job.on_thread_error = lambda do |exception|
  Sentry.capture_exception(exception)
end
```
I'm using the same config for exceptions, and that part works. But the culprit is not exception tracing but transaction monitoring: if that is enabled in the Sentry config, it gives me the error above in async mode when shutting down the server.
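For reference, transaction monitoring in the Ruby Sentry SDK is enabled by setting a traces sample rate; a minimal sketch (an illustration, not necessarily the exact snippet referred to above):

```ruby
# config/initializers/sentry.rb -- illustrative sketch
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]   # assumed: DSN supplied via an environment variable
  config.traces_sample_rate = 1.0  # any value > 0 turns on transaction/performance monitoring
end
```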
@alokhmutov ok, I see that now. I'm only seeing those errors when I don't have the Puma configuration from https://github.com/bensheldon/good_job#execute-jobs-async--in-process:

```ruby
# config/puma.rb
before_fork do
  GoodJob.shutdown
end

on_worker_boot do
  GoodJob.restart
end

on_worker_shutdown do
  GoodJob.shutdown
end

MAIN_PID = Process.pid
at_exit do
  GoodJob.shutdown if Process.pid == MAIN_PID
end
```

When I do have that config, I see this, which I think is correct:

```
^C- Gracefully stopping, waiting for requests to finish
=== puma shutdown: 2022-08-14 08:15:03 -0700 ===
- Goodbye!
Exiting
(1.2ms) UNLISTEN *
[GoodJob] Notifier unsubscribed with UNLISTEN
GoodJob::Lockable Owns Advisory Lock? (2.1ms) SELECT 1 AS one FROM pg_locks WHERE pg_locks.locktype = 'advisory' AND pg_locks.objsubid = 1 AND pg_locks.classid = ('x' || substr(md5($1::text), 1, 16))::bit(32)::int AND pg_locks.objid = (('x' || substr(md5($2::text), 1, 16))::bit(64) << 32)::bit(32)::int AND pg_locks.pid = pg_backend_pid() [["key", "good_job_processes-41dbb70a-32f8-4f51-894f-7f5c628e7e58"], ["key", "good_job_processes-41dbb70a-32f8-4f51-894f-7f5c628e7e58"]]
TRANSACTION (0.2ms) BEGIN
GoodJob::Process Destroy (0.7ms) DELETE FROM "good_job_processes" WHERE "good_job_processes"."id" = $1 [["id", "41dbb70a-32f8-4f51-894f-7f5c628e7e58"]]
TRANSACTION (1.5ms) COMMIT
GoodJob::Lockable Advisory Unlock (0.7ms) SELECT pg_advisory_unlock(('x'||substr(md5($1::text), 1, 16))::bit(64)::bigint) AS unlocked [["key", "good_job_processes-41dbb70a-32f8-4f51-894f-7f5c628e7e58"]]
[GoodJob] [38909] GoodJob shutting down scheduler...
[GoodJob] [38909] GoodJob scheduler is shut down.
Shutting down background worker
Killing session flusher
```
Oh boy, that's it, thank you so much, I did somehow miss it!
@alokhmutov Fantastic! 🎉 @udit99 I'm going to close this issue. Please comment if you're still having problems.
Every second time I kill my local dev server, I'm getting the following errors and I end up having to kill the terminal:
```
[GoodJob] Notifier errored: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: sorry, too many clients already
```
Any idea what's causing this?
Thanks