Pool handling of database connectivity #944
Comments
I appreciate the help with this @vitaly-t

As a follow-up, if I use the following options:

```js
const options = {
    error(e, ctx) {
        // If a query timed out at the driver level, close the client so
        // the pool does not hand it out again.
        if ('client' in ctx && e.message === 'Query read timeout') {
            ctx.client.end();
        }
    }
};
```
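For reference, an options object like that is pg-promise's initialization object, passed in when the library is required; a minimal sketch (the connection string is an illustrative assumption):

```js
const pgp = require('pg-promise')(options); // options.error fires for every error event
const db = pgp('postgres://user:pass@localhost:5432/mydb'); // hypothetical connection string
```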
Maybe you can detect connection issues, as done within Robust-Listeners, and re-create the database object when the connection is lost, which will re-create the pool as well.
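A rough sketch of that suggestion; the connection string and the `recreateDatabase` helper are illustrative assumptions, not code from the issue or from the Robust-Listeners example:

```js
const pgp = require('pg-promise')(options);
const cn = 'postgres://user:pass@localhost:5432/mydb'; // hypothetical

let db = pgp(cn);

async function recreateDatabase() {
    // Shut down the old pool so its (possibly dead) idle clients are discarded...
    await db.$pool.end();
    // ...then build a fresh database object, which creates a fresh pool.
    // Note: pg-promise may warn about duplicate database objects for the
    // same connection; this only illustrates the idea.
    db = pgp(cn);
}
```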
@vitaly-t I'm not sure if … Is there any way for the underlying …?
I'm not really sure, tbh. Query timeouts were only added recently in the underlying driver; it would require an investigation of how they really work underneath, plus some heavy testing. I'm not even sure whether the timeouts are enforced on the driver level or on the server level, and that would have a significant effect on the subject.
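For context, the underlying node-postgres driver exposes both flavors as separate settings; a sketch (connection strings and values are illustrative):

```js
const { Pool } = require('pg');

// Driver-level: node-postgres itself stops waiting and rejects the query
// with Error('Query read timeout').
const driverLevel = new Pool({
    connectionString: 'postgres://user:pass@localhost:5432/mydb', // hypothetical
    query_timeout: 3000, // ms, enforced client-side by the driver
});

// Server-level: PostgreSQL cancels the statement itself and the driver
// surfaces the server's error instead.
const serverLevel = new Pool({
    connectionString: 'postgres://user:pass@localhost:5432/mydb',
    statement_timeout: 3000, // ms, applied as the session's statement_timeout
});
```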
I'll dig into the underlying driver more tomorrow. Can you think of any negative impact of calling `ctx.client.end()` there? Just trying to determine if we can sort of hack a solution for now, rather than having to consider an alternative.
Expected behavior
I am trying to make sure we have a good grasp of how pg-promise pooling behaves under various network outages. I have a local setup with a pgp pool configured with a connection timeout, a query timeout, an idle timeout, and an allowance of up to 10 connections in the pool.
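Something like the following configuration; the exact values here are illustrative assumptions rather than the reporter's actual settings:

```js
const pgp = require('pg-promise')();

// pg-promise forwards these options to the underlying node-postgres pool.
const db = pgp({
    host: 'localhost',
    port: 5432,
    database: 'mydb',              // hypothetical database
    user: 'user',
    password: 'pass',
    max: 10,                       // up to 10 clients in the pool
    connectionTimeoutMillis: 2000, // connection timeout for new clients
    idleTimeoutMillis: 30000,      // discard clients idle for 30s
    query_timeout: 5000,           // driver-level query timeout
});
```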
I'm running into some interesting behavior if I kill the database after the pool has connected at least once. Subsequent queries reuse an idle connection, which then fails with a query timeout and is put back into the pool, rather than failing with an actual connection timeout and being removed from the pool altogether. This turns into a loop where the same client keeps getting reused, so long as it never idles out, and it will never hit the connect timeout that would have had the client thrown away.
Is there any way, such as within an error handler, to indicate back to the pool that the client should not be reused and should instead be thrown away? Ultimately, if a database suddenly became inaccessible, I would expect each idle connection to fail, and new connections to fail with a connect error, leading to alerting in our telemetry stack. Instead, we just get a ton of query timeouts and the same clients keep getting reused.
Actual behavior
Idle clients in the pool continue to be reused when the database suddenly becomes inaccessible, and they fail with a query timeout instead of the connection timeout that would correctly indicate the database cannot be reached.
Steps to reproduce
In the above example, if your clients never fully release via the idle timeout, you will never hit the connection timeout error.
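The issue's original repro example is not reproduced above; the following is a minimal sketch of the loop being described, under the assumed configuration from earlier (all names and values illustrative):

```js
const pgp = require('pg-promise')();

const db = pgp({
    host: 'localhost',
    database: 'mydb', // hypothetical
    user: 'user',
    password: 'pass',
    max: 10,
    connectionTimeoutMillis: 2000,
    idleTimeoutMillis: 30000,
    query_timeout: 5000,
});

// Query once per second, then kill the database externally. Because the
// 1s interval is well under the 30s idle timeout, the same client keeps
// being reused and every query fails with 'Query read timeout'; the
// connection timeout is never reached.
setInterval(() => {
    db.one('SELECT 1 AS ok')
        .then(row => console.log('ok:', row.ok))
        .catch(err => console.error(err.message));
}, 1000);
```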
Environment