refactor(Postgres Node): Backport connection pooling to postgres v1 #12484
Conversation
```ts
const db = pgp(config);
await db.connect();
```
This would have left the connection hanging. It needs to be ended:
https://vitaly-t.github.io/pg-promise/Database.html#connect
The connection must be released in the end of the chain by calling done() on the connection object.
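For illustration, a minimal sketch of the pattern being asked for, assuming the standard pg-promise API (`Database.connect()` resolving to a connection object that exposes `done()`); the config values are placeholders, not the node's actual credential handling:

```ts
import pgPromise from 'pg-promise';

const pgp = pgPromise();
// Placeholder connection details for illustration only.
const config = { host: 'localhost', port: 5432, database: 'postgres', user: 'postgres' };
const db = pgp(config);

async function testConnection() {
	const connection = await db.connect();
	try {
		// A trivial query proves the credentials work.
		await connection.one('SELECT 1 AS ok');
	} finally {
		// Release the connection back to the pool; without this it would be left hanging.
		connection.done();
	}
}
```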
```
@@ -41,8 +39,8 @@ export async function postgresConnectionTest(
			message,
		};
	} finally {
		if (pgpClientCreated) {
			pgpClientCreated.end();
```
This would shut down all pools in the process:
https://vitaly-t.github.io/pg-promise/module-pg-promise.html#~end
Shuts down all connection pools created in the process, so it can terminate without delay. It is available as pgp.end, after initializing the library.
Doing this would render any pool for any credential unusable until n8n is restarted or the pool is destroyed by the connection pool manager, and every execution would throw this:
Connection pool of the database object has been destroyed.
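To illustrate the blast radius (a sketch with made-up credentials, not the actual node code): `pgp.end()` is library-wide, so calling it in one credential test destroys the pools behind every other credential too.

```ts
import pgPromise from 'pg-promise';

const pgp = pgPromise();

// Two independent credentials, each backed by its own connection pool (configs are made up).
const dbA = pgp({ host: 'db-a.example.com', database: 'app', user: 'n8n' });
const dbB = pgp({ host: 'db-b.example.com', database: 'analytics', user: 'n8n' });

async function demo() {
	await dbA.one('SELECT 1 AS ok'); // works
	await dbB.one('SELECT 1 AS ok'); // works

	// Shutting down the library (as the old credential test did) destroys BOTH pools:
	pgp.end();

	// Any later execution now rejects with:
	// "Connection pool of the database object has been destroyed."
	await dbA.one('SELECT 1 AS ok');
}

void demo();
```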
n8n Run #8772

Run Properties:
Project: n8n
Branch Review: backport-connection-pooling-to-postgres-v1
Run status: Passed #8772
Run duration: 04m 55s
Commit: 9ce7027c46: 🌳 🖥️ browsers:node18.12.0-chrome107 🤖 despairblue 🗃️ e2e/*
Committer: Danny Martini
View all properties for this run ↗︎

Test results:
Failures: 0
Flaky: 2
Pending: 0
Skipped: 0
Passing: 489
View all changes introduced in this branch ↗︎
Force-pushed from 78ad9be to a98c60d
tested locally and it works 🥳 lgtm!
Any news? This has still not been resolved in release 1.75.0.
Force-pushed from a98c60d to 9ce7027
✅ All Cypress E2E specs passed
@HermesMacedo This will be part of 1.75.1, which is scheduled to be released today.
Many people are waiting for the fix to be deployed 🥲 Thanks for that fix @despairblue 🙌🏻
Got released with 1.75.1.
@janober The problem still persists for me in version 1.75.1. It responds on the first interaction, but the error occurs on subsequent interactions.
Same here on both versions. Cloud hosted. All subsequent interactions fail.
Please let's move the conversation to issue #12517. That way we don't have to keep posting the same message in multiple places.
Summary
This uses the pool manager for postgres v1. In addition, it fixes the credential test for postgres, which previously would shut down the shared pool and lead to
Connection pool of the database object has been destroyed.
on any subsequent execution. Taken from the pg-promise docs:
Thus it's not necessary to release connections manually. They are managed by pg-promise.
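A small sketch of what that means in practice, assuming the usual pg-promise query methods (connection values are placeholders): each query call borrows a pooled connection and returns it automatically, so no explicit `connect()`/`done()` pair is needed.

```ts
import pgPromise from 'pg-promise';

const pgp = pgPromise();
// Placeholder credentials for illustration only.
const db = pgp({ host: 'localhost', port: 5432, database: 'postgres', user: 'postgres' });

// db.any() acquires a connection from the pool, runs the query,
// and releases the connection again once the promise settles.
db.any('SELECT now() AS ts')
	.then((rows) => console.log(rows))
	.catch(console.error);
```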
You can verify this using these queries and constructing a workflow that executes them once a second:
Lists all open connections per db, grouped by state.
Long-running query to simulate load.
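The queries themselves are not reproduced above, so here is an illustrative sketch of what they could look like when driven through pg-promise once a second (the SQL and connection values are my own placeholders, not necessarily the exact queries used):

```ts
import pgPromise from 'pg-promise';

const pgp = pgPromise();
const db = pgp({ host: 'localhost', port: 5432, database: 'postgres', user: 'postgres' }); // placeholder

// Lists all open connections per database, grouped by state.
const connectionsByState = `
	SELECT datname, state, count(*) AS connections
	FROM pg_stat_activity
	GROUP BY datname, state
	ORDER BY datname, state;
`;

// Long-running query to simulate load (each execution sleeps for 30 seconds).
const simulateLoad = 'SELECT pg_sleep(30);';

// Poll the connection overview once a second, similar to a workflow on a 1s trigger.
setInterval(() => {
	db.any(connectionsByState).then(console.table).catch(console.error);
}, 1000);

// Fire a batch of slow queries to occupy pool connections.
for (let i = 0; i < 20; i++) {
	db.any(simulateLoad).catch(console.error);
}
```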
By default the pool has a size of 10. If we need more, the next execution hangs until a connection is free.
That limit is way too low.
I set it to 10_000 to test what happens when I run out of the 100 connections my local postgres allows.
Also, shortly after deactivating the workflow, the pool was cleaned up and all 100 connections were available again.
I would like to set it to unlimited for now, but that's not possible:
brianc/node-postgres#1977
For that reason, I made it configurable in the credential instead.
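For context, a hedged sketch of how such a setting is typically wired up: pg-promise passes the connection config through to node-postgres, whose `max` option caps the pool size (default 10). The `maxConnections` field below is an illustrative stand-in for the actual credential field, not the node's real implementation:

```ts
import pgPromise from 'pg-promise';

const pgp = pgPromise();

// Illustrative credential values; `maxConnections` stands in for whatever
// the credential field is called in the actual node.
const credentials = {
	host: 'localhost',
	port: 5432,
	database: 'postgres',
	user: 'postgres',
	password: 'secret',
	maxConnections: 100,
};

const db = pgp({
	host: credentials.host,
	port: credentials.port,
	database: credentials.database,
	user: credentials.user,
	password: credentials.password,
	// node-postgres pool size; defaults to 10 when omitted.
	max: credentials.maxConnections,
});
```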
Related Linear tickets, Github issues, and Community forum posts
Fixes #12517
NODE-2240
Review / Merge checklist
Docs updated or follow-up ticket created.
release/backport (if the PR is an urgent fix that needs to be backported)