
feat(common): added postgress async pool wrapper #1631

Closed

Conversation

Contributor

@LNSD LNSD commented Mar 30, 2023

Warning
This PR is a work in progress

Context

In this comment: #1590 (comment)
And in this issue: #1604

Description

To overcome the blocking nature of the standard library's db_postgres, I mimicked treeform's pg package implementation and wrapped the connections in an asynchronous connection pool.
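To illustrate the idea, here is a hedged usage sketch only: the constructor and query names below (`newPgAsyncPool`, `pgQuery`) are illustrative assumptions, not the PR's actual API.

```nim
# Sketch, not the PR's API: newPgAsyncPool and pgQuery are assumed names.
import chronos

proc demo() {.async.} =
  # Acquire a pool of connections instead of a single blocking
  # db_postgres connection; queries await instead of blocking.
  let pool = newPgAsyncPool("host=localhost dbname=test user=postgres")
  let rows = await pool.pgQuery("SELECT 1;")
  echo rows
```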

@LNSD LNSD self-assigned this Mar 30, 2023
Contributor Author

LNSD commented Mar 30, 2023

I started porting treeform's pg library from asyncdispatch to nim-chronos, but things got out of hand...

@arnetheduck @jm-clius @cammellos Can I get a first review from you?

## Database connection pool

type PgAsyncPoolState {.pure.} = enum
Live,
Contributor


Maybe use Closed as the default state (first element of the enum) to avoid bugs.
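For context, in Nim the first enum member is the type's zero/default value, so putting the closed state first means a zero-initialized pool reads as closed rather than live. A sketch (only `Live` appears in the diff; `Closing` is an assumed extra member):

```nim
type PgAsyncPoolState {.pure.} = enum
  Closed,   # first member == default(PgAsyncPoolState),
            # so an uninitialized pool is Closed, not Live
  Live,
  Closing   # assumed member, not shown in the diff
```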


# wait for a free connection without blocking the async runtime
while pool.isBusy():
await sleepAsync(0.milliseconds)
Contributor


5 ms is better, I think

Contributor


Actually, you could probably do a poll-free version by using a queue that you check in releaseConn (or maybe an AsyncQueue)

Contributor Author


AsyncQueue? 👀 👀 👀 👀 👀 👀 👀 👀
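The AsyncQueue idea suggested above could be sketched as follows; the type and proc names (`PgAsyncPool`, `getConn`, `releaseConn`) follow the PR, but the exact fields are assumptions:

```nim
# Poll-free pool sketch using chronos' AsyncQueue instead of the
# `while pool.isBusy(): await sleepAsync(...)` loop.
import chronos
import db_postgres   # for DbConn

type PgAsyncPool = ref object
  freeConns: AsyncQueue[DbConn]   # connections ready for reuse

proc getConn(pool: PgAsyncPool): Future[DbConn] {.async.} =
  # Suspends until a connection is released; no sleepAsync polling.
  return await pool.freeConns.popFirst()

proc releaseConn(pool: PgAsyncPool, conn: DbConn) =
  # Wakes up one waiter in getConn, if any.
  pool.freeConns.addLastNoWait(conn)
```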

poolOptions: PgAsyncPoolOptions

state: PgAsyncPoolState
conns: seq[DbConn]
Contributor


Personally I would have created a new type to hold the DbConn & bool instead of two seq

Contributor Author


It's another possibility, yes 🙂 Some parts come from treeform's implementation. This is one of them.
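The reviewer's suggestion would look roughly like this; the field names are assumptions, not from the diff:

```nim
# Sketch: one seq of a small record type instead of the parallel
# seq[DbConn] / seq[bool], so a connection and its busy flag can
# never get out of sync by index.
import db_postgres

type
  PgDbConn = object
    dbConn: DbConn
    busy: bool

  PgAsyncPool = ref object
    conns: seq[PgDbConn]   # replaces the two parallel seqs
```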

$pqErrorMessage(db)


proc open*(options: PgConnOptions): common.PgResult[DbConn] =
Contributor


This proc is not actually async, right?

Contributor Author


No... It is sync. Just a convenience wrapper that takes the Pg options and returns a result.

Contributor

@Menduist Menduist Mar 31, 2023


So personally I would just have a numConnections instead of min/max; I don't want my program to randomly do blocking IO when it reaches an arbitrary limit. Since the connections above min are never closed, you would probably end up with maxConnections anyway.
Though we would also have to check what happens if, for instance, the server restarts (or worse, just shuts down).

Contributor Author


> Though we would also have to check what happens if, for instance, the server restarts (or worse, just shuts down)

Absolutely! There are a thousand corner cases that I am not covering here. But for time's sake, I am trying to have a little MVP covering the "happy-path" cases, and we can iteratively improve the implementation.

return err(db.error)

if pqisBusy(db) == 1:
await sleepAsync(0.milliseconds) # Do not block the async runtime
Contributor


That's good for a PoC, but ideally we would register the FD in chronos and wait for data to be available

Contributor Author


Fully agree 😁

If we can find a Sans-IO version of the Postgres protocol we can try that. For now... I think that, if we don't want to reimplement the Postgres protocol, we should stick to this approach.

Contributor


You don't have to reimplement postgres

https://nim-lang.org/docs/postgres.html#pqsocket%2CPPGconn
This proc will give you the FD of the underlying postgres connection

You would have to do something like:

while pqIsBusy(db) == 1:
  await waitForRead(AsyncSocket(pqsocket(db)))
  pqconsumeInput(db)

Unfortunately, chronos doesn't have a waitForRead, so that would need to be added.
Again, fine for a first version, but something to improve on later IMO.

More info on the async usage of pg here: https://www.postgresql.org/docs/current/libpq-async.html
And an example in C: https://github.com/Menduist/sample_coroutine_api/blob/master/src/db/re_pgsql.c (callback based)

@Ivansete-status
Collaborator

I am closing this PR as the issue has been addressed by the follow-up PR below, which is fundamentally inspired by the one being closed now. I submitted a separate PR in order to preserve the original changes:

#1779

Thanks so much for that @LNSD !
