Feature request: manually release semaphore permits #22

Closed

abonander opened this issue Apr 2, 2022 · 3 comments · Fixed by #44

Comments

abonander commented Apr 2, 2022

I'm evaluating this crate as a replacement for futures-intrusive in SQLx for our connection pool implementation, as the maintenance status of the former is currently unknown: launchbadge/sqlx#1668 (comment)

The pool internally uses a semaphore where each permit represents the privilege for a task to acquire a connection. Once a task acquires a permit, it first checks the idle queue to see if an idle connection is available; if the idle queue is empty, it may open a new connection. The pool starts with a set maximum number of permits, and permits are released when a connection is returned to the idle queue or is closed for whatever reason.

The pool's operation currently requires the ability to manually release permits back to the semaphore, for two main reasons:

  • to sidestep ownership issues with SemaphoreGuard as neither borrowing the semaphore nor keeping it by itself in an Arc is an optimal solution;
    • PoolConnection doesn't want to contain a SemaphoreGuard that borrows into the pool shared state as that's a less ergonomic and flexible API than PoolConnection being 'static.
    • keeping the semaphore in an Arc by itself to satisfy the SemaphoreGuardArc API is annoyingly redundant as the pool already keeps its shared state in an Arc.
    • currently, when a SemaphoreGuard is acquired it's mem::forget()'d to "leak" the permit, then a separate RAII guard that holds an Arc to the pool's shared state calls a method to release the permit on drop (see the sketch after this list).
  • to signal all tasks waiting to acquire a connection that the pool is closed, usize::MAX / 2 permits are released (the pool maintains the invariant that its capacity plus this value must not overflow).
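
For illustration, here's a minimal sketch of the current workaround using futures_intrusive::sync::Semaphore (what SQLx uses today); PoolInner, PermitGuard, acquire_permit, and mark_closed are made-up names for this example, not the actual SQLx types:

```rust
use std::mem;
use std::sync::Arc;

use futures_intrusive::sync::Semaphore;

struct PoolInner {
    semaphore: Semaphore,
    // ... idle connection queue, config, etc.
}

/// RAII guard that holds an `Arc` to the pool's shared state, so it can be
/// `'static` instead of borrowing the semaphore like `SemaphoreGuard` would.
struct PermitGuard {
    pool: Arc<PoolInner>,
}

impl PoolInner {
    async fn acquire_permit(self: &Arc<Self>) -> PermitGuard {
        // Acquire one permit, then forget the releaser so the permit stays
        // "leaked" until the `PermitGuard` below is dropped.
        let releaser = self.semaphore.acquire(1).await;
        mem::forget(releaser);
        PermitGuard { pool: Arc::clone(self) }
    }

    fn mark_closed(&self) {
        // Wake every task waiting on the semaphore; the pool keeps
        // capacity + this value from overflowing `usize`.
        self.semaphore.release(usize::MAX / 2);
    }
}

impl Drop for PermitGuard {
    fn drop(&mut self) {
        // Manually release the leaked permit back to the semaphore.
        self.pool.semaphore.release(1);
    }
}
```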

I'm also working on a separate close-notification feature for launchbadge/sqlx#1764, but I think that should be a separate waitlist, as it's likely to consist of long-lived tasks that wait from spawn until the pool is closed (usually the full lifetime of the process), and I don't want those clogging up the normal waitlist. If it were possible to make them always wait at the back of the list, that would save some work, but I don't think that's important.

As for bikeshedding, futures_intrusive::sync::GenericSemaphore simply calls the function release(), whereas tokio::sync::Semaphore calls it add_permits().

Anecdotally, I think release is better, as that's what comes to mind for this operation. I initially overlooked add_permits in the Tokio API because I was scanning the functions for the keyword "release".

Maybe manual_release() to differentiate it from the "automatic release" function of SemaphoreGuard?

zeenix (Member) commented Apr 3, 2022

whereas tokio::sync::Semaphore calls it add_permits():

That seems to be doing something different: allowing a greater number of concurrent holders. I'm all for both APIs though.

abonander (Author) commented Apr 3, 2022

They both do the same thing: increase the current number of available permits. You can use it either to create additional permits or to manually release existing permits that were previously leaked by mem::forget()ing the guard. I want it for the latter.
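
A quick self-contained sketch showing both (these are the two crates' actual APIs; the permit counts are just for demonstration):

```rust
use std::mem;

#[tokio::main]
async fn main() {
    // tokio: `add_permits` both restores leaked permits and mints new ones.
    let sem = tokio::sync::Semaphore::new(1);
    let permit = sem.acquire().await.unwrap();
    permit.forget(); // "leak" the permit; none available now
    sem.add_permits(1); // manually release it; one available again

    // futures-intrusive: the same operation is spelled `release`.
    let sem = futures_intrusive::sync::Semaphore::new(true, 1);
    let releaser = sem.acquire(1).await;
    mem::forget(releaser); // "leak" the permit
    sem.release(1); // manually release it
}
```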

Jules-Bertholet (Collaborator) commented

This was added in #44.

notgull closed this as completed May 27, 2023