
Avoid allocations in HTTP/1.1 connection pool #102108

Closed

Conversation

@MihaZupan MihaZupan (Member) commented May 11, 2024

#99364 switched the H1 connection pool to using a ConcurrentStack as the store of connections.
While that change significantly improved throughput (particularly under contention), it does mean we're allocating an extra 32 bytes every time we push to the connection stack (lack of DCAS).

This PR tweaks the backing store to add an extra head pointer that we check first before falling back to the ConcurrentStack. Because we never store null connections, we can use null as the sentinel for "unset" and skip the allocation on the happy path.
That makes alternating push/pops (which are quite common) allocation-free.
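
A minimal sketch of the idea (illustrative only, not the PR's actual code; the type and member names are made up):

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Illustrative sketch: one nullable head slot tried before the ConcurrentStack.
// The pool never stores null connections, so null doubles as the "empty" sentinel.
internal sealed class SingleSlotConnectionStack<T> where T : class
{
    private T? _head;                                   // fast-path slot, no allocation
    private readonly ConcurrentStack<T> _stack = new(); // fallback; each Push allocates a node

    public void Push(T item)
    {
        // Park the item in the head slot if it's empty; otherwise pay for a stack push.
        if (Interlocked.CompareExchange(ref _head, item, null) is not null)
        {
            _stack.Push(item);
        }
    }

    public bool TryPop(out T? item)
    {
        // Take whatever is in the head slot first; fall back to the stack if it's empty.
        item = Interlocked.Exchange(ref _head, null);
        return item is not null || _stack.TryPop(out item);
    }
}
```

With this shape, an alternating Push/TryPop pattern is served entirely by the head slot, so the only cost on the steady-state path is a couple of Interlocked operations.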

| Method    | Toolchain | Mean     | Error   | Ratio | Allocated | Alloc Ratio |
|-----------|-----------|----------|---------|-------|-----------|-------------|
| SendAsync | main      | 484.9 ns | 0.82 ns | 1.00  | 584 B     | 1.00        |
| SendAsync | pr        | 473.5 ns | 0.94 ns | 0.98  | 552 B     | 0.95        |

Throughput-wise it's within noise for both YARP and HttpClient crank scenarios.

Under very high contention (5 cores running the in-memory connection pool stress, i.e. the SendAsync benchmark above), results vary greatly between runs, but averaged out, the updated variant does come out on top.

Average over 50 runs each: main=3046k pr=3569k. (blue == main)

[Figure: per-run throughput for main (blue) vs. pr over the 50 runs]

My guess is that this is happening because we're splitting the contention over two memory locations (some push/pops would be served by the new head without going into spin loops on the stack).


While this isn't that much code, we'd likely delete it whenever #31911 gets implemented and we start using it in ConcurrentStack.

@MihaZupan MihaZupan added this to the 9.0.0 milestone May 11, 2024
@MihaZupan MihaZupan requested a review from a team May 11, 2024 05:40
@MihaZupan MihaZupan self-assigned this May 11, 2024
Contributor

Tagging subscribers to this area: @dotnet/ncl
See info in area-owners.md if you want to be subscribed.

/// <summary>
/// A <see cref="ConcurrentStack{T}"/> with an extra head pointer to opportunistically avoid allocations on pushes.
/// In situations where Push/Pop operations frequently occur in pairs (common under steady load), this avoids most allocations.
/// </summary>
Member

Would we be better off having one (or more) such dedicated fields and then falling back to a locked stack?

@MihaZupan MihaZupan (Member, Author) May 11, 2024

A dedicated field in front is way better than just a locked stack, but the locked-stack fallback is a bit worse than the current approach: about -2% on the HttpClient benchmark with a lock, and -1.5% with a SpinLock plus extra padding to separate the lock and the head.

And for the 5-core stress I mentioned in the original description:
main: 3046k
pr: 3569k
lockedStack: 3123k
spinlockedStack: 3083k

Member

> pr: 3569k
> lockedStack: 3123k

That suggests we're still frequently hitting the fallback case. How deep does the stack get?

@MihaZupan MihaZupan (Member, Author)

Looks like the rate of going to the fallback is (roughly):

  • stress: 30%
  • httpclient: 45%
  • yarp: 70%

With two values before the fallback instead of one (see the sketch at the end of this comment), it's:

  • stress: 14%
  • httpclient: 25%
  • yarp: 60%

The YARP numbers aren't great here; there's a much bigger gap between returning a connection and renting it back, so there's often more than one connection in the pool.

This might not be worth the extra logic.
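
For reference, a hypothetical sketch of that two-slot variant (field and type names are made up; this is not code from the PR):

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical two-slot variant: try two head fields before falling back to the stack.
internal sealed class TwoSlotConnectionStack<T> where T : class
{
    private T? _head0;
    private T? _head1;
    private readonly ConcurrentStack<T> _stack = new();

    public void Push(T item)
    {
        // Only pay the ConcurrentStack node allocation if both slots are already occupied.
        if (Interlocked.CompareExchange(ref _head0, item, null) is not null &&
            Interlocked.CompareExchange(ref _head1, item, null) is not null)
        {
            _stack.Push(item);
        }
    }

    public bool TryPop(out T? item)
    {
        // Drain the slots first, then the stack.
        item = Interlocked.Exchange(ref _head0, null) ?? Interlocked.Exchange(ref _head1, null);
        return item is not null || _stack.TryPop(out item);
    }
}
```

Each extra slot adds another CompareExchange to the push path even when it misses, which is the extra logic being weighed against the lower fallback rate.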

@MihaZupan MihaZupan closed this May 19, 2024
@github-actions github-actions bot locked and limited conversation to collaborators Jun 19, 2024