Introduce full FIFO pool implementation #1869


Closed — wants to merge 1 commit

Conversation

jasonjoo2010

Key Points

  • Using interface instead of specific pointer type
  • Introduce FIFO connection pool implementation
  • Add necessary tests

Notes

In this PR the PoolFIFO bool is replaced with a more extensible PoolType. This just follows our previous implementation and I don't insist on it, but I think a type makes more sense than a boolean. Of course, we should also consider backward compatibility.
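One way the PoolType idea could look in practice is sketched below. This is illustrative only: the names `PoolType`, `PoolTypeLIFO`, `PoolTypeFIFO`, and `FromLegacyBool` are assumptions, not the actual go-redis API, and the backward-compatibility shim is just one possible approach.

```go
package main

import "fmt"

// PoolType is a hypothetical replacement for the PoolFIFO bool,
// leaving room for more pool strategies later.
type PoolType int

const (
	PoolTypeLIFO PoolType = iota // default stack (LIFO) pool
	PoolTypeFIFO                 // queue (FIFO) pool
)

// FromLegacyBool preserves backward compatibility with the old
// PoolFIFO boolean option.
func FromLegacyBool(fifo bool) PoolType {
	if fifo {
		return PoolTypeFIFO
	}
	return PoolTypeLIFO
}

func main() {
	fmt.Println(FromLegacyBool(true) == PoolTypeFIFO) // true
}
```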

Detail Design

Please refer to Design on the FIFO connection pool (#1868)

@vmihailenco
Collaborator

Thanks for sharing this 👍

As I understand there are 3 improvements:

  1. idleCounter to collect some stats about idle conns
  2. MaxConnAge randomization
  3. linked list instead of slice

Am I missing something?


Those are nice features, but TBH none of them look critical enough to duplicate so much code which is complex enough even now.

I think we can implement 2) by adding some random time to Conn.createdAt so we don't have a situation when all conns are closed at the same moment. That should be a pretty small change which we can do in a separate PR.

And I think we can completely ignore 3) for now - that is a non-critical performance improvement and we can do / not do it later.

That leaves 1) which as I understand collects some stats about the number of idle conns in the pool for the last X seconds. Then it uses it to close some connections.

That is an interesting way to do it even though such connections are not necessarily idle - they just happen to be in the pool and not used for some short period of time. I guess we could achieve similar results by just randomly closing a connection once a minute. If it is unused - good, eventually we will close all idle conns. If not, then we will open a new connection. What do you think?

- Using interface instead of specific pointer type
- Introduce FIFO connection pool implementation
- Add necessary tests

Signed-off-by: Jason Joo <hblzxsj@gmail.com>
@jasonjoo2010
Author

Thanks for your reply, Vladimir

> .........
> 3. linked list instead of slice
>
> Am I missing something?

I think you have covered all the main points.

> I think we can implement 2) by adding some random time to Conn.createdAt so we don't have a situation when all conns are closed at the same moment. That should be a pretty small change which we can do in a separate PR.

It should work. It's an alternative way to achieve that, and it's definitely a standalone optimisation that can go in as a small PR.

> And I think we can completely ignore 3) for now - that is a non-critical performance improvement and we can do / not do it later.

If we are talking about the FIFO pool, the overhead is significant: getting/putting a connection are the most frequent operations, and copying the slice every time makes them O(n), compared to O(1) in the stack pool.
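The O(1) alternative to slice copying doesn't have to be a linked list; a ring buffer over a slice also gives constant-time get/put for a FIFO. A minimal sketch (illustrative only, with `int` standing in for `*pool.Conn`, and capacity checks omitted for brevity):

```go
package main

import "fmt"

// ring is a fixed-capacity FIFO queue: Put writes at the tail and
// Get reads at the head, both O(1), with no copying of the backing
// slice. Capacity/empty checks are omitted to keep the sketch short.
type ring struct {
	buf        []int
	head, size int
}

func newRing(capacity int) *ring { return &ring{buf: make([]int, capacity)} }

func (r *ring) Put(v int) {
	r.buf[(r.head+r.size)%len(r.buf)] = v // O(1): write at tail
	r.size++
}

func (r *ring) Get() int {
	v := r.buf[r.head] // O(1): read at head, no element shifting
	r.head = (r.head + 1) % len(r.buf)
	r.size--
	return v
}

func main() {
	q := newRing(4)
	q.Put(1)
	q.Put(2)
	fmt.Println(q.Get(), q.Get()) // FIFO order: 1 2
}
```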

> That leaves 1) which as I understand collects some stats about the number of idle conns in the pool for the last X seconds. Then it uses it to close some connections.
>
> That is an interesting way to do it even though such connections are not necessarily idle - they just happen to be in the pool and not used for some short period of time. I guess we could achieve similar results by just randomly closing a connection once a minute. If it is unused - good, eventually we will close all idle conns. If not, then we will open a new connection. What do you think?

There is a small gap here. The moving-window counter works only for the FIFO pool, not the stack pool, because in a FIFO pool no connection ever actually idles unless there is no traffic at all, which is a special case.
But we could do it another way, for example by sampling the length of the idle list to find out whether we have too many idle connections.
Closing randomly chosen connections is different from this: chance decides. Eventually the excess connections would all get closed, but it's not necessary if we set MaxConnAge, because then every connection gets the opportunity to be closed.

What do you think of it?

@vmihailenco
Collaborator

vmihailenco commented Aug 27, 2021

> But it's not necessary if we set MaxConnAge because all connections have the opportunity to get closed.

It is an interesting idea and I guess it should work fine if we add the MaxConnAge improvement you suggested.

> Because get/put a connection is the most frequent operations, and by copying all the time it's a O(n) time complexity compared to O(1) in stack pool.

True, but only if we use an efficient linked implementation - no interface{} boxing and no allocations. Otherwise a linked list can be slower than a slice for small pools.

@jasonjoo2010
Author

> True, but only if we use an efficient linked implementation - no interface{} and no allocations. Otherwise linked list can be slower for small pools.

Oh really? Do you mean the element structure used by the linked list, or the conversion to and from interface{}?
Yes, but both could be improved. For interface{}, generics are coming in Go 1.18. Or, to address both at once, using our own linked structure bound as a wrapper around net.Conn could solve everything, I think.
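The "own linked structure" idea is essentially an intrusive linked list: the next pointer lives inside the connection wrapper itself, so pushing and popping needs no interface{} boxing and no per-node allocation. A minimal sketch, where `Conn` and its fields are illustrative stand-ins for the pool's connection type:

```go
package main

import "fmt"

// Conn is a stand-in for the pool's connection wrapper. The next
// pointer embedded here is the "intrusive" part: the list node is
// the connection itself, so no separate node allocation is needed.
type Conn struct {
	id   int
	next *Conn
}

// connList is a FIFO over intrusive links: O(1) pushBack/popFront,
// no interface{} conversions.
type connList struct {
	head, tail *Conn
}

func (l *connList) pushBack(c *Conn) {
	c.next = nil
	if l.tail == nil {
		l.head, l.tail = c, c
		return
	}
	l.tail.next = c
	l.tail = c
}

func (l *connList) popFront() *Conn {
	c := l.head
	if c == nil {
		return nil
	}
	l.head = c.next
	if l.head == nil {
		l.tail = nil
	}
	c.next = nil
	return c
}

func main() {
	var l connList
	l.pushBack(&Conn{id: 1})
	l.pushBack(&Conn{id: 2})
	fmt.Println(l.popFront().id, l.popFront().id) // 1 2
}
```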

@vmihailenco
Collaborator

> Or for both of the two, using our own linked structure and bind it as a wrapper of net.Conn could solve everything, I think.

Yep, and that is even more code which is why I guess it rarely happens in practice and people continue to use slices...

@jasonjoo2010
Author

> Yep, and that is even more code which is why I guess it rarely happens in practice and people continue to use slices...

Yeah. Or, to keep it simple, how about opening up the ability to pass a user-implemented connection pool when creating a client?
After all, we already define a pool interface but never actually consume it internally.
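A pluggable pool could look roughly like the sketch below. The names (`Pooler`, `Options`, `CustomPool`) echo go-redis's internal/pool but are assumptions here, and the trivial single-connection implementation exists only to show the shape of the extension point.

```go
package main

import "fmt"

// Conn is a stand-in for the pool's connection type.
type Conn struct{ id int }

// Pooler is a hypothetical exported pool interface the client
// would program against instead of a concrete pool type.
type Pooler interface {
	Get() (*Conn, error)
	Put(*Conn)
	Len() int
}

// Options would gain a field for the user-supplied pool.
type Options struct {
	CustomPool Pooler // nil means: build the default pool
}

// singleConnPool is a toy Pooler used to demonstrate plugging in
// a custom implementation.
type singleConnPool struct{ c *Conn }

func (p *singleConnPool) Get() (*Conn, error) { return p.c, nil }
func (p *singleConnPool) Put(*Conn)           {}
func (p *singleConnPool) Len() int            { return 1 }

// newClientPool picks the user's pool when provided, otherwise
// falls back to a default (a toy one in this sketch).
func newClientPool(opt Options) Pooler {
	if opt.CustomPool != nil {
		return opt.CustomPool
	}
	return &singleConnPool{c: &Conn{id: 0}}
}

func main() {
	p := newClientPool(Options{CustomPool: &singleConnPool{c: &Conn{id: 7}}})
	c, _ := p.Get()
	fmt.Println(c.id) // 7: the custom pool was used
}
```

As the next comment notes, the real cost of this design is that everything the interface touches (including the connection type) would have to be exported from internal/pool.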

@vmihailenco
Collaborator

That is a possibility but requires moving and exporting everything (including *Conn) from internal/pool.
