Sharing default LoopResources without colocation and without creating additional event loop groups #2781
Comments
Hi @mp911de, is there an available GitHub repo for the MariaDB connector scenario that is described in mariadb-corporation/mariadb-connector-r2dbc#65? Thanks. |
Hi @mp911de, also, regarding mariadb-corporation/mariadb-connector-r2dbc#65, I'd like to know:
thanks. |
The used Reactor version has been … We noticed the same behavior with Postgres as well. |
Here are some updates: I also wanted to verify the use case from the mariadb-corporation/mariadb-connector-r2dbc#65 scenario with the … I observed the following behaviors:
Results:
So, for the moment, before going ahead with implementing the feature request from this issue, I will first need to fully understand the problem; I'm currently focusing on the reactor-pool. |
@mp911de, before going ahead with implementing your change request, could you please have a look at reactor/reactor-pool#171? Will you be able to test it? It is for reactor-pool 1.0.1, so it will require reactor-netty 1.1.x. Let me know. Thanks |
@mp911de, as promised, there is a new PR which proposes a way to disable colocation for any existing LoopResources (PR 2842). Can you take a look? However, I think it is also worth considering the reactor-pool PR reactor/reactor-pool#171, which might resolve the problem. Let me know, thanks! |
These changes look exciting. I think R2DBC drivers will default to disabled colocation once the patch is out. I've also seen the Pool change, and it makes total sense to allocate connections in parallel. |
Isn't there an API with which you can do this?
Also, if you have the fix in the Reactor Pool, then you don't need to disable the colocation. |
Ideally, we can get hold of the default event loops and have an instance with disabled colocation (something along the lines of LoopResources.getDefault().disableColocation()) to avoid excessive thread pool creation. |
There is a problem with this approach: until now our configuration is immutable, and at any time we know with what kind of configuration a given LoopResources is running. Now we would have one and the same LoopResources that may run with colocation enabled in some cases and disabled in others, and this is not good: you cannot debug such situations.
On Fri, 30 Jun 2023 at 11:39, Mark Paluch wrote:
Ideally, we can get hold of the default event loops and have an instance
with disabled colocation (something along the lines of
LoopResources.getDefault().disableColocation()) to avoid excessive thread
pool creation.
|
We can resort to using native transports (e.g. Unix domain sockets for Postgres), so we would need to add that logic back into our code. Also, we would create additional thread pools instead of reusing existing ones. |
I'm closing this issue because, after some further evaluation and consideration, it turns out that the problem this GH issue aims to resolve already has multiple existing solutions. The initial problem is that during R2DBC warmup, if minSize > 0, one single TCP event loop will actually handle all DB connections, because the default TCP LoopResources uses colocation. So, the existing options which can address the problem, without having to create additional threads via a custom LoopResources with colocation=OFF, are the following:
The following patch could be considered in the r2dbc-pool ConnectionPool:
If the current thread is not a TCP event loop thread (which is the case in a typical HTTP WebFlux application), then the proposed patch ensures that all pre-allocated DB resources are acquired from the current HTTP thread, meaning we no longer hit the colocation issue.
The above will wrap the existing default TCP loop resources, but the … All in all, my opinion is that option 2 safely resolves the problem, and it does not even need the enhancement from the reactor-pool 171 issue. |
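The pinning behavior discussed in this thread can be illustrated with a small self-contained toy model (plain Java; `ColocationSketch`, `pickLoop`, and `simulateWarmup` are invented names for illustration, not reactor-netty APIs). It only mimics the rule "reuse the caller's event loop when colocated": a warmup subscribed on one loop pins every connection to that loop, while the non-colocated path spreads them round-robin.

```java
import java.util.*;
import java.util.concurrent.*;

// Toy model of event loop selection. Not the reactor-netty implementation:
// it only mimics the rule "reuse the caller's loop when colocation is on".
public class ColocationSketch {
    static final int LOOPS = 4;
    static final ExecutorService[] loops = new ExecutorService[LOOPS];
    static final ThreadLocal<Integer> currentLoop = new ThreadLocal<>();
    static int next = 0; // round-robin cursor

    static {
        for (int i = 0; i < LOOPS; i++) {
            final int id = i;
            loops[i] = Executors.newSingleThreadExecutor(r -> {
                // Mark the worker thread as "event loop #id" before it runs tasks.
                Thread t = new Thread(() -> { currentLoop.set(id); r.run(); });
                t.setDaemon(true);
                return t;
            });
        }
    }

    static int pickLoop(boolean colocation) {
        Integer caller = currentLoop.get();
        if (colocation && caller != null) {
            return caller;              // colocated: stay on the caller's loop
        }
        int idx = next;                 // non-colocated: round-robin
        next = (next + 1) % LOOPS;
        return idx;
    }

    // Simulate warming up minSize pooled connections, subscribed on loop 0.
    static List<Integer> simulateWarmup(int minSize, boolean colocation) throws Exception {
        next = 0;
        return loops[0].submit(() -> {
            List<Integer> picked = new ArrayList<>();
            for (int i = 0; i < minSize; i++) {
                picked.add(pickLoop(colocation));
            }
            return picked;
        }).get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("colocation ON : " + simulateWarmup(8, true));  // [0, 0, 0, 0, 0, 0, 0, 0]
        System.out.println("colocation OFF: " + simulateWarmup(8, false)); // [0, 1, 2, 3, 0, 1, 2, 3]
    }
}
```

With colocation on, all eight simulated connections land on loop 0, mirroring the single-TCP-event-loop warmup behavior described above; with it off, they spread across all four loops.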
Thanks a lot for your guidance and the time you spent here. Using warmup parallelism in combination with … |
Motivation
Out of the discussion at r2dbc/r2dbc-pool#190 (comment) and mariadb-corporation/mariadb-connector-r2dbc#65, we found that colocation for long-lived connections can negatively impact performance, because new long-lived database connections are created on the same event loop.
While it is now possible to create LoopResources without colocation, it would be good to be able to reuse the underlying EventLoopGroups of the default instance without colocation, to avoid the creation of additional threads. Right now, in an arrangement with Reactor Netty and two R2DBC drivers (Postgres and MariaDB), an application would have:
- the LoopResources instance used by default
- a LoopResources instance for MariaDB
- a LoopResources instance for Postgres
These would effectively create three EventLoopGroups with three times the number of threads, imposing additional resource usage.
Desired solution
Controlling colocation on the runOn(…) level would be neat. LoopResources bear quite some complexity regarding event loop group selection, in addition to native transport selection, so having a simple way to reuse the default event loop groups without colocation would be great.
Considered alternatives
As outlined before, TcpClient.runOn(…) would work for disabling colocation, at the cost of additional event loops.
A system property switch is not a good alternative, because other components that benefit from colocation would suffer from disabling it.
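A sketch of that first alternative (API details unverified against a specific reactor-netty version; host and port are placeholders): passing an explicit Netty EventLoopGroup to runOn(…), instead of LoopResources, sidesteps the colocation wrapper entirely, but the group's threads come on top of the default ones.

```java
// Sketch only: a dedicated event loop group, so colocation does not apply,
// at the price of an extra set of threads on top of the defaults.
NioEventLoopGroup dedicated = new NioEventLoopGroup(4);

TcpClient client = TcpClient.create()
        .host("localhost")   // placeholder
        .port(5432)          // placeholder
        .runOn(dedicated);   // explicit group: bypasses the colocated default loops
```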
Another alternative would be a wrapper in each R2DBC driver that wraps the default EventLoopGroup and unwraps it to obtain the non-colocated instance, at the cost of complexity in each project.