
Sharing default LoopResources without colocation and without creating additional event loop groups #2781

Closed
mp911de opened this issue Apr 25, 2023 · 14 comments


mp911de commented Apr 25, 2023

Motivation

Out of the discussion at r2dbc/r2dbc-pool#190 (comment) and mariadb-corporation/mariadb-connector-r2dbc#65, we found that colocation for long-lived connections can negatively impact performance because new long-lived database connections are created on the same event loop.

While it is now possible to create LoopResources without colocation, it would be good to be able to reuse the underlying EventLoopGroups of the default instance without colocation to avoid creation of additional threads. Right now, in an arrangement with Reactor Netty and two R2DBC drivers (Postgres and MariaDB), an application would have:

  • The default LoopResources instance used by default
  • A customized LoopResources instance for MariaDB
  • A customized LoopResources instance for Postgres

These would effectively create three EventLoopGroups with three times the number of Threads, imposing additional resource usage.
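For illustration, a rough sketch of this arrangement (the prefix names are just placeholders):

// Each driver configured with its own LoopResources gets its own EventLoopGroup
// and threads, on top of Reactor Netty's default TcpResources:
LoopResources mariadbLoops  = LoopResources.create("mariadb-loops");   // extra threads
LoopResources postgresLoops = LoopResources.create("postgres-loops");  // extra threads
// + the default LoopResources (TcpResources) used for the HTTP server/client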

Desired solution

Controlling colocation at the runOn(…) level would be neat. LoopResources bears quite some complexity regarding event loop group selection in addition to native transport selection, so having a simple way to reuse the default event loop groups without colocation would be great.

Considered alternatives

As outlined before, TcpClient.runOn(…) would work for disabling colocation at the cost of additional event loops.
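For illustration, a minimal sketch of this alternative (host, port, and thread count are placeholders):

// Passing a dedicated EventLoopGroup to runOn(...) bypasses colocation,
// but every such group brings its own additional threads.
EventLoopGroup dedicated = new NioEventLoopGroup(4); // extra event loop threads
TcpClient client = TcpClient.create()
        .runOn(dedicated)      // the group is used as-is, no colocation
        .host("localhost")     // placeholder
        .port(5432);           // placeholder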

A system property switch is not a good alternative because other components that benefit from colocation would suffer from disabling it.

Another alternative would be a wrapper in each R2DBC driver that wraps the default EventLoopGroup and unwraps it to obtain the non-colocated instance, at the cost of additional complexity in each project.

@mp911de mp911de added status/need-triage A new issue that still need to be evaluated as a whole type/enhancement A general enhancement labels Apr 25, 2023
@violetagg violetagg removed the status/need-triage A new issue that still need to be evaluated as a whole label May 1, 2023
@violetagg violetagg self-assigned this May 1, 2023
@pderop pderop assigned pderop and unassigned violetagg Jun 9, 2023

pderop commented Jun 12, 2023

Hi @mp911de ,

Is there an available GitHub repo for the MariaDB connector scenario that is described in mariadb-corporation/mariadb-connector-r2dbc#65?

thanks.


pderop commented Jun 13, 2023

Hi @mp911de ,

Also, regarding mariadb-corporation/mariadb-connector-r2dbc#65, I'd like to know:

  • which version of reactor-netty is used
  • in the scenario, are the requests to TestController made using HTTP/1.1 or HTTP/2?

thanks.


mp911de commented Jun 13, 2023

The Reactor version used was 2020.0.27. The TestController was invoked via HTTP/1.1. The test case can be simplified to a standalone JUnit test; see https://github.com/PiotrDuz/r2dbc-pool-size/blob/master/src/test/java/com/example/R2dbctestTest.java and r2dbc/r2dbc-pool#190 (comment)

We noticed the same behavior with Postgres as well.


pderop commented Jun 14, 2023

Here are some updates:

I also wanted to verify the use case from the mariadb-corporation/mariadb-connector-r2dbc#65 scenario with the MariaDB connector, so I built a similar reproducer project using org.springframework.boot:spring-boot-starter-parent:3.1.0, and I can confirm that I reproduce the performance issue.

I observed the following behaviors:

  • when using the following configuration, the queries from the sample behave well and are executed concurrently:
spring.r2dbc.pool.initial-size=1
spring.r2dbc.pool.max-size=5

Results:

2023-06-14T11:50:45.686+02:00  INFO 15259 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : all loaded4
2023-06-14T11:50:45.686+02:00  INFO 15259 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:45.820+02:00  INFO 15259 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : all loaded3
2023-06-14T11:50:45.820+02:00  INFO 15259 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:45.848+02:00  INFO 15259 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : all loaded5
2023-06-14T11:50:45.848+02:00  INFO 15259 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:45.870+02:00  INFO 15259 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : all loaded2
2023-06-14T11:50:45.870+02:00  INFO 15259 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:45.914+02:00  INFO 15259 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded1
2023-06-14T11:50:45.914+02:00  INFO 15259 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:47.688+02:00  INFO 15259 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : all loaded6
2023-06-14T11:50:47.688+02:00  INFO 15259 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:47.710+02:00  INFO 15259 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : all loaded7
2023-06-14T11:50:47.710+02:00  INFO 15259 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:47.754+02:00  INFO 15259 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : all loaded9
2023-06-14T11:50:47.754+02:00  INFO 15259 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:47.829+02:00  INFO 15259 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : all loaded8
2023-06-14T11:50:47.830+02:00  INFO 15259 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:47.838+02:00  INFO 15259 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded10
2023-06-14T11:50:47.838+02:00  INFO 15259 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:50:47.839+02:00  INFO 15259 --- [nio-8080-exec-1] com.mariadb.todo.services.TaskService    : Elapsed time:4700
  • and indeed, the queries are serialized when using:
spring.r2dbc.pool.initial-size=5
spring.r2dbc.pool.max-size=5

Results:

2023-06-14T11:49:37.022+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded1
2023-06-14T11:49:37.022+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:37.606+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded2
2023-06-14T11:49:37.606+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:39.089+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded3
2023-06-14T11:49:39.089+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:39.785+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded4
2023-06-14T11:49:39.785+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:41.189+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded5
2023-06-14T11:49:41.190+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:41.972+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded6
2023-06-14T11:49:41.972+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:43.306+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded7
2023-06-14T11:49:43.306+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:44.171+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded8
2023-06-14T11:49:44.171+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:45.415+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded9
2023-06-14T11:49:45.416+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:45.896+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded10
2023-06-14T11:49:45.896+02:00  INFO 14851 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:49:45.896+02:00  INFO 14851 --- [nio-8080-exec-1] com.mariadb.todo.services.TaskService    : Elapsed time:11256
  • using initial-size=5 and max-size=5 with the following patch, which disables colocation, improves concurrency, but the parallelism is not as good as when using initial-size=1 and max-size=5 without the patch:
import io.r2dbc.spi.ConnectionFactoryOptions;
import org.mariadb.r2dbc.MariadbConnectionFactoryProvider;
import org.springframework.boot.autoconfigure.r2dbc.ConnectionFactoryOptionsBuilderCustomizer;
import org.springframework.stereotype.Component;
import reactor.netty.resources.LoopResources;

@Component
public class R2DBCOptionsCustomizer implements ConnectionFactoryOptionsBuilderCustomizer {
    @Override
    public void customize(ConnectionFactoryOptions.Builder builder) {
        // Dedicated event loops for the MariaDB driver; the last argument disables colocation
        LoopResources loopResources = LoopResources.create("custom", -1, 5, false, false);
        builder.option(MariadbConnectionFactoryProvider.LOOP_RESOURCES, loopResources);
    }
}

Results:

2023-06-14T11:47:43.876+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : all loaded2
2023-06-14T11:47:43.876+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : all loaded1
2023-06-14T11:47:43.876+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:43.876+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:45.224+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : all loaded4
2023-06-14T11:47:45.225+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:45.225+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : all loaded3
2023-06-14T11:47:45.225+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:46.458+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : all loaded5
2023-06-14T11:47:46.458+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:46.463+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : all loaded6
2023-06-14T11:47:46.463+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:47.723+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : all loaded7
2023-06-14T11:47:47.724+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:47.734+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : all loaded8
2023-06-14T11:47:47.734+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:49.114+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : all loaded9
2023-06-14T11:47:49.114+02:00  INFO 14244 --- [   custom-nio-1] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:49.124+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : all loaded10
2023-06-14T11:47:49.124+02:00  INFO 14244 --- [   custom-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:47:49.124+02:00  INFO 14244 --- [nio-8080-exec-2] com.mariadb.todo.services.TaskService    : Elapsed time:7066
  • interestingly, when using initial-size=5 and max-size=5 without the patch, but with a hacked reactor-pool where warmup is disabled, we no longer observe the issue:
2023-06-14T11:40:05.216+02:00  INFO 10729 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : all loaded4
2023-06-14T11:40:05.216+02:00  INFO 10729 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:05.444+02:00  INFO 10729 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : all loaded3
2023-06-14T11:40:05.444+02:00  INFO 10729 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:05.451+02:00  INFO 10729 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : all loaded5
2023-06-14T11:40:05.451+02:00  INFO 10729 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:05.485+02:00  INFO 10729 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : all loaded2
2023-06-14T11:40:05.485+02:00  INFO 10729 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:05.541+02:00  INFO 10729 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded1
2023-06-14T11:40:05.541+02:00  INFO 10729 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:07.156+02:00  INFO 10729 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : all loaded6
2023-06-14T11:40:07.156+02:00  INFO 10729 --- [actor-tcp-nio-5] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:07.274+02:00  INFO 10729 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : all loaded7
2023-06-14T11:40:07.274+02:00  INFO 10729 --- [actor-tcp-nio-4] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:07.326+02:00  INFO 10729 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : all loaded9
2023-06-14T11:40:07.326+02:00  INFO 10729 --- [actor-tcp-nio-3] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:07.336+02:00  INFO 10729 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : all loaded8
2023-06-14T11:40:07.336+02:00  INFO 10729 --- [actor-tcp-nio-6] com.mariadb.todo.services.TaskService    : num:1092818
2023-06-14T11:40:07.393+02:00  INFO 10729 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : all loaded10
2023-06-14T11:40:07.393+02:00  INFO 10729 --- [actor-tcp-nio-2] com.mariadb.todo.services.TaskService    : num:1092818

elapsed time=4471

So, for the moment, before going ahead with implementing the feature request from this issue, I first need to fully understand the problem; I'm currently focusing on the reactor-pool.


pderop commented Jun 22, 2023

@mp911de ,

Before going ahead with implementing your change request, could you please have a look at reactor/reactor-pool#171?
It might resolve the issue without having to create any custom LOOP_RESOURCES with colocation disabled.

Will you be able to test it? It is for reactor-pool 1.0.1, so it will require reactor-netty 1.1.x.
I validated it using MariaDB: the r2dbc/r2dbc-pool#190 (comment) use case seems to be resolved, and I also ran some Gatling benchmarks to verify that all TcpResource event loop threads are used (I summarized everything in the reactor-pool 171 issue).

Let me know?

thanks


pderop commented Jun 23, 2023

@mp911de ,

So, as promised, there is a new PR that proposes a way to disable colocation for any existing LoopResources (PR 2842). Can you take a look?

(However, I think it's also worth considering the reactor-pool PR reactor/reactor-pool#171, which might resolve the problem.)

Let me know, thanks!


mp911de commented Jun 26, 2023

These changes look exciting. I think R2DBC drivers will default to disabled colocation once the patch is out. I've also seen the Pool change, and it makes total sense to allocate connections in parallel.


violetagg commented Jun 30, 2023

@mp911de

> A customized LoopResources instance for MariaDB
> A customized LoopResources instance for Postgres
> These would effectively create three EventLoopGroups with three times the number of Threads, imposing additional resource usage.

Isn't there an API with which you can do this?

LoopResources loop = ...
// Configure MariaDB with loop created above
// Configure Postgres with loop created above

Also, if you have the fix in the Reactor Pool, then you don't need to disable colocation.
I think the user has to decide which LoopResources to use, not the driver itself.


mp911de commented Jun 30, 2023

Ideally, we can get hold of the default event loops and have an instance with disabled colocation (something along the lines of LoopResources.getDefault().disableColocation()) to avoid excessive thread pool creation.


violetagg commented Jun 30, 2023 via email


violetagg commented Jun 30, 2023

About LoopResources.getDefault().disableColocation(), what does this API do if you have the code below?

TcpResources.set(useNative -> new NioEventLoopGroup());
TcpClient.create().runOn(useNative -> new NioEventLoopGroup());


mp911de commented Jun 30, 2023

We can resort to using native transports (e.g. sockets for Postgres), so we would need to add that logic back into our code. Also, we would create additional thread pools instead of reusing existing ones.
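For illustration, a rough sketch of the kind of selection logic we would have to re-implement ourselves (Epoll is just one example of a native transport; the group size is a placeholder):

// Choose a native event loop group when available, otherwise fall back to NIO;
// LoopResources normally handles this selection for us.
EventLoopGroup group = Epoll.isAvailable()
        ? new EpollEventLoopGroup(4)
        : new NioEventLoopGroup(4);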

@pderop pderop added status/declined We feel we shouldn't currently apply this change/suggestion and removed type/enhancement A general enhancement labels Jul 10, 2023

pderop commented Jul 10, 2023

I'm closing this issue: after some further evaluation and consideration, it turns out that the problem this GH issue aims to resolve already has multiple existing solutions.

The initial problem is that during R2DBC warmup, if minSize > 0, one single TCP event loop ends up handling all DB connections. This is because the default TCP LoopResources uses colocation.
The root cause actually lies in the reactor-pool, which, during warmup, subscribes to the first DB connection and waits for it to complete before subscribing to the next one. This leads to the following situation: the first DB connection is created on a random TcpResource event loop thread (if the current thread is not a TCP event loop), and once that first connection completes (on the TCP event loop), the pool acquires the next DB connection from that TcpResource thread; colocation then reuses the same TcpResource event loop thread for all remaining DB connections acquired during the warmup process.
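To make the colocation behavior concrete, here is a rough sketch (host and port are placeholders; this is not the reproducer code):

// First connect is subscribed from a non-event-loop thread (e.g. main), so the
// default colocated LoopResources picks one of the TcpResource event loops.
Connection first = TcpClient.create().host("db.example").port(3306).connectNow();

// If the next connect is subscribed from that connection's event loop thread,
// colocation reuses the very same event loop for the new connection as well.
first.channel().eventLoop().execute(() ->
        TcpClient.create().host("db.example").port(3306).connect()
                .subscribe(second -> { /* same event loop as `first` */ }));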

So, the existing options that can address the problem, without having to create additional threads via a custom LoopResources with colocation disabled, are the following:

  • option 1: in reactor pool GH issue 171 (Optimize pool warmup reactor-pool#171), we have added a new sizeBetween(int min, int max, int warmupParallelism) method, which makes it possible to ensure that, during warmup, all resources are eagerly acquired from the current HTTP thread (hence, in this case, colocation won't take place). This means an enhancement could be considered in the r2dbc-pool library to invoke the sizeBetween method with warmupParallelism=min (a standalone sketch of this API follows the list below).

The following patch could be considered in the r2dbc-pool ConnectionPool:

        if (maxSize == -1 || initialSize > 0) {
            int min = Math.max(configuration.getMinIdle(), initialSize);
            // the third argument is warmupParallelism
            builder.sizeBetween(min, maxSize == -1 ? Integer.MAX_VALUE : maxSize, Math.max(min, 1));
        } else {
            int min = Math.max(configuration.getMinIdle(), initialSize);
            builder.sizeBetween(min, maxSize, Math.max(min, 1));
        }

If the current thread is not a TCP event loop (which is the case in a typical HTTP WebFlux application), then the proposed patch ensures that all pre-allocated DB resources are acquired from the current HTTP thread, meaning we won't have the colocation issue anymore.
PR 171 is merged, and the feature will be available in the next reactor-pool version, 1.0.1 (it will be part of 2022.0.9).

  • Option 2: another approach (which I think is safer) is to patch r2dbc-pool so as to take control of the thread used when subscribing to the R2DBC connection allocator. The following proposed patch in ConnectionPool ensures that the "single" scheduler is always used when subscribing to the allocator, which resolves the problem (because the patch ensures that the subscribing thread is never a colocated TCP event loop):
        PoolBuilder<Connection, PoolConfig<Connection>> builder = PoolBuilder.from(allocator.subscribeOn(Schedulers.single()))
            .clock(configuration.getClock())
            .metricsRecorder(metricsRecorder)
            .evictionPredicate(evictionPredicate)
            .destroyHandler(Connection::close)
            .idleResourceReuseMruOrder(); // MRU to support eviction of idle
  • Option 3: using the current reactor-netty API, it is actually already possible to do something (though we do not recommend it): you can wrap an existing LoopResources into a LoopResources wrapper that disables colocation, like this:
@Component
public class R2DBCOptionsCustomizer implements ConnectionFactoryOptionsBuilderCustomizer {
    @Override
    public void customize(ConnectionFactoryOptions.Builder builder) {
        // Delegate client loop selection to the (non-colocated) onServer loops of the default TcpResources
        LoopResources wrapped = (useNative) -> TcpResources.get().onServer(useNative);
        builder.option(MariadbConnectionFactoryProvider.LOOP_RESOURCES, wrapped);
    }
}

The above wraps the existing default TCP loop resources, but the wrapped LoopResources delegates its "onClient" call to the "onServer" callback of the actual default TCP LoopResources, meaning colocation is disabled. However, we do not recommend this, mainly because it leads to an inconsistent situation where the default TCP event loops are sometimes colocated and sometimes not, which would make DEBUG logs difficult to diagnose (see previous comments from #2781 (comment)). Moreover, disposing the wrapped LoopResources would have no effect (because by default LoopResources dispose methods are no-ops), which would be inconsistent too.
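Coming back to option 1, here is a standalone sketch of the new reactor-pool API (it assumes reactor-pool >= 1.0.1 and a hypothetical allocator Mono<io.r2dbc.spi.Connection> that opens a database connection):

InstrumentedPool<Connection> pool = PoolBuilder
        .from(allocator)
        .sizeBetween(5, 10, 5)   // min = 5, max = 10, warmupParallelism = 5
        .buildPool();

// warmup() can now subscribe to several allocations concurrently (up to warmupParallelism),
// so the warmup is no longer serialized onto a single colocated TCP event loop.
pool.warmup().block();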

All in all, my opinion is that option 2 safely resolves the problem, and it does not even need the enhancement from the reactor-pool 171 issue.

@pderop pderop closed this as completed Jul 10, 2023
@pderop pderop added for/reactor-pool This belongs to the Reactor Pool project and removed status/declined We feel we shouldn't currently apply this change/suggestion labels Jul 10, 2023

mp911de commented Jul 11, 2023

Thanks a lot for your guidance and the time you spent here. Using warmup parallelism in combination with allocator.subscribeOn(…) seems the best approach for the time being.
