
BlockingSingleSubscriber causes memory leak #3371

Open
manoj-mathivanan opened this issue Mar 1, 2023 · 7 comments
Labels
for/user-attention This issue needs user attention (feedback, rework, etc...) type/bug A general bug warn/behavior-change Breaking change of publicly advertised behavior

Comments

@manoj-mathivanan

manoj-mathivanan commented Mar 1, 2023

BlockingSingleSubscriber.java accumulates exceptions in the suppressed exception list of the same error object. This list grows over time, leading to a memory leak.

Expected Behavior

A new error object should be created each time; suppressed exceptions should not accumulate on the old error object.

Actual Behavior

The error object is cached internally in the Kafka sender and used as a parent error. Errors from subsequent send operations are accumulated as suppressed exceptions inside that parent error.
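
To illustrate the mechanism (a minimal standalone sketch, not Reactor code, with hypothetical names): when the same Throwable instance is reused as the parent error and every failed operation calls addSuppressed on it, the suppressed list grows without bound.

public class SuppressedGrowthDemo {
    public static void main(String[] args) {
        // the "cached" parent error, analogous to the error the Kafka sender keeps re-emitting
        RuntimeException cached = new RuntimeException("cached parent error");
        for (int i = 0; i < 1_000; i++) {
            // each failed "send" attempt decorates the same cached instance
            cached.addSuppressed(new Exception("#block terminated with an error"));
        }
        // prints 1000: every attempt is retained for the lifetime of the cached error
        System.out.println(cached.getSuppressed().length);
    }
}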

Steps to Reproduce

Create a normal Kafka sender. Introduce a configuration error like the one below:

delivery.timeout.ms=1200
max.block.ms=1000
request.timeout.ms=2000
linger.ms=10

This will throw an error while creating the Kafka sender: org.apache.kafka.common.config.ConfigException: delivery.timeout.ms should be equal to or larger than linger.ms + request.timeout.ms.
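
For reference, a hedged sketch of creating such a misconfigured sender (bootstrap servers and serializers are assumptions; the four properties are the ones above):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;

Map<String, Object> producerConfiguration = new HashMap<>();
producerConfiguration.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
producerConfiguration.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerConfiguration.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerConfiguration.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 1200);
producerConfiguration.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 1000);
producerConfiguration.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 2000);
producerConfiguration.put(ProducerConfig.LINGER_MS_CONFIG, 10);

// the ConfigException above is raised when the underlying producer is created
SenderOptions<String, String> senderOptions = SenderOptions.<String, String>create(producerConfiguration);
KafkaSender<String, String> kafkaSender = KafkaSender.create(senderOptions);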
Next, send some messages using the Kafka sender:

kafkaSender.send(Mono.just(SenderRecord.create(
                new ProducerRecord<>("topic", "test"), 1)))
        .doOnError((Throwable throwable) -> {
            // error handling intentionally left empty in this reproduction
        })
        .blockFirst();

Possible Solution

Your Environment

reactor-core:3.4.16
At 200 messages/sec, 200 exceptions are added to the suppressed exception list every second. The list keeps growing, resulting in a memory leak.

  • Reactor version(s) used: 3.4.16
  • Other relevant libraries versions (eg. netty, ...): reactor-kafka:1.3.11
  • JVM version (java -version): 1.8
  • OS and version (eg uname -a): ubuntu20
@reactorbot reactorbot added the ❓need-triage This issue needs triage, hasn't been looked at by a team member yet label Mar 1, 2023
@OlegDokuka OlegDokuka added type/bug A general bug and removed ❓need-triage This issue needs triage, hasn't been looked at by a team member yet labels Mar 2, 2023
@OlegDokuka OlegDokuka added this to the 3.4.28 milestone Mar 2, 2023
@OlegDokuka OlegDokuka removed the type/bug A general bug label Mar 2, 2023
@OlegDokuka
Contributor

OlegDokuka commented Mar 2, 2023

@manoj-mathivanan do you have a pointer to the exception which is accumulating errors?

If it is outside of reactor-core, then we can do nothing about it.

@OlegDokuka OlegDokuka removed this from the 3.4.28 milestone Mar 2, 2023
@manoj-mathivanan
Author

@OlegDokuka This is the place where the accumulation happens: https://github.com/reactor/reactor-core/blob/main/reactor-core/src/main/java/reactor/core/publisher/BlockingSingleSubscriber.java#L99, which is in reactor-core.
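
For context, the code at the linked line follows roughly this pattern (paraphrased, not an exact copy of the source). Exceptions.propagate returns the throwable itself when it is already a RuntimeException, so addSuppressed mutates the very instance that keeps being re-emitted:

// approximate paraphrase of BlockingSingleSubscriber#blockingGet
Throwable e = this.error;
if (e != null) {
    // if 'e' is already a RuntimeException, propagate() returns the same instance
    RuntimeException re = Exceptions.propagate(e);
    re.addSuppressed(new Exception("#block terminated with an error"));
    throw re;
}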

@OlegDokuka
Contributor

Right, but that should be okay unless the propagated exception is a static one. That said, it could be a problem in a different library.

@manoj-mathivanan
Author

@OlegDokuka, you are right. The propagated exception is always the same, and new errors keep getting added to its suppressed exception list.
Let me try to explain with a scenario.

I create a Kafka sender in my bean with a wrong configuration:

SenderOptions<String, String> senderOptions = SenderOptions.<String, String>create(producerConfiguration);
KafkaSender<String, String> kafkaSender = KafkaSender.create(senderOptions);

The above kafkaSender is created in the bean and, by default, is lazily initialized when the first message is sent.

Now when I try to send a message like below:

try {
    kafkaSender.send(Mono.just(SenderRecord.create(new ProducerRecord<>("topic", "test"), 1)))
            .blockFirst();
} catch (Exception ex) {
    logger.error(ex.getMessage(), ex);
}

The same ex object is thrown every time I use the above code to send a message.
Since the application is up and running, it needs to send around 200 messages/sec.
Every time a message is sent using the above code, another entry is added to the suppressed exception list inside the same ex object.

In 5 minutes, 6K messages are attempted; all 6K end up in error, there are 6K suppressed exceptions in the list, and every suppressed exception carries its own complete stack trace. This memory keeps growing.
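
One way to observe the growth (a hypothetical diagnostic added to the catch block above):

try {
    kafkaSender.send(Mono.just(SenderRecord.create(new ProducerRecord<>("topic", "test"), 1)))
            .blockFirst();
} catch (Exception ex) {
    // this count grows by one on every failed send, because 'ex' is the same cached instance each time
    logger.error("suppressed count so far: " + ex.getSuppressed().length, ex);
}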

In the above example, I have shown logging of the errors.
Apart from the memory, the logs also grow exponentially, because every error carries the details of all previous errors.
If there is an additional log appender, the issue is amplified even more.

@OlegDokuka OlegDokuka added type/bug A general bug for/user-attention This issue needs user attention (feedback, rework, etc...) labels Mar 6, 2023
@OlegDokuka
Contributor

related #1872

@OlegDokuka
Contributor

@manoj-mathivanan if we fix that at the Reactor level now, it is going to be a breaking change. I'm not sure we can do it safely now. However, we can probably make it into 3.6.x/4.x.

In the meantime, I suggest you use the .onErrorMap() operator to explicitly map your error into a different representation to avoid the classloader leak.
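
For example (a sketch; the wrapper type and message are illustrative):

kafkaSender.send(Mono.just(SenderRecord.create(new ProducerRecord<>("topic", "test"), 1)))
        .onErrorMap(original -> new IllegalStateException("send failed: " + original.getMessage(), original))
        // each subscription now fails with a fresh exception instance,
        // so blockFirst() no longer accumulates suppressed exceptions on a shared one
        .blockFirst();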

@OlegDokuka OlegDokuka added the warn/behavior-change Breaking change of publicly advertised behavior label Mar 7, 2023
@OlegDokuka OlegDokuka added this to the 4.0.0 planning milestone Mar 7, 2023
@manoj-mathivanan
Author

@OlegDokuka Thanks for picking it up for a future release.
Thanks a lot for the suggestion too.
