
Reduce ETSCronFlusher memory consumption #199

Closed · wants to merge 1 commit

Conversation

@IIILSW commented Apr 27, 2023

Change description

We observed high memory consumption in the ETSCronFlusher process heap, along with a large number of attached refc binaries.

In the course of our experiments, abandoning the use of the Exporter and putting the process into hibernation after cleanup reduced memory consumption under load by roughly 70 MB.
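For context, hibernating a GenServer after each flush forces a full garbage collection and compacts the process heap, which also drops references to large refc binaries. The sketch below only illustrates that idea with hypothetical names; it is not the actual PromEx.ETSCronFlusher implementation.

```elixir
# Minimal sketch of hibernating after a periodic flush. Module and function
# names are hypothetical; the real ETSCronFlusher differs.
defmodule MyApp.HibernatingFlusher do
  use GenServer

  @flush_interval :timer.seconds(7)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts) do
    schedule_flush()
    {:ok, opts}
  end

  @impl true
  def handle_info(:flush, state) do
    do_flush(state)
    schedule_flush()

    # Returning :hibernate triggers a full GC and compacts the heap, dropping
    # references to any large binaries touched during the flush.
    {:noreply, state, :hibernate}
  end

  defp schedule_flush, do: Process.send_after(self(), :flush, @flush_interval)

  # Placeholder for the actual ETS flushing work.
  defp do_flush(_state), do: :ok
end
```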

What problem does this solve?

Issue number: #198

Example usage

Additional details and screenshots

For now this uses telemetry_metrics_prometheus_core from my fork, because it also requires changes.

Checklist

  • I have added unit tests to cover my changes.
  • I have added documentation to cover my changes.
  • My changes have passed unit tests and have been tested E2E in an example project.

@akoutmos (Owner) commented May 5, 2023

Good catch and next-level debugging! Do you think you can open a PR to the prometheus_metrics_core lib with your changes? Otherwise, another option would be to run the actual flushing inside a Task. Once that Task terminates, its heap memory is reclaimed, which would let you avoid the prometheus_metrics_core PR.

Thoughts?
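As a rough illustration of the Task-based alternative (hypothetical names again, not the actual PromEx code): the expensive flush runs in a short-lived process, so its heap, including any large binaries built along the way, is reclaimed as soon as the Task exits.

```elixir
# Sketch of the Task-based approach: the flusher GenServer stays small because
# the flush work happens in a throwaway process.
defmodule MyApp.TaskBasedFlusher do
  use GenServer

  @flush_interval :timer.seconds(7)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts) do
    schedule_flush()
    {:ok, opts}
  end

  @impl true
  def handle_info(:flush, state) do
    # All allocation for the flush is confined to this short-lived process;
    # its heap is freed when the Task terminates.
    Task.start(fn -> do_flush(state) end)

    schedule_flush()
    {:noreply, state}
  end

  defp schedule_flush, do: Process.send_after(self(), :flush, @flush_interval)

  # Placeholder for the actual ETS flushing work.
  defp do_flush(_state), do: :ok
end
```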

@akoutmos (Owner) commented May 6, 2023

Can you test out this branch and see if it fixes your problem? #200

@IIILSW (Author) commented May 6, 2023

Thank you for your reply! I tested this approach yesterday and it worked fine. Before answering, I wanted to set up a more production-like test bench and investigate how building large binaries in TelemetryMetricsPrometheus.Core.Exporter affects memory allocation. I'll test it and come back with the results.

Your suggestion seems reasonable, and it is at least a good first step that can be taken soon. I've closed the MR in prometheus_metrics_core. Thank you again!

@akoutmos (Owner) commented May 7, 2023

Sounds good. I'll close this PR for now and merge in #200 in that case. Thanks!

@akoutmos closed this May 7, 2023