A note for the community

Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Problem
Some of the Prometheus metrics generated by the kubernetes_logs source include the file name in their labels, so a new label set is created for each new pod. These metrics do not appear to be cleaned up, which leads to CPU usage that grows over time.
Below is the vector component utilization (vector_utilization metric):
Examples of leaking metrics are vector_files_resumed_total and vector_checksum_errors_total, but there are probably more.
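As a rough way to observe the growth, here is a minimal sketch that counts the number of label sets (time series) per metric name on Vector's metrics endpoint. It assumes the internal metrics are exposed through a prometheus_exporter sink on the default localhost:9598 address; the URL is an assumption and will differ per setup.

```python
import re
import urllib.request
from collections import Counter

# Assumed endpoint: Vector's internal metrics scraped via a prometheus_exporter
# sink on its default address. Adjust for your deployment.
METRICS_URL = "http://localhost:9598/metrics"


def count_series_per_metric(url: str) -> Counter:
    """Count how many label sets (series) each metric name currently exposes."""
    body = urllib.request.urlopen(url).read().decode("utf-8")
    counts = Counter()
    for line in body.splitlines():
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE/comment lines
        # The metric name is everything before the first '{' or whitespace.
        name = re.split(r"[{\s]", line, maxsplit=1)[0]
        counts[name] += 1
    return counts


if __name__ == "__main__":
    counts = count_series_per_metric(METRICS_URL)
    # Metrics whose series count only ever grows (roughly one per rotated pod
    # log file) are the likely leaks described above.
    for name, n in counts.most_common(10):
        print(f"{n:6d}  {name}")
```

Running this periodically while pods are created and deleted should show the counts for the file-labelled metrics such as vector_files_resumed_total and vector_checksum_errors_total increasing without ever shrinking.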
Configuration
No response
Version
vector 0.21.2 (x86_64-unknown-linux-gnu 1f01009 2022-05-05)
Debug Output
No response
Example Data
Additional Context
No response
References
No response