Describe the bug
High memory usage (>50Gi) when scraping Prometheus metrics in an EKS on EC2 cluster using the CloudWatch agent. Our cluster has the resources listed below; the agent memory limit is set to 50Gi, and the agent gets OOMKilled every 5 minutes.
| Resources | Count |
| --- | --- |
| pods | 429 |
| namespaces (99% empty) | 57776 |
| endpoints | 255 |
| services | 254 |
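The Deployment spec itself is not included in this report; purely as a sketch of the setup described above (only the 50Gi memory limit and the agent version come from the report; the names, namespace, and request values are assumed):

```yaml
# Illustrative only: the actual Deployment spec is not part of this report.
# The 50Gi memory limit is the value described above; all other fields are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cwagent-prometheus          # hypothetical name
  namespace: amazon-cloudwatch      # assumed namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cwagent-prometheus
  template:
    metadata:
      labels:
        app: cwagent-prometheus
    spec:
      containers:
        - name: cloudwatch-agent
          image: amazon/cloudwatch-agent:1.300046.0b833
          resources:
            limits:
              memory: 50Gi          # limit from the description; the pod is OOMKilled at this level
            requests:
              memory: 10Gi          # assumed request, not from the report
```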
Steps to reproduce
Deploy the CloudWatch agent as a Kubernetes Deployment resource with the configuration below in our cluster.
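The exact scrape configuration is not reproduced here; a minimal sketch of a typical Prometheus configuration the agent is pointed at via prometheus_config_path, assuming a standard Container Insights style setup (the job name and relabel rule are illustrative, not the actual values):

```yaml
# Illustrative scrape configuration only: not the actual configuration from this report.
# With role: endpoints and no namespace filter, discovery watches endpoints in every
# namespace, which is the scale dimension shown in the table above (~57k namespaces).
global:
  scrape_interval: 1m
  scrape_timeout: 10s
scrape_configs:
  - job_name: kubernetes-service-endpoints     # hypothetical job name
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only services annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
```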
What did you expect to see?
The expectation is that the agent uses low (<10Gi) memory.
What did you see instead?
Very high memory usage (~60Gi).
What version did you use?
cloudwatch-agent:1.300046.0b833
Environment
OS: Amazon Linux 2 - 5.10.224-212.876.amzn2.x86_64