Set default resource requests & limits in helm chart #753

Open
nmehlei opened this issue Sep 25, 2024 · 2 comments

nmehlei commented Sep 25, 2024

Hello,

The Helm chart currently does not set resource requests & limits for the various deployments (e.g. alloy, kepler, opencost, prometheus-node-exporter, etc.). This can result in unbounded resource usage.

Should the chart set reasonable values here?

Alternatively, if for any reason these should not be set by default, it would be useful to have an example of how to set requests & limits for each of the deployments via the chart values/parameters, along the lines of the sketch below.
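
For reference, a hedged sketch of what per-component overrides could look like in the parent chart's values. The exact subchart key paths (and the sizes) are assumptions here, not taken from the chart, and would need to be verified against the chart's own values.yaml for your version:

```yaml
# Illustrative values.yaml overrides -- subchart key paths and sizes are
# assumptions; confirm them against this chart's values.yaml.
alloy:
  alloy:                  # assuming the Alloy subchart nests pod settings under `alloy`
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        memory: 2Gi
prometheus-node-exporter:
  resources:              # assuming this subchart takes `resources` at its top level
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      memory: 128Mi
```

The same shape would presumably apply to kepler and opencost, wherever those subcharts expose a `resources` value.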

skl (Collaborator) commented Sep 25, 2024

At least for Alloy, memory consumption is roughly proportional to the number of metric series (8-12 KiB of RAM per series, plus whatever baseline Alloy itself requires), which in turn depends on the size of your cluster and the number of Alloy replicas in the StatefulSet.
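
As a back-of-the-envelope illustration of that estimate (the series count is hypothetical): a cluster exposing 600,000 active series spread across 2 Alloy replicas works out to 300,000 series × 8-12 KiB ≈ 2.3-3.4 GiB per replica, on top of Alloy's baseline.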

OpenCost is likely to be affected by the size of the cluster, too.

The DaemonSets might be easier to predict, though.

I agree that it's a good idea to set a default and/or have an example 👍

RobFone commented Nov 13, 2024

I would personally prefer to see guidance in the documentation, for example the memory-consumed-per-series information from the previous post.
Resources depend so much on the environment and the node size (for CPU, where the unit is a proportion of a core) that choosing meaningful defaults feels virtually impossible. If a default memory limit is set too low, then in some environments pods could even get stuck restarting, repeatedly terminated for exceeding their memory limit.
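
One pattern such documentation guidance could suggest (a sketch under the same assumed key paths as above, not a proposed chart default) is to set requests only and omit the memory limit, so the scheduler still gets sizing information but an undersized guess costs scheduling accuracy rather than triggering an OOM-kill restart loop:

```yaml
# Sketch: requests without a memory limit. Key paths and sizes are
# placeholder assumptions; size requests from observed usage
# (e.g. the per-series estimate above).
alloy:
  alloy:
    resources:
      requests:
        cpu: 100m
        memory: 1Gi
      # no memory limit: a low estimate degrades scheduling accuracy
      # instead of causing repeated OOM kills
```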
