The Cryostat container uses the `CRYOSTAT_REPORT_GENERATION_MAX_HEAP` environment variable to limit the heap size (and therefore roughly limit, by some factor, the total memory footprint) of the subprocess spawned for automated analysis report generation. This ensures that the subprocess doesn't consume too many resources and cause the parent process, or the whole pod, to be OOM-killed. Cryostat itself doesn't know what its own memory limits are; although these may be determinable from within the container by querying the filesystem for cgroups etc., or possibly available from JDK library calls, they are not always present or reliable across different deployment platforms.

The Operator can use the LimitRange API:

https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/

to determine the memory limits applied to the Cryostat container that it will create, and set the `CRYOSTAT_REPORT_GENERATION_MAX_HEAP` environment variable accordingly.
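A minimal sketch of how that lookup might work with client-go, assuming the Operator reads the namespace's default container memory limit from a LimitRange. The helper name, the choice to use the namespace default (rather than an explicit limit on the container spec), and the MiB unit handling are all illustrative, not the Operator's actual implementation:

```go
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// containerMemoryLimitMB (hypothetical helper) returns the default container
// memory limit, in MiB, declared by any LimitRange in the given namespace.
func containerMemoryLimitMB(ctx context.Context, client kubernetes.Interface, namespace string) (int64, error) {
	lrs, err := client.CoreV1().LimitRanges(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	for _, lr := range lrs.Items {
		for _, item := range lr.Spec.Limits {
			// Only container-level limits are relevant here, not pod-level ones.
			if item.Type != corev1.LimitTypeContainer {
				continue
			}
			if mem, ok := item.Default[corev1.ResourceMemory]; ok {
				return mem.Value() / (1024 * 1024), nil
			}
		}
	}
	return 0, fmt.Errorf("no default container memory limit in namespace %q", namespace)
}
```

The Operator could then write the derived value into the container's env as `CRYOSTAT_REPORT_GENERATION_MAX_HEAP`, once a sensible split between the main process and the subprocess heap is known.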
We will need to profile long-lived Cryostat deployments to determine how much memory the main Cryostat process itself requires, and how much of the remaining memory can be allocated to the subprocess's heap, such that the total memory footprint of both processes does not exceed the hard memory limit applied to the container.
This is mostly obsoleted by #328, and #335 would allow end users who specifically want a "fat" Cryostat deployment with subprocess generation to tune those characteristics manually. But I think this issue still has some merit on its own: the Operator could/should do some basic heuristic setting of the subprocess max heap variable when there is no reports sidecar configuration. That heuristic might be something as simple as max(100, containerMemoryLimitMB - 500), as sketched below.
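A sketch of that heuristic; the function name is hypothetical, and the 100/500 MiB constants are the ones suggested in this comment, not tuned values:

```go
// reportMaxHeapMB computes max(100, containerMemoryLimitMB - 500): reserve
// ~500MiB of headroom for the main Cryostat process, but always grant the
// report subprocess at least a 100MiB heap.
func reportMaxHeapMB(containerMemoryLimitMB int64) int64 {
	if heap := containerMemoryLimitMB - 500; heap > 100 {
		return heap
	}
	return 100
}
```

For example, a 768MiB container limit would yield a 268MiB subprocess heap, while anything at or below 600MiB would fall back to the 100MiB floor.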