When downloading very large artifacts, the argo-server pod can go OOM since we load the entire file into memory before serving it to the user. Rather than loading the file into memory, we can just use io.Copy to serve the file from disk to the response directly.
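For illustration, here is a minimal sketch of the streaming approach, assuming a plain HTTP handler; the function and path names are hypothetical and not the actual argo-server code:

```go
package main

import (
	"io"
	"net/http"
	"os"
)

// serveArtifact streams the file at path to the response with io.Copy,
// which uses a small internal buffer, so memory use stays roughly constant
// regardless of the artifact size (instead of reading the whole file first).
func serveArtifact(w http.ResponseWriter, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	w.Header().Set("Content-Type", "application/octet-stream")
	_, err = io.Copy(w, f)
	return err
}

func main() {
	http.HandleFunc("/artifact", func(w http.ResponseWriter, r *http.Request) {
		// Illustrative only: a real handler would resolve and validate the
		// artifact path from the request.
		if err := serveArtifact(w, "/tmp/artifact.tgz"); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
	})
	http.ListenAndServe(":8080", nil)
}
```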
Diagnostics
What Kubernetes provider are you using?
k3d
What version of Argo Workflows are you running?
latest/edge
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
…facts
When serving very large artifacts, first loading them into memory can potentially
cause the pod to go OOM/crash depending on how much memory is available and what
limits have been set. Rather than loading it into memory, we can serve files
directly from disk.
Fixes argoproj#4588
Signed-off-by: Daniel Herman <dherman@factset.com>