
argo-server goes OOM when serving large artifacts #4588

Closed
dcherman opened this issue Nov 22, 2020 · 0 comments · Fixed by #4589
dcherman commented Nov 22, 2020

Summary

When downloading very large artifacts, the argo-server pod can go OOM because we load the entire file into memory before serving it to the user. Rather than buffering the file in memory, we can use io.Copy to stream it from disk to the response directly.

Diagnostics

What Kubernetes provider are you using?

k3d

What version of Argo Workflows are you running?

latest/edge


Message from the maintainers:

Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

dcherman added a commit to dcherman/argo that referenced this issue Nov 24, 2020
…facts

When serving very large artifacts, first loading them into memory can potentially
cause the pod to go OOM/crash depending on how much memory is available and what
limits have been set.  Rather than loading it into memory, we can serve files
directly from disk.

Fixes argoproj#4588

Signed-off-by: Daniel Herman <dherman@factset.com>