
OOM issue #549


Closed
joepio opened this issue Dec 8, 2022 · 0 comments

joepio commented Dec 8, 2022

I still have an OOM issue, even after fixing #529. I'm running in production on low-end hardware (1 vCPU, 512 MB RAM), but it should have some RAM to spare. I'd like to know what's causing this.

I tried running with Miri, but it doesn't support Tokio at the moment. I could also try Valgrind.

Since sled's default cache size is 1 GB, maybe that's the problem? My server doesn't even have that much memory. I'll try a somewhat more capable machine.
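For reference, sled's cache limit can be lowered when opening the database via `sled::Config::cache_capacity`. A minimal sketch, assuming the server opens its store through `sled::Config` (the path and the 256 MB figure here are illustrative, not Atomic-Server's actual settings):

```rust
use sled::Config;

fn main() -> sled::Result<()> {
    // sled's default cache_capacity is 1 GB, which exceeds the total RAM
    // on a 512 MB host. Cap it well below available memory instead.
    let db = Config::default()
        .path("example_db") // hypothetical path, for illustration only
        .cache_capacity(256 * 1024 * 1024) // 256 MB, in bytes
        .open()?;

    db.insert(b"key", b"value")?;
    Ok(())
}
```

Note that `cache_capacity` is a hint for sled's pagecache rather than a hard process-wide memory limit, so actual resident memory can still exceed it.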

Update: after a week on a machine with 4 GB of RAM, I haven't had any OOMs. Memory usage hovers around 448 MB total, of which 370 MB is used by Atomic-Server. I don't think there's a memory leak.

@joepio joepio closed this as completed Dec 12, 2022