add uses a lot of ram and cpu #1222

Closed
whyrusleeping opened this issue May 11, 2015 · 3 comments
Labels
kind/bug A bug in existing code (including security flaws)
topic/perf Performance

Comments

@whyrusleeping
Member

An add operation on one of my servers (single core, 1GB RAM) is taking up all the CPU, and as much RAM as it can eat. Watching memory usage from expvarmon, I see it allocating large amounts of memory, then running a GC, over and over again.
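
(The same churn is visible from inside any Go process without expvarmon; a minimal, self-contained sketch, not code from go-ipfs:)

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// Print heap stats once per second. On a process with the behaviour
// described above, HeapAlloc balloons and then drops after each GC
// while NumGC climbs steadily.
func main() {
	var m runtime.MemStats
	for {
		runtime.ReadMemStats(&m)
		fmt.Printf("heap=%dMiB total=%dMiB gc=%d\n",
			m.HeapAlloc>>20, m.TotalAlloc>>20, m.NumGC)
		time.Sleep(time.Second)
	}
}
```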

I suspect some of this is eventlog; I saw similar behaviour in bitswap due to eventlog. I will take a look at some profiles and try to get more info.
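
(For anyone following along: the usual way to grab such profiles from a running Go program is net/http/pprof; a minimal sketch, with the port chosen arbitrarily:)

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* handlers
)

func main() {
	// With this listener up, heap and CPU profiles can be pulled with e.g.
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```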

@whyrusleeping whyrusleeping added the kind/bug A bug in existing code (including security flaws) label May 11, 2015
@jbenet jbenet mentioned this issue May 19, 2015
@whyrusleeping whyrusleeping mentioned this issue May 26, 2015
@whyrusleeping whyrusleeping mentioned this issue Jun 2, 2015
@whyrusleeping whyrusleeping mentioned this issue Jun 9, 2015
@jbenet jbenet mentioned this issue Jun 16, 2015
@whyrusleeping
Member Author

The RAM usage is primarily because we allocate memory for every file read in, and while it does get 'released', the GC is slower than the allocator, causing large portions of your available RAM to be eaten. This can be helped by having a buffer pool like we do in go-msgio.
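
(A minimal sketch of that idea using the standard library's sync.Pool; go-msgio has its own pool type, and the 256KiB buffer size here is illustrative, not what add actually uses:)

```go
package main

import "sync"

// A buffer pool in the spirit of go-msgio's: reuse read buffers instead
// of allocating a fresh one per file chunk, so allocation stops
// outrunning the GC.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 256*1024) },
}

func getBuf() []byte  { return bufPool.Get().([]byte) }
func putBuf(b []byte) { bufPool.Put(b) }
```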

I haven't noticed much CPU usage while testing, but as we get closer to the ~200MB/s limit of sha256 hashing, I believe we will start maxing that out.
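
(That figure is easy to sanity-check per machine with a standard-library benchmark, placed in any *_test.go file and run with `go test -bench=SHA256`:)

```go
package perf

import (
	"crypto/sha256"
	"testing"
)

// Rough SHA-256 throughput check; compare the MB/s that `go test
// -bench=SHA256` reports against the ~200MB/s estimate above.
func BenchmarkSHA256(b *testing.B) {
	buf := make([]byte, 1<<20) // hash 1MiB per iteration
	b.SetBytes(int64(len(buf)))
	for i := 0; i < b.N; i++ {
		sha256.Sum256(buf)
	}
}
```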

The primary bottleneck right now in terms of time, though, is the datastore: we do directory syncs for every write, and this slows us down a lot. If we could batch together multiple writes, it would save us a lot of time.
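
(A hand-wavy sketch of the batching idea, written against a hypothetical store interface rather than the real go-datastore API:)

```go
package perf

// Store is a hypothetical datastore interface (the real go-datastore
// API differs); Sync stands in for the per-write directory sync.
type Store interface {
	Put(key string, value []byte) error
	Sync() error
}

// BatchedStore writes through immediately but pays the Sync cost once
// per `limit` puts instead of once per put.
type BatchedStore struct {
	s       Store
	pending int
	limit   int
}

func NewBatchedStore(s Store, limit int) *BatchedStore {
	return &BatchedStore{s: s, limit: limit}
}

func (b *BatchedStore) Put(key string, value []byte) error {
	if err := b.s.Put(key, value); err != nil {
		return err
	}
	if b.pending++; b.pending >= b.limit {
		return b.Flush()
	}
	return nil
}

// Flush issues the single deferred sync covering all puts since the
// last one.
func (b *BatchedStore) Flush() error {
	b.pending = 0
	return b.s.Sync()
}
```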

@davidar
Member

davidar commented Aug 18, 2015

This bug is particularly bad on a smaller VPS, where ipfs add tends to trigger the OOM killer almost immediately.

@whyrusleeping
Member Author

Add has been improved significantly since this was filed. I'm going to call it closed; please open new issues for any new add performance problems.
