Memory builds up when creating size-zero NDArray in a loop #14358
Comments
Hey, this is the MXNet Label Bot.
@mxnet-label-bot update [Bug, NDArray, CUDA]
We still observe the same issue after changing the context from mx.gpu(0) to mx.cpu(0). @mxnet-label-bot update [Bug, NDArray]
@mxnet-label-bot add [Backend, Memory]
Nice catch!
@anirudh2290 Could you please reopen this? The original fix has been reverted due to test flakiness. I am working on an alternative fix.
Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as a checklist of the essential information for most technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in whatever form you believe is best.
For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io
Description
Memory builds up when creating a size-zero NDArray in a loop.
Environment info (Required)
Package used (Python/R/Scala/Julia): Python
Error Message:
There is no error message; the symptom is steady memory growth. If you run `watch -n5 nvidia-smi`, you can observe GPU memory usage grow by about 2 MB every few seconds.
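As a cross-check that does not rely on nvidia-smi, here is a minimal sketch that reads free GPU memory from Python; it assumes `mx.context.gpu_memory_info` is available (present in later MXNet 1.x releases):

```python
import mxnet as mx

# Query (free, total) device memory in bytes for GPU 0.
# Assumes mx.context.gpu_memory_info exists (later MXNet 1.x).
free_bytes, total_bytes = mx.context.gpu_memory_info(0)
print("free: %d MB, total: %d MB"
      % (free_bytes // (1024 ** 2), total_bytes // (1024 ** 2)))
```

Calling this periodically while the reproduction loop below runs should show the free amount shrinking over time.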
Minimum reproducible example
Steps to reproduce
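The exact commands were not included in the report; the following is a minimal sketch consistent with the title and the comments, assuming `mx.nd.array([])` as one way to create a size-zero NDArray:

```python
import mxnet as mx

ctx = mx.gpu(0)  # per the comments, mx.cpu(0) reproduces the issue as well

# Repeatedly create a size-zero NDArray. While this loop runs,
# `watch -n5 nvidia-smi` reportedly shows device memory growing
# by roughly 2 MB every few seconds.
while True:
    a = mx.nd.array([], ctx=ctx)
```

Each iteration allocates a fresh NDArray of shape (0,); the reported behavior is that the backing allocations are never released, so memory accumulates even though the arrays hold no elements.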
What have you tried to solve it?
Related to #13951