This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Memory leak when passing images of different dimensions with MXNET_CUDNN_AUTOTUNE_DEFAULT #12662

Closed
Ishitori opened this issue Sep 25, 2018 · 11 comments

Comments

@Ishitori
Contributor

Description

I have noticed that if I use MXNET_CUDNN_AUTOTUNE_DEFAULT=1 with large image dimensions (1900x1900), a lot of GPU memory is consumed during the forward pass and never released. Autotune then hits an out-of-memory exception if I pass a second image that is also large but has different dimensions (1800x1800).

The second image's dimensions are smaller, so my assumption was that if 1900x1900 can be processed, then 1800x1800 should be processed as well, since it requires less memory. That is not the case, however, because some of the GPU memory is not released after the first image is processed.

The main question for me is: why is GPU memory not released once the first image has been processed? Something seems to be holding on to it. I suspect a memory leak or some sort of cache that is never released.

Environment info (Required)

----------Python Info----------
Version      : 3.6.4
Compiler     : GCC 7.2.0
Build        : ('default', 'Jan 16 2018 18:10:19')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 9.0.1
Directory    : /home/ubuntu/anaconda3/lib/python3.6/site-packages/pip
----------MXNet Info-----------
/home/ubuntu/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Version      : 1.3.0
Directory    : /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet
Commit Hash   : b3be92f4a48bce62a5a8424271871c2f81c8f7f1
----------System Info----------
Platform     : Linux-4.4.0-1066-aws-x86_64-with-debian-stretch-sid
system       : Linux
node         : ip-172-31-22-61
release      : 4.4.0-1066-aws
version      : #76-Ubuntu SMP Thu Aug 16 16:21:21 UTC 2018
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2699.984
CPU max MHz:           3000.0000
CPU min MHz:           1200.0000
BogoMIPS:              4600.07
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0-31
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single kaiser fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0024 sec, LOAD: 0.5097 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0004 sec, LOAD: 0.3562 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0003 sec, LOAD: 0.3587 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0003 sec, LOAD: 0.1460 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0023 sec, LOAD: 0.0745 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0003 sec, LOAD: 0.0240 sec.

Package used (Python/R/Scala/Julia):
Python

Error Message:

Traceback (most recent call last):
  File "main.py", line 71, in <module>
    print(transform_fn(net, args.b, args.h, args.w))
  File "main.py", line 54, in transform_fn
    data_out = net(data_in).asnumpy()
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 1972, in asnumpy
    ctypes.c_size_t(data.size)))
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/base.py", line 252, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [02:19:57] src/operator/nn/./cudnn/cudnn_convolution-inl.h:870: Failed to find any forward convolution algorithm.  with workspace size of 1073741824 bytes, please consider reducing batch/model size or increasing the workspace size

Stack trace returned 10 entries:
[bt] (0) /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x36161a) [0x7f3ddb36761a]
...

Minimum reproducible example

import os
import argparse

#os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = "0"
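# (uncomment the line above to disable cuDNN autotune; see "What have you tried to solve it?" below)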

from mxnet import nd, gluon
import mxnet as mx
from mxnet.gluon import nn
import json

ctx = mx.gpu(0)

def create_model():
    net = gluon.nn.HybridSequential()
    with net.name_scope():
        net.add(nn.Conv2D(64, 5))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(64, 3))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(96, 3))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(96, 3))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(128, 3))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(128, 3))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(256, 3))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(128, 3))
        net.add(nn.LeakyReLU(0.1))
        net.add(nn.Conv2D(1, 3))
    return net


def model_fn():
    net = create_model()
    net.hybridize()
    net.initialize(mx.init.Normal(sigma=0.01), ctx=ctx)
    return net


def transform_fn(net, batch_size=1, height=500, width=500):
    data_in = nd.random_uniform(low=0, high=255, shape=(batch_size, 3, height, width), ctx=ctx, dtype="float32")

    data_out = net(data_in).asnumpy()
    return data_out

parser = argparse.ArgumentParser(description='Memory consumption checker')
parser.add_argument('--h', type=int, default=500, help='Height of an image, default 500')
parser.add_argument('--w', type=int, default=500, help='Width of an image, default 500')
parser.add_argument('--b', type=int, default=1, help='Batch_size, default 1')
args = parser.parse_args()
print(args)

net = model_fn()
mx.nd.waitall()

while True:
    args.h = int(input("Height: "))
    args.w = int(input("Width: "))
    print(transform_fn(net, args.b, args.h, args.w))

Steps to reproduce

  1. Run the script on a p2 instance
  2. At the first prompt, enter image dimensions 1900 and 1900
  3. Check the amount of GPU memory in use once the forward pass is done and the second prompt is displayed (in my case it was 5508 MiB out of 11441 MiB)
  4. Enter image dimensions 1800 and 1800
  5. Observe the out-of-memory error

What have you tried to solve it?

Setting MXNET_CUDNN_AUTOTUNE_DEFAULT=0 seems to solve the problem (see the snippet after the list below). The inference time increases slightly, but memory seems to be reused properly:

  1. After the first image (1900x1900) is done, the consumption is almost the same: 5505/11441
  2. After the second image (1800x1800) is done, the consumption reaches 10222/11441, but processing doesn't fail
  3. I can even run a 3rd image (1700x1700); it gets processed fine and consumption drops to 4451/11441.
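
For reference, the workaround amounts to uncommenting the os.environ line at the top of the reproduction script so the variable is set before mxnet is imported. A minimal sketch:

import os

# Disable cuDNN autotune; setting this before the first `import mxnet` is the safe pattern.
os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = "0"

import mxnet as mx
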
@vandanavk
Contributor

@mxnet-label-bot [Bug, Memory]

@larroy
Contributor

larroy commented Oct 3, 2018

I think we discovered this before. Somehow I can't find the ticket, but I made some memory-consumption graphs on the Jetson board which indicated a leak in CUDA / cuDNN. We notified NVIDIA about this before. Maybe @DickJC123 has some more info.

@larroy
Contributor

larroy commented Oct 3, 2018

Is the memory released across executions of the script?

@larroy
Contributor

larroy commented Oct 3, 2018

@Ishitori
Contributor Author

Ishitori commented Oct 5, 2018

@larroy, when searching for the best algorithm the memory consumption spikes; once the algorithm is found it goes back down, but not to the level it was at before the search started.

@DickJC123
Contributor

I have an explanation, but I'll have to think about the best fix. The problem starts with the fact that cudnnFind() does its own workspace allocations and doesn't use MXNet's memory allocator. MXNet anticipates this by setting up 'headroom' via MXNET_GPU_MEM_POOL_RESERVE (a percentage of total memory). I was able to run your script with repeated allocations on a 16GB GPU by setting MXNET_GPU_MEM_POOL_RESERVE=35. On a 12GB GPU, the corresponding value would be 47!! That's clearly excessive, so we might have to resort to calling the 'Ex' flavor of cudnnFind, which allows pre-screening of algos whose workspace exceeds the threshold set by the convolution instance's 'workspace' param.
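
A minimal sketch of the headroom workaround described above; the 35% reserve is only the value reported to work on a 16GB card here, so treat the exact percentage as GPU-dependent:

import os

# Reserve a share of total GPU memory that MXNet's pool will not claim,
# leaving headroom for cudnnFind()'s own workspace allocations.
os.environ['MXNET_GPU_MEM_POOL_RESERVE'] = '35'

import mxnet as mx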

@DickJC123
Contributor

I had been working up a PR with a number of improvements to the way MXNet uses cudnnFind. With the PR's new serialized approach to running cudnnFind, a fix for your problem became possible. Please see #12804

@larroy
Contributor

larroy commented Oct 27, 2018

Can this be closed due to the previous fix by @DickJC123 ?

@Ishitori
Contributor Author

Ishitori commented Nov 2, 2018

Yes, I consider the issue fixed. I tried to reproduce it with the --pre version of MXNet using the same methodology, and it doesn't fail anymore.

@Ishitori Ishitori closed this as completed Nov 2, 2018
@larroy
Contributor

larroy commented Nov 9, 2018

How did you measure the memory? Do you think it makes sense to add a regression test?

@DickJC123
Contributor

As part of #12804, I took the minimal example code provided here and turned it into the initially failing unittest test_gluon_gpu.py:test_large_models. To make the test universal, I had its operation scale with the amount of available memory, as provided by a new Python API addition gpu_memory_info():

python -c "import mxnet; print(mxnet.context.gpu_memory_info(0))"

After CI showed the test failing, I pushed the fix that corrected the issue. So in summary, we have a new function to measure memory, and a new unittest for this issue.
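
In case it helps with the regression-test question, here is a minimal sketch of measuring memory by hand with the new API (assuming, as the command above indicates, that gpu_memory_info(0) returns a (free, total) tuple in bytes):

import mxnet as mx

def log_gpu_memory(tag, device_id=0):
    # gpu_memory_info returns (free_bytes, total_bytes) for the given GPU
    free, total = mx.context.gpu_memory_info(device_id)
    print("%s: %d MiB free of %d MiB" % (tag, free // 2**20, total // 2**20))

log_gpu_memory("before forward pass")
# ... run net(data_in).asnumpy() on a large input here ...
mx.nd.waitall()  # make sure all pending GPU work has finished
log_gpu_memory("after forward pass")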
