
RuntimeError: CUDA out of memory #1

Closed
o98k-ok opened this issue Feb 21, 2022 · 1 comment

Comments


o98k-ok commented Feb 21, 2022

When I run run_app.py, I get an exception:
(screenshot of the CUDA out-of-memory traceback)
I am not running run_app.py on a Tesla V100; I only have 16 GB of GPU memory. How can I make it run successfully?

In addition, I have set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:21, but it still cannot finish.
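On the max_split_size_mb point: the CUDA caching allocator reads PYTORCH_CUDA_ALLOC_CONF before the first CUDA allocation, so it should be set before torch is imported, and a value as small as 21 MB is close to the allowed minimum and may not help with fragmentation. A minimal sketch (the 128 MB value is an assumed starting point, not a recommendation from this project):

```python
import os

# Must be set before the first CUDA allocation, i.e. before importing torch.
# 128 is an illustrative starting value; tune it for your GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

try:
    import torch  # imported after the env var so the allocator picks it up
    if torch.cuda.is_available():
        free, total = torch.cuda.mem_get_info()
        print(f"free {free / 2**30:.1f} GiB of {total / 2**30:.1f} GiB")
except ImportError:
    pass  # torch not installed in this environment
```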

@arieling
Collaborator

arieling commented Feb 22, 2022

Hi, thank you for your interest. In this work, we only tested on V100 GPUs with 32 GB of memory.

If you want to run on a 16 GB V100, there are several tricks to reduce memory usage. For example, you can reduce the StyleGAN model size to 256×256, use fewer StyleGAN features and reduce the DatasetGAN input feature size as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L6, or reduce the number of ensemble models as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L15.
However, this is outside the scope of the current EditGAN project.
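The config-editing tricks above follow one general pattern: load the experiment JSON, lower the size-related fields, and write it back. A hypothetical sketch of that pattern; the key names (`feature_dim`, `ensemble_size`) and values here are placeholders, not the actual field names in `experiments/datasetgan_car.json` (check the linked lines for those):

```python
import json
import os
import tempfile

# Illustrative stand-in for the real experiment config; key names and
# values are made up for this sketch.
demo_cfg = {"feature_dim": 5056, "ensemble_size": 10}
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(demo_cfg, f)

# Load, shrink the memory-relevant fields, and save back.
with open(path) as f:
    cfg = json.load(f)
cfg["feature_dim"] //= 2   # smaller per-pixel feature vectors -> less memory
cfg["ensemble_size"] = 5   # fewer classifiers in the DatasetGAN ensemble
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
print(cfg)  # {'feature_dim': 2528, 'ensemble_size': 5}
```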


2 participants