When I run `run_app.py`, I get an exception. I am running `run_app.py` on a Tesla V100 with only 16 GB of GPU memory. How can I make it run successfully?

In addition, I have set `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:21`, but it still cannot finish.
Hi, thank you for your interest. In this work, we only tested on V100 GPUs with 32 GB of memory.

If you want to use a V100 with 16 GB of GPU memory, there are several tricks to reduce memory usage. For example, you can reduce the StyleGAN model size to 256x256, use fewer StyleGAN features and reduce the DatasetGAN input feature size as indicated at https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L6, or reduce the ensemble model number as indicated at https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L15. However, this is out of the scope of the current EditGAN project.
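To illustrate the config-level tweaks above, here is a minimal sketch of shrinking the memory-hungry fields in a DatasetGAN experiment config before training. The key names (`featuremaps_dim`, `model_num`) and the sample values are assumptions for illustration; check the actual keys in `experiments/datasetgan_car.json` in the editGAN_release repo.

```python
import json

# Hypothetical config fragment; real key names and values may differ --
# see experiments/datasetgan_car.json in the editGAN_release repo.
cfg = {
    "featuremaps_dim": [512, 512, 5088],  # input feature size (config line 6)
    "model_num": 10,                      # ensemble size (config line 15)
}

# Halve the spatial feature resolution and keep the channel dimension.
cfg["featuremaps_dim"] = [256, 256, cfg["featuremaps_dim"][2]]
# Train fewer ensemble classifiers to cut peak GPU memory.
cfg["model_num"] = 3

print(json.dumps(cfg))
```

Each change trades some segmentation quality for memory, so you may want to relax them one at a time until training fits in 16 GB.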