[Question] GPU not utilized #22
Did you run it in a Docker container? If so, make sure that you use the option
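For reference, a minimal sketch of exposing the GPU to a container. The image name below is a placeholder, not AF2Complex's actual image; the key part is the `--gpus` flag, which requires the NVIDIA Container Toolkit on the host:

```shell
# Placeholder image name; --gpus all exposes all host GPUs to the container
# (requires the NVIDIA Container Toolkit). nvidia-smi confirms the GPU is visible.
docker run --rm --gpus all af2complex:latest nvidia-smi
```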
Thank you for the information. I solved it, but actually in the opposite direction: my cards are RTX 6000 Ada, so I first had to update CUDA to 11.8, the minimum version that supports Ada-generation cards. Then I updated JAX, consulting jax-ml/jax#13570.
This memory warning caused a crash in a previous run, so I consulted the oligomer predictions and trimmed the low-confidence regions from the input sequence before feature generation. Is there anything I missed that caused the large GPU memory usage? I thought 2,135 residues was not absurdly large.
You may try reducing the MSA input size, for example to 5000:
Or use fewer structure templates, such as 2, if necessary. Also, disable the intermediate recycle metric calculations. If it runs successfully, try more recycles, such as 8 or above, which could give you a better model.
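The suggestions above might look like the following invocation. The flag names are best-effort assumptions based on the AF2Complex run script (`run_af2c_mod.py`) and should be checked against the README of your installed version:

```shell
# Flag names are assumptions modeled on AF2Complex's run script; verify them
# against your installed version's README before use.
python run_af2c_mod.py \
  --max_mono_msa_depth=5000 \
  --max_template_hits=2 \
  --save_recycled=0 \
  --recycling=8
```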
Thank you! I can see that with those settings the OOM problem is alleviated. I also set TF_FORCE_UNIFIED_MEMORY=1 so that TensorFlow is not squeezing the VRAM at the same time, hopefully.
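For context, the official AlphaFold setup pairs unified memory with a raised XLA allocator fraction; a minimal sketch:

```shell
# Unified memory lets JAX/XLA spill GPU allocations into host RAM instead of
# crashing on OOM (setting used by the official AlphaFold Docker setup).
export TF_FORCE_UNIFIED_MEMORY=1
# Allow the JAX/XLA allocator to claim more than the default fraction of VRAM;
# with unified memory enabled, values above 1.0 are permitted.
export XLA_PYTHON_CLIENT_MEM_FRACTION=4.0
```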
I'm thinking of modifying the script so that the variables that can potentially contribute to different prediction results can be tested sequentially and automatically. Would you mind pointing out a list of variables, including modes, presets etc. that should be included for a batch test? Thank you |
Use
For some challenging cases, the odds of getting a good model can be really small, like < 1%. But if you have enough computing resources and keep trying, you could be rewarded with a surprising success.
First, thanks for this great resource! I encountered a problem: my GPU is not utilized.
I configured AF2Complex in the same conda env as AlphaFold. I ran the examples and my own complex predictions with no problem, except that the GPU does not seem to be utilized.
May I know how to get the GPU into play?