Release v1.3.0 #1268
Comments
I think gpu_detect is probably better left to HPC testing. We could mock something to check the output is received correctly, but it seems more important to test that we are interacting with the GPU detection packages correctly, by doing it for real.
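For reference, here is a minimal sketch of the mocking option described above. The names (detect_gpus, _pynvml_device_count) are hypothetical stand-ins for illustration, not libEnsemble's actual gpu_detect API; it only shows that the output path can be checked without real hardware.

```python
"""Sketch: mock the vendor query so the output path is testable off-GPU.
Function names here are hypothetical, not the real gpu_detect API."""
from unittest import mock


def _pynvml_device_count():
    # Stand-in for a real vendor query (e.g., via pynvml); fails off-GPU.
    raise RuntimeError("no GPU library available on this machine")


def detect_gpus():
    # Stand-in for a gpu_detect-style routine that defers to the vendor query.
    return _pynvml_device_count()


def test_detect_gpus_mocked():
    # Patch the vendor call so the check runs on GPU-less CI machines.
    # This verifies the output is received correctly, not real detection.
    with mock.patch(__name__ + "._pynvml_device_count", return_value=4):
        assert detect_gpus() == 4
```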
Those lines in gen_funcs __init__ were set up for …
Yeah, I believe back in the day this had something to do with Windows testing, and any other circumstances where the set of optimizers wasn't staying alive in memory. If I recall correctly, this was also a solution to having all our imports at the top of the file instead of in local-optimization subprocesses, to keep flake8 happy. My hunch is this should go away, or get dramatically refactored. We've discussed before that perhaps each local optimizer really ought to have its own imports, but I think PETSc was the odd one out when we tried that.
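A rough sketch of the "each optimizer owns its imports" idea, under the assumption that heavy dependencies move inside the branch that uses them. The structure and method names are illustrative, not the current gen_funcs layout:

```python
"""Sketch: per-optimizer local imports (illustrative, not the real layout).
Heavy dependencies load only in the branch that uses them, so e.g. PETSc
is imported only in the local-optimization subprocess that runs it."""


def run_local_opt(method, objective, x0):
    if method == "scipy_nelder_mead":
        # Local import: SciPy is loaded only when this optimizer is chosen.
        from scipy.optimize import minimize

        return minimize(objective, x0, method="Nelder-Mead").x
    if method == "petsc_tao":
        # PETSc never touches the parent process's import table.
        from petsc4py import PETSc  # noqa: F401

        raise NotImplementedError("TAO branch elided in this sketch")
    raise ValueError(f"unknown method: {method}")
```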
In node_resources, the cores_info is None case: actually, I can mock it. But it still does not happen currently, as there is no scheduler that detects both CPUs and GPUs via environment variables, though there might be one in the future.
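A small sketch of that mocking approach, assuming a future scheduler that reports both CPUs and GPUs via environment variables. The variable names and helper here are hypothetical, not node_resources' real interface:

```python
"""Sketch: mock scheduler env vars for the hypothetical CPU+GPU case.
Variable names and helper are made up for illustration only."""
import os
from unittest import mock


def cores_and_gpus_from_env():
    # Stand-in for node_resources-style detection from scheduler env vars.
    cores = os.environ.get("HYPO_CORES_PER_NODE")
    gpus = os.environ.get("HYPO_GPUS_PER_NODE")
    if cores is None or gpus is None:
        return None  # corresponds to the cores_info-is-None path
    return int(cores), int(gpus)


def test_env_detection_mocked():
    env = {"HYPO_CORES_PER_NODE": "64", "HYPO_GPUS_PER_NODE": "4"}
    # mock.patch.dict restores os.environ after the block exits.
    with mock.patch.dict(os.environ, env):
        assert cores_and_gpus_from_env() == (64, 4)
```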
Target: May 01 2024

Before release:

Features:

Possible:
- on_abort, on_cleanup
- method_adjust_procs
- gpu_detect, but not if that's difficult to test/mock. Same for tcp_mgr.
- cores_info is None case in node_resources