Release v1.3.0 #1268

Closed
10 of 19 tasks
shuds13 opened this issue Mar 20, 2024 · 4 comments

shuds13 commented Mar 20, 2024

Target: May 01 2024

Before release:

  • Check documentation. Are gen_on_manager and the change to default comms documented sufficiently?
  • Polaris build: the conda module disappeared but is still in the Polaris docs; check whether the Polaris instructions need updating (tested using the cray-python module). Now also tested with the conda module, which first requires: module use /soft/modulefiles

Features:

Possible:

@shuds13 shuds13 self-assigned this Mar 20, 2024

shuds13 commented Mar 21, 2024

I think gpu_detect is probably better left to HPC testing. We could mock something to check the output is received correctly, but it seems more important to test that we are interacting with the GPU detection packages correctly by doing it for real.
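
For reference, a minimal sketch of what such a mock could look like, assuming the caller simply consumes a count returned by a detection routine. The names below are illustrative placeholders, not libEnsemble's gpu_detect API:

```python
# Minimal sketch (illustrative names, not the real libEnsemble API): mock the
# GPU-detection call so a unit test can check its output is received correctly,
# without needing real hardware.
from unittest import mock


def report_gpus(detector):
    """Toy caller that consumes whatever the detector reports."""
    return {"num_gpus": detector()}


def test_report_gpus_with_mocked_detection():
    fake_detector = mock.Mock(return_value=4)  # pretend four GPUs were found
    assert report_gpus(fake_detector) == {"num_gpus": 4}
    fake_detector.assert_called_once()
```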


shuds13 commented Mar 22, 2024

Those lines in the genfunc init were set up for rsopt. I sort of thought we had a test for it, but maybe it was in the Windows testing. @jlnav?


jlnav commented Mar 22, 2024

> Those lines in the genfunc init were set up for rsopt. I sort of thought we had a test for it, but maybe it was in the Windows testing. @jlnav?

Yeah, I believe back in the day this had something to do with Windows testing, and any other circumstances where the set of optimizers wasn't staying alive in memory. If I recall correctly, this was also a solution for having all our imports at the top of the file instead of in local-optimization subprocesses, to keep flake8 happy.

My hunch is this should go away or get dramatically refactored. We've discussed before that perhaps each local optimizer really ought to have its own imports, but I think PETSc was the odd one out when we tried that.
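
For illustration, a rough sketch of the per-optimizer local-import pattern described above, with placeholder function names (not libEnsemble's actual structure):

```python
# Illustrative sketch only (placeholder names, not libEnsemble's layout):
# keep each optional optimizer dependency as a local import inside the routine
# that needs it, so the module imports cleanly when a backend is missing.
def run_scipy_local_opt(func, x0):
    from scipy.optimize import minimize  # imported only when this method runs

    return minimize(func, x0, method="Nelder-Mead")


def run_petsc_local_opt(func, x0):
    # PETSc was the awkward case mentioned above; its heavier import/setup is
    # one argument for keeping it isolated here rather than at module level.
    from petsc4py import PETSc  # noqa: F401

    raise NotImplementedError("TAO setup omitted in this sketch")
```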


shuds13 commented Apr 4, 2024

In node_resources, the `if _complete_set(cores_info):` branch in get_sub_node_resources is never exercised, as that would mean a GPU environment variable was detected, and none is set in any of the CI. We could have a unit test that sets the SLURM_GPUS_ON_NODE environment variable, but I'm a little cautious about setting meaningful environment variables in unit tests.

Actually I can mock it.

But it still does not happen currently, as there is no scheduler that detects both CPUs and GPUs via environment variables, though there might be one in the future.
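
A possible shape for that mock, using unittest.mock.patch.dict so the environment change is confined to the test. The reader function below is a stand-in for illustration, not the real get_sub_node_resources:

```python
# Sketch of mocking the environment variable inside a unit test, assuming the
# resource code reads it via os.environ; gpus_from_slurm_env is a stand-in,
# not the real libEnsemble detection code.
import os
from unittest import mock


def gpus_from_slurm_env():
    """Toy reader for the SLURM-provided GPU count."""
    return int(os.environ.get("SLURM_GPUS_ON_NODE", 0))


def test_gpus_on_node_mocked():
    with mock.patch.dict(os.environ, {"SLURM_GPUS_ON_NODE": "4"}):
        assert gpus_from_slurm_env() == 4
```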
