Don't output nvidia-smi failure in automated platform search #693
Comments
Ok, I think I understand the confusion, even though this lives at the debug level of the log. At the time this was implemented, we wanted to make sure there was some place in the log that reports the GPUs found in the system, for debugging purposes. We were having cases where the simulation was falling back to running on the CPU because there was some problem accessing the GPU/CUDA devices, so we just wanted to make sure the devices were available/found. On the other hand, I think we could try getting the exit code of that call to nvidia-smi.
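Something along these lines might work; this is just a minimal sketch assuming we switch to subprocess and check the return code ourselves (the helper name query_gpu_names is hypothetical, and the exact nvidia-smi arguments used in openmmtools may differ):

```python
import subprocess

def query_gpu_names():
    """Return the GPU names reported by nvidia-smi, or an empty list on failure.

    Checking the exit code (and catching FileNotFoundError for a missing
    binary) lets us log one concise debug message instead of leaking the
    shell's error output to the user's terminal.
    """
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        # nvidia-smi is not installed, so there is no NVIDIA driver to query.
        return []
    if result.returncode != 0:
        # nvidia-smi exists but failed (e.g. driver not loaded); treat as no GPUs.
        return []
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]
```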
Yes, I think checking the error code is the way to go. The relevant snippet is in openmmtools/openmmtools/multistate/multistatesampler.py, lines 1774 to 1780 at abd4011.
Actually, all we need to do is capture the output of the os.popen call and not have it dump to stderr.
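For reference, a rough sketch of what that could look like if we keep a shell invocation but route stderr away from the terminal (the exact nvidia-smi command here is an assumption on my part; note that os.popen itself only captures stdout, so the child's stderr would still reach the user unless it is redirected):

```python
import subprocess

# Hypothetical drop-in for the current os.popen("nvidia-smi ...") call.
# stderr is silenced so a missing nvidia-smi binary no longer prints
# "sh: nvidia-smi: command not found" to the user's terminal.
proc = subprocess.run(
    "nvidia-smi --query-gpu=name --format=csv,noheader",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
    text=True,
)
gpu_names = proc.stdout.strip().splitlines() if proc.returncode == 0 else []
```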
I can fix this.
@ijpulidos Why do we split by comma? |
We've had folks a bit confused about this message showing up when they don't have a GPU device:
Is there a way to avoid outputting this to users?