A small Python library that waits for GPU conditions to be satisfied, and then
sets CUDA_VISIBLE_DEVICES to a qualifying GPU on multi-GPU systems.
pip install waitGPU
Use this library before any import that will use a GPU, such as torch or tensorflow.
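The ordering matters because CUDA frameworks read CUDA_VISIBLE_DEVICES when they first initialize, so the variable must already be set by the time the framework is imported. A minimal illustration of the mechanism (setting the variable by hand here, purely for demonstration; waitGPU does this for you):

```python
import os

# CUDA frameworks consult CUDA_VISIBLE_DEVICES at initialization, so it
# must be set before `import torch` / `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # restrict this process to GPU 1

# import torch  # any GPU import after this point sees only GPU 1
print(os.environ["CUDA_VISIBLE_DEVICES"])
```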
import waitGPU
waitGPU.wait(utilization=50, memory_ratio=0.5, available_memory=300,
gpu_ids=[1,2], interval=10, nproc=1, ngpu=1)
The keyword arguments passed to wait determine the criteria to wait for:
- utilization: wait until GPU utilization is at most the given value
- memory_ratio: wait until GPU memory utilization is at most the given value
- available_memory: wait until the available memory is at least the given value
- gpu_ids: only consider GPUs with the given IDs
- interval: the number of seconds to wait between checks of the conditions
- nproc: wait until the number of processes running on each GPU is at most the given value
- ngpu: wait until the number of GPUs that satisfy all other criteria is at least the given value (default: 1)
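Conceptually, the library re-checks these criteria every interval seconds until at least ngpu GPUs pass all of them at once. A sketch of that per-GPU check (the function name, stats fields, and dictionary shape below are illustrative assumptions, not the library's actual implementation):

```python
# Illustrative sketch: given per-GPU stats, keep the GPUs that satisfy
# every requested criterion. Default thresholds accept any GPU.
def qualifying_gpus(stats, utilization=100, memory_ratio=1.0,
                    available_memory=0, gpu_ids=None, nproc=float("inf")):
    ok = []
    for gpu in stats:
        if gpu_ids is not None and gpu["id"] not in gpu_ids:
            continue  # GPU not in the allowed set
        if gpu["utilization"] > utilization:
            continue  # too busy
        if gpu["memory_used"] / gpu["memory_total"] > memory_ratio:
            continue  # memory utilization too high
        if gpu["memory_total"] - gpu["memory_used"] < available_memory:
            continue  # not enough free memory
        if gpu["nproc"] > nproc:
            continue  # too many processes already running
        ok.append(gpu["id"])
    return ok

# Hypothetical snapshot of a 3-GPU machine (memory in MiB):
stats = [
    {"id": 0, "utilization": 90, "memory_used": 10000, "memory_total": 11000, "nproc": 3},
    {"id": 1, "utilization": 5,  "memory_used": 500,   "memory_total": 11000, "nproc": 0},
    {"id": 2, "utilization": 30, "memory_used": 4000,  "memory_total": 11000, "nproc": 2},
]
# With the criteria from the example call above, only GPU 1 qualifies:
# GPU 0 is excluded by gpu_ids, and GPU 2 runs too many processes.
print(qualifying_gpus(stats, utilization=50, memory_ratio=0.5,
                      available_memory=300, gpu_ids=[1, 2], nproc=1))
```

The real wait loop would sleep for interval seconds and refresh the stats whenever fewer than ngpu GPUs pass, returning only once enough GPUs qualify.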
This library builds on Jongwook Choi's gpustat library.
This code is in the public domain.