Running on GPU #140
Hello, whether GPU acceleration is possible depends on the algorithm. In general, if the algorithm is gradient-based, you can run it on a GPU, i.e., these algorithms: https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle/castle/algorithms/gradient. This is done by setting the `device_type` and `device_ids` arguments, for example:

```python
from castle.algorithms import RL

rl = RL(device_type='gpu', device_ids=0)
```
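For context, here is a minimal end-to-end sketch of running RL on a GPU. It assumes the simulation helpers (`DAG.erdos_renyi`, `IIDSimulation`) and the `learn()`/`causal_matrix` interface shown in the gCastle README; the parameter values are only illustrative.

```python
# Minimal sketch, assuming the gCastle README's simulation helpers and
# learn()/causal_matrix interface; parameter values are illustrative only.
from castle.datasets import DAG, IIDSimulation
from castle.algorithms import RL

# Simulate a small linear-Gaussian dataset from a random DAG.
true_dag = DAG.erdos_renyi(n_nodes=10, n_edges=10, weight_range=(0.5, 2.0), seed=1)
dataset = IIDSimulation(W=true_dag, n=2000, method='linear', sem_type='gauss')
X = dataset.X

# Train on GPU 0; use device_type='cpu' if no GPU is available.
rl = RL(device_type='gpu', device_ids=0)
rl.learn(X)
print(rl.causal_matrix)  # learned adjacency matrix
```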
Thanks for your reply!
You are correct. The input will directly be used for setting the `CUDA_VISIBLE_DEVICES` environment variable. Following https://discuss.pytorch.org/t/os-environ-cuda-visible-devices-not-functioning/105545, you can try to set it yourself before doing any other imports at the top of the script (and just keep `device_ids=None`).
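A minimal sketch of that suggestion, assuming the rest of the script stays unchanged (the variable must be exported before anything that initializes CUDA is imported):

```python
# Sketch of the workaround above: set CUDA_VISIBLE_DEVICES at the very top
# of the script, before any other import, so later CUDA calls pick it up.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

from castle.algorithms import RL

# device_ids is left as None so the library does not overwrite the variable.
rl = RL(device_type='gpu', device_ids=None)
```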
Yes, setting the environment variable `CUDA_VISIBLE_DEVICES` manually works. However, the source code of the algorithm has the following commands:

```python
if self.device_type == 'gpu':
    if self.device_ids:
        os.environ['CUDA_VISIBLE_DEVICES'] = str(self.device_ids)
    device = torch.device('cuda')
else:
    device = torch.device('cpu')
```

In this way, it seems that the only thing I can do is decide which GPU to use, while training a single model on multiple GPUs is not feasible.
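To illustrate that concern with plain PyTorch (this is not gCastle code): `torch.device('cuda')` denotes a single default device, so even when several GPUs are visible, a model placed on it runs on one GPU unless it is explicitly parallelized, for example with `torch.nn.DataParallel`.

```python
# Plain PyTorch illustration (not gCastle code): with two GPUs visible,
# 'cuda' still means the single default device cuda:0.
import torch

device = torch.device('cuda')             # same as 'cuda:0'
model = torch.nn.Linear(8, 1).to(device)  # lives on one GPU only

# Using both visible GPUs for one model needs an explicit wrapper, e.g.:
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)  # replicates across visible GPUs
```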
Can you help confirm whether running the following works for multiple GPUs? I.e., setting:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"

from castle.algorithms import RL

rl = RL(device_type='gpu', device_ids=None)
```

The above should run on both GPU 0 and 1.
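A quick way to check what the process actually sees after that setup (plain PyTorch, independent of gCastle):

```python
# Sanity check: how many devices are visible to this process after
# CUDA_VISIBLE_DEVICES was set above.
import torch

print(torch.cuda.is_available())   # expect True on a GPU machine
print(torch.cuda.device_count())   # expect 2 if both GPUs are visible
```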
The paper "gCastle: A Python Toolbox for Causal Discovery" claims that "gCastle includes ... with optional GPU acceleration". However, I don't know how GPU acceleration can be used on this package. Can you give me an example of its usage?