
Running on GPU #140

Open
zhj2022 opened this issue Dec 1, 2023 · 5 comments

zhj2022 commented Dec 1, 2023

The paper "gCastle: A Python Toolbox for Causal Discovery" claims that "gCastle includes ... with optional GPU acceleration". However, I don't see how GPU acceleration can be used with this package. Could you give me an example of its usage?

@shaido987 (Collaborator)

Hello,

Whether GPU acceleration is possible or not depends on the algorithm. In general, if the algorithm is gradient-based then you can run it on a GPU, i.e., these algorithms: https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle/castle/algorithms/gradient. This is done by setting device_type='gpu' and optionally setting the device_ids parameter. So, with the RL algorithm as an example:

from castle.algorithms import RL

rl = RL(device_type='gpu', device_ids=0)
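
For completeness, a minimal end-to-end sketch (this assumes the simulated-data utilities from the gcastle README, DAG.erdos_renyi and IIDSimulation; substitute your own data as needed):

from castle.algorithms import RL
from castle.datasets import DAG, IIDSimulation

# Simulate a small linear-Gaussian dataset and fit RL on GPU 0.
weighted_dag = DAG.erdos_renyi(n_nodes=10, n_edges=20, weight_range=(0.5, 2.0), seed=1)
dataset = IIDSimulation(W=weighted_dag, n=2000, method='linear', sem_type='gauss')

rl = RL(device_type='gpu', device_ids=0)
rl.learn(dataset.X)
print(rl.causal_matrix)  # estimated adjacency matrix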

@zhj2022 (Author) commented Dec 6, 2023


Thanks for your reply!
I tried running the algorithm on a single GPU following your description and it worked. However, I can't get it to run on two GPUs. Based on the source code, I guessed the proper way would be to set the argument device_ids to "0, 1", but that didn't work.
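
For reference, this is roughly the call I tried (a sketch, not my exact script):

from castle.algorithms import RL

# Attempt at using two GPUs by passing both ids as a string (did not work).
rl = RL(device_type='gpu', device_ids='0, 1')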

@shaido987 (Collaborator)

You are correct. The input is used directly to set the CUDA_VISIBLE_DEVICES environment variable, so setting "0, 1" should work.

Following https://discuss.pytorch.org/t/os-environ-cuda-visible-devices-not-functioning/105545, you can try setting it yourself at the top of the script, before any other imports (and just keep device_ids as the default). It could be that the setting is not taking effect correctly.

@zhj2022 (Author) commented Dec 9, 2023

Yes, setting the environment variable CUDA_VISIBLE_DEVICES at the top of the script lets me choose which GPU to use. For example, on a machine with two GPUs, either os.environ["CUDA_VISIBLE_DEVICES"] = "1" or os.environ["CUDA_VISIBLE_DEVICES"] = "1, 0" makes the program run on cuda:1.

However, the algorithm's source code contains the following:

if self.device_type == 'gpu':
    if self.device_ids:
        os.environ['CUDA_VISIBLE_DEVICES'] = str(self.device_ids)
    device = torch.device('cuda')
else:
    device = torch.device('cpu')

In this way, it seems that the only thing I can do is choose which GPU to use, while training a single model across multiple GPUs is not feasible.
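
For context, my understanding is that multi-GPU training in PyTorch normally requires wrapping the model explicitly, which this code does not do. A generic sketch of that pattern (plain PyTorch for illustration, not gcastle code):

import torch
import torch.nn as nn

# With CUDA_VISIBLE_DEVICES="0, 1", DataParallel splits each input batch
# across the visible GPUs; a bare torch.device('cuda') only uses cuda:0.
model = nn.Linear(16, 1)  # stand-in for the algorithm's actual model
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(torch.device('cuda'))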

@shaido987 (Collaborator)

Can you help confirm whether running the following works for multiple GPUs, i.e., setting CUDA_VISIBLE_DEVICES before the import as in the link above?

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"

from castle.algorithms import RL
rl = RL(device_type='gpu', device_ids=None)

The above should run on both GPU 0 and GPU 1.
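
To double-check that both GPUs are actually visible, you can also print the device count right after setting the variable (a small sketch, assuming a torch-based algorithm):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"  # must be set before torch initializes CUDA

import torch
print(torch.cuda.device_count())  # should print 2 if both GPUs are visible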
