Does textattack support running the attack computation on a single model doing distributed inference across multiple GPUs?
For example, a large model can be loaded with the argument device_map="auto", which shards it across multiple GPUs, and then passed to HuggingFaceModelWrapper. However, when a single model instance is split across GPUs this way, running an attack (e.g. with attacker.attack_dataset()) raises a RuntimeError like the following (with two GPUs present):
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
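This error typically means the input tensors were placed on one fixed device (e.g. cuda:0) while part of the sharded model lives on another. The usual convention for sharded models is to send inputs to the device holding the model's first parameters rather than a hard-coded device. A minimal CPU-runnable sketch of that convention, using a plain nn.Linear as a hypothetical stand-in for a model loaded with device_map="auto":

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a large HuggingFace model loaded with
# device_map="auto"; with real sharding, next(model.parameters()).device
# would be the device of the first shard (e.g. cuda:0).
model = nn.Linear(4, 2)

# Send inputs to wherever the model's first parameters live, instead of
# assuming a single fixed device for the whole model.
input_device = next(model.parameters()).device
inputs = torch.ones(1, 4).to(input_device)

out = model(inputs)
print(tuple(out.shape))  # (1, 2)
```

With device_map="auto", Accelerate's dispatch hooks move activations between shards automatically, but the initial inputs still need to land on the first shard's device; a model wrapper that moves every tensor to one fixed device can trigger the error above.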