⚠️ Please check that this feature request hasn't been suggested before.
I searched previous Ideas in Discussions and didn't find any similar feature requests.
I searched previous Issues and didn't find any similar feature requests.
🔖 Feature description
Allow for training in the model parallel mode when there is more than one node involved.
Specifically, allow the model to be split sequentially over multiple GPUs in the case when there is more than one node present in the system.
This will allow for training large models across multiple nodes, in cases where a person cannot fit all of the required VRAM on a single machine, whether by hardware or space constraints.
✔️ Solution
From PRs #816 and #538, it seems that the solution could be as simple as enabling the model parallel state when WORLD_SIZE > 1, controlled by a configurable value either in the config YAML file or passed via the CLI.
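As a sketch of that trigger, here is a minimal helper. The function name and the `override` parameter (standing in for the proposed YAML/CLI flag) are hypothetical, not existing axolotl options; the only assumption taken from real tooling is that launchers such as torchrun and Accelerate export the total process count in the `WORLD_SIZE` environment variable.

```python
import os


def should_enable_model_parallel(env=None, override=None):
    """Decide whether to switch on sequential model parallelism.

    `override` stands in for the proposed config-YAML/CLI flag
    (hypothetical name). When it is not set, fall back to the
    WORLD_SIZE heuristic described in the issue: more than one
    process implies a multi-GPU (possibly multi-node) run.
    """
    if override is not None:
        return override
    env = env if env is not None else os.environ
    # Launchers like torchrun export WORLD_SIZE as the total
    # number of processes across all nodes; default to 1.
    return int(env.get("WORLD_SIZE", "1")) > 1
```

An explicit flag takes precedence so a user can still opt out of model parallelism in a multi-process run (e.g. to use data parallelism instead).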
❓ Alternatives
No response
📝 Additional Context
No response
Acknowledgements
My issue title is concise, descriptive, and in title casing.
I have searched the existing issues to make sure this feature has not been requested yet.
I have provided enough information for the maintainers to understand and evaluate this request.
I tried to implement this and discovered that some custom work will be needed to split the model across nodes and then ferry data back and forth. Accelerate supports multi-node training with MPI-like collective operators, but device_map does not support multi-node placement.
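The custom splitting mentioned above amounts to assigning each rank a contiguous slice of the model's layers, so activations only cross a node boundary between adjacent slices. A minimal sketch of that partitioning step (the helper name and shape are assumptions for illustration, not an Accelerate or axolotl API):

```python
def partition_layers(num_layers, world_size):
    """Split `num_layers` sequential layers into `world_size`
    contiguous, near-equal slices, one per rank.

    Returns a list of (start, end) half-open index ranges. Earlier
    ranks absorb the remainder when layers don't divide evenly.
    """
    if world_size < 1 or num_layers < world_size:
        raise ValueError("need at least one layer per rank")
    base, extra = divmod(num_layers, world_size)
    bounds, start = [], 0
    for rank in range(world_size):
        end = start + base + (1 if rank < extra else 0)
        bounds.append((start, end))
        start = end
    return bounds
```

For example, `partition_layers(32, 4)` yields four slices of eight layers each; the remaining work, which the comment above calls ferrying data, is sending each slice's output activations (and receiving gradients) across the node boundary, e.g. with point-to-point sends between adjacent ranks.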