Commit
When wrapping the full Transformer `Block`, FSDP wraps both trainable and non-trainable parameters together. This results in significantly higher memory consumption, negating the memory savings from LoRA. By instead wrapping only `torch.nn.Linear` modules, we still make use of FSDP but avoid wrapping the LoRA layers (see the sketch below).
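A minimal sketch of this idea, not the repository's exact code: it assumes the frozen base weights live in plain `torch.nn.Linear` modules while the trainable LoRA layers are separate (non-`Linear`) modules, and that a distributed process group is already initialized. `build_model()` is a hypothetical helper.

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import ModuleWrapPolicy

# Hypothetical helper returning a LoRA-augmented Transformer whose frozen
# base weights are plain torch.nn.Linear modules and whose trainable LoRA
# layers are implemented as separate, non-Linear modules.
model = build_model()

sharded_model = FSDP(
    model,
    # Wrap each nn.Linear in its own FSDP unit. The LoRA modules are left
    # unwrapped, so frozen and trainable parameters are not flattened into
    # the same FSDP unit and memory usage stays low.
    auto_wrap_policy=ModuleWrapPolicy({torch.nn.Linear}),
)
```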