add check for distributed optimizer which is unsupported for PEFT (NVIDIA#8323)

Signed-off-by: Chen Cui <chcui@nvidia.com>
cuichenx authored Feb 6, 2024
1 parent f5362c4 commit d421814
Showing 1 changed file with 2 additions and 0 deletions: nemo/collections/nlp/parts/mixins/nlp_adapter_mixins.py

@@ -175,6 +175,8 @@ def add_adapter(self, peft_cfgs: Union[PEFTConfig, List[PEFTConfig]]):

         if self.cfg.get('virtual_pipeline_model_parallel_size', None):
             raise ValueError('Virtual pipeline model parallel is not supported when using PEFT')
+        if self.cfg.optim.name == "distributed_fused_adam":
+            raise ValueError('distributed_fused_adam is not supported for PEFT. Please use fused_adam')
 
         if not isinstance(peft_cfgs, List):
             peft_cfgs = [peft_cfgs]
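For context, a minimal standalone sketch of the guard this commit adds. OmegaConf stands in for the model's `self.cfg` (NeMo configs are OmegaConf objects), and the config contents below are hypothetical, chosen only to trigger the failure path:

```python
from omegaconf import OmegaConf

# Hypothetical config standing in for a NeMo model's self.cfg (illustration only).
cfg = OmegaConf.create({"optim": {"name": "distributed_fused_adam"}})

try:
    # The guard added by this commit: PEFT does not support the distributed
    # optimizer, so fail fast and point users at the supported alternative.
    if cfg.optim.name == "distributed_fused_adam":
        raise ValueError('distributed_fused_adam is not supported for PEFT. Please use fused_adam')
except ValueError as err:
    print(err)  # distributed_fused_adam is not supported for PEFT. Please use fused_adam
```

In practice, a PEFT run hitting this error should switch the optimizer in its training config, typically by setting `model.optim.name=fused_adam`.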
