First of all, thank you very much for your work. I want to add some tunable parameters to the CLIP attention modules during LLaVA fine-tuning. These parameters have `requires_grad=True` and have been included in `optimizer_grouped_parameters`. However, during training I noticed that the gradients for these added parameters are `p.grad=None` in the optimizer. Could you please advise how I should modify the project? Many thanks!
(Note: I have already commented out the `@torch.no_grad()` decorator in `clip_encoder.py`; is there anything else that needs to be changed?)
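For reference, here is a minimal, self-contained PyTorch sketch of the pattern being attempted (this is not LLaVA code; the module name `AttentionWithTunableScale` and the parameter `tunable_scale` are purely illustrative). It shows the three conditions that typically have to hold for `p.grad` to be populated: the new parameter is registered as an `nn.Parameter` on the module, it actually participates in the forward computation that feeds the loss, and no `torch.no_grad()` context or `.detach()` sits on the path from the input to the loss.

```python
import torch
import torch.nn as nn

# Minimal sketch: a frozen "CLIP-like" attention block with one added tunable
# parameter, used to verify that gradients actually reach that parameter.
class AttentionWithTunableScale(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # The added parameter must be an nn.Parameter registered on the module,
        # otherwise it never appears in named_parameters() / the optimizer.
        self.tunable_scale = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        # The parameter must be used in the computation that produces the loss;
        # an unused parameter keeps p.grad == None after backward().
        return out * self.tunable_scale


block = AttentionWithTunableScale(dim=64)

# Freeze the original attention weights and keep only the added parameter
# trainable, mirroring the "frozen vision tower + new tunable params" setup.
for name, p in block.named_parameters():
    p.requires_grad = "tunable_scale" in name

x = torch.randn(2, 16, 64)
# Crucial: no torch.no_grad() (and no @torch.no_grad() decorator) anywhere on
# the path from the input to the loss, and the features must not be detached.
loss = block(x).sum()
loss.backward()

for name, p in block.named_parameters():
    if p.requires_grad:
        print(name, "grad is None?", p.grad is None)  # expect False
```

If a comparable check inside the training loop still prints `None`, a likely remaining cause is that the vision-tower features are computed under a `torch.no_grad()` context or detached somewhere upstream of the loss, even after the decorator in `clip_encoder.py` has been removed.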