Hi, I ran through your program. Why are only the LoRA weights saved at the end of fine-tuning? The prompt encoder and mask decoder downstream of SAM are trainable ("hot") parameters, so don't they need to be saved as well?
In your inference_eval.py, only the LoRA parameters are loaded too; the parameters downstream of SAM are not considered at all!
For the training checkpoints, see the "limitation" part of the README; it explains why I chose to save only at the end.
For the loading of weights, I freeze all of SAM's weights, so loading a stock SAM checkpoint is enough. Therefore I only need to load the LoRA checkpoint on top of it.
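For anyone confused by this, here is a minimal sketch of that save/load logic. The helper names (`save_lora_only`, `load_lora_only`, `wrap_with_lora`) and the `"lora"` key-name filter are my own assumptions for illustration, not this repo's exact API; the only real external call is `sam_model_registry` from facebookresearch/segment-anything.

```python
import torch
from segment_anything import sam_model_registry  # facebookresearch/segment-anything


def save_lora_only(model: torch.nn.Module, path: str) -> None:
    """Save only the injected LoRA tensors, not the frozen SAM weights."""
    # Assumption: the LoRA A/B matrices are the only parameters whose
    # names contain "lora" -- everything else is the frozen SAM backbone.
    lora_state = {k: v for k, v in model.state_dict().items() if "lora" in k.lower()}
    torch.save(lora_state, path)


def load_lora_only(model: torch.nn.Module, path: str) -> None:
    """Restore the LoRA tensors on top of an already-loaded SAM model."""
    lora_state = torch.load(path, map_location="cpu")
    # strict=False: the frozen SAM weights are intentionally absent from
    # this checkpoint, so a partial load is expected here.
    _, unexpected = model.load_state_dict(lora_state, strict=False)
    assert not unexpected, f"unknown keys in LoRA checkpoint: {unexpected}"


# Usage sketch: the frozen prompt encoder and mask decoder come from the
# original SAM checkpoint, so nothing downstream is lost by saving only
# the LoRA parameters.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
# model = wrap_with_lora(sam)        # hypothetical: inject LoRA, freeze SAM
# load_lora_only(model, "lora.pth")  # only the trained deltas are loaded
```

If the prompt encoder or mask decoder were unfrozen during training, their weights would diverge from the stock checkpoint and would indeed need to be saved; the scheme above only works because everything outside the LoRA matrices stays frozen.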