
Why is SAM not saved when training ends? #9

Open
Ultraman6 opened this issue Oct 13, 2024 · 1 comment

Comments

@Ultraman6

Hi, I ran through your program. Why are only the LoRA weights saved at the end of fine-tuning? The prompt encoder and mask decoder downstream of SAM are trainable ("hot") parameters — don't they need to be saved as well?
Your inference_eval.py also loads only the LoRA parameters, without considering the parameters downstream of SAM at all!

@MathieuNlp
Owner

Hi,

Regarding the training checkpoints, see the "Limitation" section of the README; it explains why I chose to save only at the end of training.
Regarding the loading of weights: I freeze all of SAM's weights, so loading a pretrained SAM model is enough. Therefore I only need to load the LoRA checkpoint.
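The pattern described above — freeze the base model, save only the adapter weights, and at inference load the pretrained base plus the small LoRA checkpoint — can be sketched in PyTorch. This is a minimal illustration with a hypothetical toy module standing in for SAM; it is not the repository's actual code, and the class and file names here are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for SAM + LoRA: a frozen "base" layer plus
# two trainable low-rank adapter layers (rank 2 here, for illustration).
class ToyLoraModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.base = nn.Linear(4, 4)                 # plays the role of frozen SAM weights
        self.lora_a = nn.Linear(4, 2, bias=False)   # trainable LoRA down-projection
        self.lora_b = nn.Linear(2, 4, bias=False)   # trainable LoRA up-projection

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))

model = ToyLoraModel()

# Freeze everything except the LoRA layers, mirroring the maintainer's setup.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("lora_")

# Save ONLY the trainable LoRA parameters; the frozen base never changes,
# so there is no need to checkpoint it.
lora_state = {n: t for n, t in model.state_dict().items() if n.startswith("lora_")}
torch.save(lora_state, "lora_ckpt.pt")

# At inference time: rebuild the model (loading the pretrained base is enough),
# then overlay just the LoRA checkpoint. strict=False tolerates the base keys
# that are absent from the adapter-only checkpoint.
fresh = ToyLoraModel()
missing, unexpected = fresh.load_state_dict(torch.load("lora_ckpt.pt"), strict=False)
```

Because the base weights are never updated during fine-tuning, reloading the original pretrained base plus this small adapter checkpoint reconstructs the fine-tuned model exactly, which is why no downstream SAM parameters need to be saved in this setup.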
