I fine-tuned a pretrained model with LoRA. What confuses me is that the saved output directory, which contains adapter_config.json and adapter_model.bin, differs from the tutorial: besides those two files there are also some checkpoint directories. Do these checkpoint files affect loading the LoRA weights? Also, my adapter_model.bin is only 1 kB, which seems far too small compared to the official 19 MB. What could cause this, is it normal, and how should I adjust the parameters so that adapter_model.bin ends up the right size?
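For reference, a 1 kB adapter_model.bin almost always means the LoRA weights were not captured at save time (for example, save was called on the base model rather than on the PEFT-wrapped model). Below is a minimal, hedged sketch of a sanity check; `adapter_looks_empty` and its threshold are hypothetical helpers of mine, not part of PEFT, and the commented lines only illustrate the usual PEFT pattern, which requires `peft` and `transformers` to be installed:

```python
import os

def adapter_looks_empty(adapter_path, min_bytes=100_000):
    """Return True if the saved adapter file is suspiciously small.

    A healthy LoRA adapter for a mid-sized model is typically megabytes;
    ~1 kB usually means an empty or near-empty state dict was written.
    The 100 kB threshold is an assumption, not an official number.
    """
    return os.path.getsize(adapter_path) < min_bytes

# The usual PEFT saving pattern (illustration only, not executed here):
#   from peft import LoraConfig, get_peft_model
#   model = get_peft_model(base_model, LoraConfig(r=8, lora_alpha=16))
#   ... train ...
#   model.save_pretrained("output_dir")  # call on the PeftModel, not base_model
```

Usage: after training, run `adapter_looks_empty("output_dir/adapter_model.bin")`; if it returns True, check which object `save_pretrained` was called on before tweaking any LoRA hyperparameters.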
By the way, after finetuning "bigscience/bloom-560m", the saved model has a plausible file size, but saving / loading does not work as expected (the model does not remember the finetuning). #503
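One way to debug "the model does not remember the finetuning" is to hash the state dict before saving and after reloading; if the digests differ, the load silently fell back to other weights. A minimal sketch, using a plain dict to stand in for `model.state_dict()` and pickle to stand in for `save_pretrained`/`from_pretrained` (with a real model you would hash each tensor's raw bytes instead):

```python
import hashlib
import os
import pickle
import tempfile

def state_dict_digest(state_dict):
    """Deterministic fingerprint of a (toy) state dict, keyed by sorted names."""
    h = hashlib.sha256()
    for name in sorted(state_dict):
        h.update(name.encode())
        h.update(pickle.dumps(state_dict[name]))
    return h.hexdigest()

# Round trip through a file, as a save/load pair would do:
trained = {"lora_A.weight": [0.1, 0.2], "lora_B.weight": [0.3]}
path = os.path.join(tempfile.mkdtemp(), "adapter_demo.pkl")
with open(path, "wb") as f:
    pickle.dump(trained, f)
with open(path, "rb") as f:
    reloaded = pickle.load(f)

# If this fails after a real save/load, the adapter was not restored.
assert state_dict_digest(trained) == state_dict_digest(reloaded)
```

The same idea applies to the real workflow: compute the digest over the PeftModel's LoRA parameters right before saving, reload into a fresh base model, and compare.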
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.