
About the llava_med stage1 modality alignment part #7

Open
warmbreeze92 opened this issue Oct 21, 2024 · 0 comments

Comments

@warmbreeze92

When you fine-tuned the llava model, did you use the stage2 60k instruction-tuning data directly, or did you also run stage1 fine-tuning first? When I do LoRA fine-tuning with llamafactory on the 60k_zh data you released, the loss never converges. Do you have any thoughts on what might cause this?
Also, did you happen to keep the stage1 modality-alignment data? If so, could you share a copy? The number of PMC articles in the original data and the download speed are driving me to despair X﹏X
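
For context, a non-converging loss can come from the data format rather than the hyperparameters, so below is the kind of sanity check I would run on the dataset before training. This is only a sketch: the file name, image directory, and the `conversations`/`images` keys are assumptions based on the usual sharegpt-style layout, not the actual 60k_zh release.

```python
import json
import os

# Hypothetical paths -- substitute the actual 60k_zh file and image folder.
DATA_PATH = "llava_med_60k_zh.json"
IMAGE_DIR = "images"

with open(DATA_PATH, "r", encoding="utf-8") as f:
    samples = json.load(f)

bad = 0
for i, sample in enumerate(samples):
    convs = sample.get("conversations", [])
    # Roles should alternate human/gpt starting with human; otherwise the
    # loss mask over the assistant turns is computed on the wrong tokens.
    roles_ok = all(
        turn.get("from") == ("human" if j % 2 == 0 else "gpt")
        for j, turn in enumerate(convs)
    )
    # The number of <image> placeholders must match the images listed.
    n_tags = sum(turn.get("value", "").count("<image>") for turn in convs)
    images = sample.get("images") or sample.get("image") or []
    if isinstance(images, str):
        images = [images]
    files_ok = all(os.path.exists(os.path.join(IMAGE_DIR, p)) for p in images)
    if not (convs and roles_ok and n_tags == len(images) and files_ok):
        bad += 1
        if bad <= 5:  # print only the first few offenders
            print(f"sample {i}: roles_ok={roles_ok}, <image> tags={n_tags}, "
                  f"images={len(images)}, files_ok={files_ok}")

print(f"{bad}/{len(samples)} samples look malformed")
```

If the format checks out, the other usual suspects in my experience are a chat template in llamafactory that does not match the base model and a LoRA learning rate that is too high for a dataset of this size.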
