How to finetune with ANCE? #10
Thanks for your interest in RetroMAE! We fine-tune the model with hard negatives by changing the corresponding argument. The cross-encoder example fine-tunes a teacher model, whose prediction scores are used in distillation. To distill the retriever, you need to generate the teacher_score_files with the cross-encoder and add the argument to the training command in the bi_encoder example.
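As an aside for readers, the distillation step described in this reply (cross-encoder teacher scores guiding a bi-encoder student) is usually implemented as a KL-divergence loss between the two score distributions over a query's candidate passages. The sketch below is a minimal NumPy illustration with hypothetical function names, not RetroMAE's actual training code:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())
    return e / e.sum()

def distill_loss(student_scores, teacher_scores, temperature=1.0):
    """KL(teacher || student) over one query's candidate passages.

    student_scores: bi-encoder scores (e.g. dot products of embeddings)
    teacher_scores: cross-encoder scores for the same candidates
    """
    p = softmax(np.asarray(teacher_scores) / temperature)  # teacher distribution
    q = softmax(np.asarray(student_scores) / temperature)  # student distribution
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

When the student's score distribution matches the teacher's, the loss is zero; otherwise it is positive, pushing the retriever's rankings toward the cross-encoder's.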
Thank you for your reply!
For ANCE, we fine-tune the Shitao/RetroMAE_MSMARCO model. Please use the hyper-parameters in our script, which we found work better.
Thank you. I will try again.
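For readers following along: ANCE-style fine-tuning periodically refreshes hard negatives by retrieving the top-ranked non-positive passages with the current model, then training on them. A minimal sketch of that mining step, assuming dot-product similarity and hypothetical names (not RetroMAE's actual code):

```python
import numpy as np

def mine_hard_negatives(query_embs, passage_embs, positives, k=5):
    """Return, per query, the top-k highest-scoring passages that are
    not labeled positives; these serve as ANCE-style hard negatives.

    query_embs:   (num_queries, dim) array of query embeddings
    passage_embs: (num_passages, dim) array of passage embeddings
    positives:    dict mapping query index -> set of positive passage ids
    """
    scores = query_embs @ passage_embs.T  # dot-product similarity
    hard_negs = []
    for qid, row in enumerate(scores):
        ranked = np.argsort(-row)  # passage ids, best score first
        negs = [int(pid) for pid in ranked if pid not in positives[qid]][:k]
        hard_negs.append(negs)
    return hard_negs
```

In the full ANCE procedure this mining is re-run at intervals as the encoder updates, so the negatives stay "hard" with respect to the current model rather than a stale index.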
Hello,
Is the published code for fine-tuning the model with DPR? Where can I find the code for fine-tuning with ANCE?
Another question that confuses me: is this the code used to distill the retriever with a teacher model, i.e. the result corresponding to (0.416/0.709/0.927/0.988)?