Fine-Tuned BERT-base on SQuAD v1 #47
I have fine-tuned the TF model on SQuAD v1 and I've made the weights available at: https://s3.eu-west-2.amazonaws.com/nlpfiles/squad_bert_base.tgz

I get 88.5 F1 using these weights on the SQuAD dev set (if I recall correctly, roughly 82 EM).

I think it would be beneficial to host these weights here, so that people can play with SQuAD and BERT without having to fine-tune themselves, which requires a reasonably powerful setup. Let me know what you think!
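For anyone who wants to try a checkpoint like this, below is a minimal sketch of extractive QA with a SQuAD fine-tuned BERT-base in PyTorch. It assumes the shared TF checkpoint has first been converted to a PyTorch state dict; the `squad_bert_base/pytorch_model.bin` path, the `strict=False` loading, and the example question/context are all illustrative assumptions, not part of the tarball.

```python
# Minimal sketch: extractive QA with a SQuAD fine-tuned BERT-base.
# Assumes the shared TF checkpoint was already converted to a PyTorch
# state dict (the pytorch_model.bin path below is an assumed example).
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
# strict=False because a converted checkpoint may not match the QA head exactly
model.load_state_dict(torch.load("squad_bert_base/pytorch_model.bin",
                                 map_location="cpu"), strict=False)
model.eval()

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."

# SQuAD input format: [CLS] question [SEP] context [SEP], segment ids 0/1
q_tokens = tokenizer.tokenize(question)
c_tokens = tokenizer.tokenize(context)
tokens = ["[CLS]"] + q_tokens + ["[SEP]"] + c_tokens + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
segment_ids = torch.tensor([[0] * (len(q_tokens) + 2) + [1] * (len(c_tokens) + 1)])

with torch.no_grad():
    start_logits, end_logits = model(input_ids, token_type_ids=segment_ids)

# Greedy span selection; real SQuAD decoding also checks end >= start
# and restricts candidate spans to the context segment
start = int(start_logits.argmax())
end = int(end_logits.argmax())
print(" ".join(tokens[start:end + 1]))
```

Note that real SQuAD prediction also handles passages longer than the maximum sequence length with a sliding window (doc stride), which this sketch omits.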
Comments

Thanks for the details.
@thomwolf On SQuAD v1.1, BERT (single) scored 85.083 EM and 91.835 F1 as reported in their paper, but when I fine-tuned BERT using …
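For reference on the numbers being compared here, EM and F1 in SQuAD v1.1 are both computed on normalized answers (lowercased, punctuation and articles stripped). A condensed sketch of the official `evaluate-v1.1.py` logic follows; for brevity it handles a single ground-truth answer, whereas the official script takes the max over all ground truths per question.

```python
# Condensed SQuAD v1.1 metrics, following the official evaluate-v1.1.py.
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1(prediction, ground_truth):
    pred = normalize_answer(prediction).split()
    gold = normalize_answer(ground_truth).split()
    common = Counter(pred) & Counter(gold)  # token-level overlap with counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Articles are dropped by normalization, so this scores a full match:
print(f1("the William Shakespeare", "William Shakespeare"))  # 1.0
```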