# Welcome to the Hands on QA framework


## About this project


We explored recent studies on Question Answering systems, then tried out three different models for the sake of learning. Our steps were:

  1. First, we studied recent work on QA. More specifically, we studied Zylich et al.'s *Exploring Automated Question Answering Methods for Teaching Assistance*, published at the AIED conference in 2020 (Link). A summary of the paper is uploaded here.

  2. After that, we studied BERT: what its input and output formats are and how it works for QA. We then tried out a pretrained BERT model that had been fine-tuned on the SQuAD v1.1 dataset and inspected its output.

  3. Next, we studied DistilBERT, a distilled version of BERT. It is smaller, faster, cheaper, and lighter than BERT. It does not use token type IDs the way BERT does, and it produces output roughly 70% faster than BERT while giving results that are almost as accurate. The model we used was pretrained and fine-tuned on the same dataset as the BERT model above. We then compared its output against BERT's and verified the results (see the first sketch after this list).
  4. Lastly, we took a pretrained DistilBERT model and fine-tuned it ourselves on the SQuAD v2.0 training dataset. We then tested the fine-tuned model on the SQuAD v2.0 dev dataset and checked its accuracy (see the second sketch after this list).
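The README does not include code, so the snippet below is only a minimal sketch of how steps 2 and 3 can be reproduced with the Hugging Face `transformers` question-answering pipeline. The two checkpoint names (`bert-large-uncased-whole-word-masking-finetuned-squad` and `distilbert-base-uncased-distilled-squad`) are public SQuAD v1.1 models and are our assumption about which checkpoints were used; the context and question are toy examples.

```python
from transformers import pipeline

context = (
    "BERT was released by Google in 2018. DistilBERT is a smaller, faster, "
    "cheaper and lighter version of BERT produced with knowledge distillation."
)
question = "What is DistilBERT?"

# BERT large (whole-word masking) fine-tuned on SQuAD v1.1
bert_qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

# DistilBERT distilled and fine-tuned on the same SQuAD v1.1 data
distil_qa = pipeline(
    "question-answering",
    model="distilbert-base-uncased-distilled-squad",
)

# Compare the two models' answers and confidence scores on the same question
for name, qa in [("BERT", bert_qa), ("DistilBERT", distil_qa)]:
    result = qa(question=question, context=context)
    print(f"{name}: {result['answer']!r} (score={result['score']:.3f})")
```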
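For step 4, the sketch below shows one way to score a fine-tuned model on the SQuAD v2.0 dev split using the `datasets` and `evaluate` libraries. The local checkpoint path `./distilbert-finetuned-squad2` is hypothetical (it stands in for whatever model your fine-tuning run produced), and the 200-example slice is only there to keep the example fast.

```python
from datasets import load_dataset
import evaluate
from transformers import pipeline

# Hypothetical path: replace with the DistilBERT checkpoint produced by your own
# fine-tuning run on the SQuAD v2.0 training set.
qa = pipeline("question-answering", model="./distilbert-finetuned-squad2")

# Small slice of the dev set for a quick sanity check
dev = load_dataset("squad_v2", split="validation").select(range(200))
metric = evaluate.load("squad_v2")

predictions, references = [], []
for example in dev:
    out = qa(
        question=example["question"],
        context=example["context"],
        handle_impossible_answer=True,  # SQuAD v2.0 contains unanswerable questions
    )
    predictions.append({
        "id": example["id"],
        "prediction_text": out["answer"],
        # crude stand-in: treat an empty predicted answer as "no answer"
        "no_answer_probability": 1.0 if out["answer"] == "" else 0.0,
    })
    references.append({"id": example["id"], "answers": example["answers"]})

# Prints exact-match and F1 scores for the evaluated slice
print(metric.compute(predictions=predictions, references=references))
```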

## Contributors