RealMedQA is a biomedical question answering dataset consisting of realistic question and answer pairs. The questions were created by medical students and a large language model (LLM), while the answers are guideline recommendations provided by the UK's National Institute for Health and Care Excellence (NICE). This repository contains the code to run experiments with the baseline models, i.e. Contriever, BM25, BERT, PubMedBERT, BioBERT, BioBERT fine-tuned on PubMedQA, and SciBERT. The full paper describing the dataset and the experiments has been accepted to the American Medical Informatics Association (AMIA) Annual Symposium and is available at https://arxiv.org/abs/2408.08624.
Installing the Python environment
pip install -r requirements.txt
Running the experiments
python main.py --model-type bm25 --dataset-type RealMedQA --batch-size 16 --seed 0
--model-type: str
- BM25: bm25
- BERT: bert-base-uncased
- SciBERT: allenai/scibert_scivocab_uncased
- BioBERT: dmis-lab/biobert-v1.1
- BioBERT fine-tuned on PubMedQA: blizrys/biobert-v1.1-finetuned-pubmedqa
- PubMedBERT: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract
- Contriever: facebook/contriever

--dataset-type: str
- RealMedQA
- BioASQ

--batch-size: int
- Batch size for encoding answers.

--seed: int
- Seed to initialize the random sampler of BioASQ QA pairs.
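To reproduce all of the baselines, the command above can be run once per model type. The following Python sketch is illustrative only: it assumes main.py is invoked from the repository root with exactly the arguments documented above, and the batch size and seed simply reuse the example values.

# Illustrative sketch: run main.py once for each baseline model type.
# Assumes the arguments behave as documented above; values are the example defaults.
import subprocess

MODEL_TYPES = [
    "bm25",
    "bert-base-uncased",
    "allenai/scibert_scivocab_uncased",
    "dmis-lab/biobert-v1.1",
    "blizrys/biobert-v1.1-finetuned-pubmedqa",
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract",
    "facebook/contriever",
]

for model_type in MODEL_TYPES:
    subprocess.run(
        [
            "python", "main.py",
            "--model-type", model_type,
            "--dataset-type", "RealMedQA",
            "--batch-size", "16",
            "--seed", "0",
        ],
        check=True,
    )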
The output is the JSON file metrics.json in the data directory, containing nDCG@k and MAP@k, as well as recall@k.
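Once a run has finished, the metrics can be inspected directly from the JSON file. The snippet below is a minimal sketch: the path data/metrics.json follows the description above, but the internal structure of the file (metric names as keys mapping to values) is an assumption, so adapt the printing loop to the actual contents.

# Illustrative sketch: load and print the evaluation metrics.
# The file location follows the description above; the key/value layout is assumed.
import json
from pathlib import Path

metrics_path = Path("data") / "metrics.json"
with metrics_path.open() as f:
    metrics = json.load(f)

for name, value in metrics.items():
    print(f"{name}: {value}")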
If you use this codebase, please cite our work using the following reference:
@misc{kell2024realmedqapilotbiomedicalquestion,
title={RealMedQA: A pilot biomedical question answering dataset containing realistic clinical questions},
author={Gregory Kell and Angus Roberts and Serge Umansky and Yuti Khare and Najma Ahmed and Nikhil Patel and Chloe Simela and Jack Coumbe and Julian Rozario and Ryan-Rhys Griffiths and Iain J. Marshall},
year={2024},
eprint={2408.08624},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.08624},
}