
MedRAG Toolkit

MedRAG is a systematic toolkit for Retrieval-Augmented Generation (RAG) on medical question answering (QA). We use MedRAG to implement various RAG systems for the benchmark study on our MIRAGE (Medical Information Retrieval-Augmented Generation Evaluation) benchmark.

Preprint · Homepage · Corpus

News

  • (04/26/2024) Added support for Google/gemini-1.0-pro and meta-llama/Meta-Llama-3-70B-Instruct.
  • (02/26/2024) The code has been updated; it now supports all corpora and retrievers introduced in our paper.

Introduction

The following figure shows that MedRAG consists of three major components: Corpora, Retrievers, and LLMs.

(Figure: Overview of the MedRAG framework and its three components: Corpora, Retrievers, and LLMs.)

Corpus

For the corpora used in MedRAG, we collect raw data from four different sources: the commonly used PubMed for biomedical abstracts, StatPearls for clinical decision support, medical Textbooks for domain-specific knowledge, and Wikipedia for general knowledge. We also provide the MedCorp corpus, which combines all four sources to facilitate cross-source retrieval. Each corpus is chunked into short snippets.

Corpus       #Doc.    #Snippets   Avg. L   Domain
PubMed       23.9M    23.9M       296      Biomed.
StatPearls   9.3k     301.2k      119      Clinics
Textbooks    18       125.8k      182      Medicine
Wikipedia    6.5M     29.9M       162      General
MedCorp      30.4M    54.2M       221      Mixed

(#Doc.: number of raw documents; #Snippets: number of snippets (chunks); Avg. L: average length of snippets.)
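
Each corpus is distributed as chunked snippets; see the Corpus link above for the hosted files. As a minimal sketch of inspecting one snippet (the Hugging Face dataset path "MedRAG/textbooks" and the field names "id", "title", and "content" are assumptions, not guaranteed by this README):

from datasets import load_dataset

# Load the Textbooks corpus and look at one snippet (chunk).
textbooks = load_dataset("MedRAG/textbooks", split="train")
snippet = textbooks[0]
print(snippet["id"], "|", snippet["title"])
print(snippet["content"][:200])  # Textbooks snippets average length 182 (table above)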

Retriever

For the retrieval algorithms, we select four representative retrievers in MedRAG: a lexical retriever (BM25), a general-domain semantic retriever (Contriever), a scientific-domain retriever (SPECTER), and a biomedical-domain retriever (MedCPT).

Retriever    Type       Size   Metric   Domain
BM25         Lexical    --     BM25     General
Contriever   Semantic   110M   IP       General
SPECTER      Semantic   110M   L2       Scientific
MedCPT       Semantic   109M   IP       Biomed.

(IP: inner product; L2: L2 norm)
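
Each retriever can also be queried outside the full MedRAG pipeline. A minimal sketch, assuming the toolkit exposes a RetrievalSystem class in src/utils.py with a retrieve method (this class name, module path, and signature are assumptions, not documented in this README):

from src.utils import RetrievalSystem

# Hypothetical standalone retrieval: MedCPT over the Textbooks corpus.
retrieval = RetrievalSystem(retriever_name="MedCPT", corpus_name="Textbooks")
snippets, scores = retrieval.retrieve("facial nerve at the stylomastoid foramen", k=8)
for snippet, score in zip(snippets, scores):
    print(round(score, 3), snippet)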

LLM

We select several frequently used LLMs in MedRAG, including the commercial GPT-3.5 and GPT-4, the open-source Mixtral and Llama2, and the biomedical domain-specific MEDITRON and PMC-LLaMA. Temperatures are set to 0 for deterministic outputs.

LLM          Size    Context   Open   Domain
GPT-4        N/A     32,768    No     General
GPT-3.5      N/A     16,384    No     General
Mixtral      8×7B    32,768    Yes    General
Llama2       70B     4,096     Yes    General
MEDITRON     70B     4,096     Yes    Biomed.
PMC-LLaMA    13B     2,048     Yes    Biomed.

(Context: context length of the LLM; Open: open-source.)
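
For the commercial models, setting the temperature to 0 means greedy decoding in the chat API, so repeated runs on the same prompt return (near-)identical answers. A minimal sketch of such a call with the official openai client (independent of MedRAG's internal prompt templates, which are not shown in this README):

from openai import OpenAI

# Deterministic generation: temperature=0 disables sampling randomness.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": "Which foramen does the facial nerve exit through?"}],
    temperature=0,
)
print(response.choices[0].message.content)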

Requirements

  • First, install PyTorch suitable for your system's CUDA version by following the official instructions (2.1.1+cu121 in our case).

  • Then, install the remaining requirements: pip install -r requirements.txt

  • For GPT-3.5/GPT-4, an OpenAI API key is needed. Replace the placeholder with your key in src/config.py.

  • Git-lfs is required to download and load corpora for the first time.

  • Java is required for using BM25. A quick environment check is sketched after this list.
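
After installation, a short sanity check along these lines can confirm the environment (a sketch; 2.1.1+cu121 is the authors' setup, yours may differ):

import shutil
import torch

# CUDA-enabled PyTorch (2.1.1+cu121 in our case).
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())

# Java is needed for BM25; git-lfs is needed for the first corpus download.
print("java on PATH:", shutil.which("java") is not None)
print("git-lfs on PATH:", shutil.which("git-lfs") is not None)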

Usage

from src.medrag import MedRAG

question = "A lesion causing compression of the facial nerve at the stylomastoid foramen will cause ipsilateral"
options = {
    "A": "paralysis of the facial muscles.",
    "B": "paralysis of the facial muscles and loss of taste.",
    "C": "paralysis of the facial muscles, loss of taste and lacrimation.",
    "D": "paralysis of the facial muscles, loss of taste, lacrimation and decreased salivation."
}

## CoT Prompting
cot = MedRAG(llm_name="OpenAI/gpt-3.5-turbo-16k", rag=False)
answer, _, _ = cot.answer(question=question, options=options)

## MedRAG
medrag = MedRAG(llm_name="OpenAI/gpt-3.5-turbo-16k", rag=True, retriever_name="MedCPT", corpus_name="Textbooks")
answer, snippets, scores = medrag.answer(question=question, options=options, k=32) # scores are given by the retrieval system
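
The returned snippets and scores can be inspected directly. A short follow-up sketch (treating each snippet as a dict with "title" and "content" fields is an assumption about the snippet format):

## Show the top-3 retrieved snippets and their retrieval scores.
for snippet, score in zip(snippets[:3], scores[:3]):
    print(f"{score:.3f}  {snippet.get('title', '')}")
    print(snippet.get("content", "")[:150])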

Compatibility

We've tested the following LLMs with the MedRAG toolkit (a model-swap example follows the list):

  • OpenAI/gpt-4
  • OpenAI/gpt-3.5-turbo
  • Google/gemini-1.0-pro
  • meta-llama/Meta-Llama-3-70B-Instruct
  • meta-llama/Llama-2-70b-chat-hf
  • mistralai/Mixtral-8x7B-Instruct-v0.1
  • epfl-llm/meditron-70b
  • axiong/PMC_LLaMA_13B
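
Switching models only requires changing llm_name in the constructor shown in Usage. For example, with an open-source model (a sketch; running a 70B model locally needs substantial GPU memory and, for the Llama models, Hugging Face access approval):

## Same RAG pipeline with an open-source LLM instead of GPT-3.5.
medrag_llama = MedRAG(
    llm_name="meta-llama/Meta-Llama-3-70B-Instruct",
    rag=True,
    retriever_name="MedCPT",
    corpus_name="Textbooks",
)
answer, snippets, scores = medrag_llama.answer(question=question, options=options, k=32)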

Citation

@article{xiong2024benchmarking,
  title={Benchmarking retrieval-augmented generation for medicine},
  author={Xiong, Guangzhi and Jin, Qiao and Lu, Zhiyong and Zhang, Aidong},
  journal={arXiv preprint arXiv:2402.13178},
  year={2024}
}
