
Bio-Medical EXpert LMM with English and Arabic Language Capabilities


mbzuai-oryx/BiMediX2


BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities


*Equally contributing first authors

Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE

Website Paper HuggingFace License

📢 Latest Updates

  • Dec-15-24: Our model checkpoints are released on HuggingFace link. 🔥
  • Dec-11-24: BiMediX2 Technical Report is released link. 🔥
  • Dec-02-24: BiMediX2 is featured by AI at Meta link. 🔥
  • Sep-24-24: BiMediX2 wins the Meta Llama Impact Innovation Award link. 🔥

👩‍⚕️ Overview

Introducing BiMediX2, the first bilingual (Arabic-English) Bio-Medical Expert Large Multimodal Model (LMM) designed for advanced medical image understanding and applications. Built on the Llama 3.1 architecture, BiMediX2 seamlessly integrates text and visual modalities to enable multilingual interactions, including text-based queries and multi-turn conversations involving medical images. Trained on a diverse bilingual and multimodal healthcare dataset of 1.6M samples, it achieves state-of-the-art performance across various benchmarks. BiMediX2 outperforms recent models in multimodal medical evaluations, delivering over 9% improvement in English and 20% in Arabic evaluations, and excelling in tasks like medical VQA, Report Generation, and Summarization.


🏆 Contributions

Our key contributions are as follows:

  • We introduce the first bilingual medical LMM that achieves state-of-the-art results on VLM evaluation benchmarks across various medical image modalities, while also excelling on medical LLM evaluation benchmarks.
  • We curated a comprehensive Arabic-English multimodal bilingual instruction set named BiMed-V comprising over 1.6M instructions.
  • We introduce the first bilingual GPT-4o-based medical LMM benchmark named BiMed-MBench, consisting of 286 medical queries in English and Arabic across various medical image modalities, fully verified by medical experts.
  • Our BiMediX2 LLM outperforms GPT-4 by more than 8% on the USMLE benchmark and by more than 9% in UPHILL factual accuracy evaluations.
  • Our BiMediX2 LMM achieves state-of-the-art results on BiMed-MBench, with over a 9% improvement in English evaluations and more than a 20% improvement in Arabic evaluations. Furthermore, it excels in medical Visual Question Answering, Report Generation, and Report Summarization tasks.

📊 Model Performance

BiMed-MBench Evaluation

(Figure: BiMed-MBench evaluation comparison)

Clinical LLM Evaluation

(Figure: clinical LLM evaluation against state-of-the-art models)


֎ BiMediX2 Architecture

(Figure: BiMediX2 architecture)


🌟 Examples

(Example conversations in English and Arabic across diverse medical domains)


📜 License & Citation

BiMediX2 is released under the CC-BY-NC-SA 4.0 License. For more details, please refer to the LICENSE file included in our BiMediX repository.

⚠️ Warning! This release, intended for research, is not ready for clinical or commercial use.

Users are urged to employ BiMediX2 responsibly, especially when applying its outputs in real-world medical scenarios. It is imperative to verify the model's advice with qualified healthcare professionals and not to rely on it for medical diagnoses or treatment decisions. Despite its overall advancements, BiMediX2 shares common challenges with other language models, including hallucinations, toxicity, and stereotypes. BiMediX2's medical diagnoses and recommendations are not infallible.

If you use BiMediX2 in your research, please cite our work as follows:

@misc{mullappilly2024bimedix2biomedicalexpertlmm,
      title={BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities}, 
      author={Sahal Shaji Mullappilly and Mohammed Irfan Kurpath and Sara Pieri and Saeed Yahya Alseiari and Shanavas Cholakkal and Khaled Aldahmani and Fahad Khan and Rao Anwer and Salman Khan and Timothy Baldwin and Hisham Cholakkal},
      year={2024},
      eprint={2412.07769},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.07769}, 
}

🙏 Acknowledgements

We are thankful to Meta for releasing their Llama models, and to LLaVA, Axolotl, and LLaVA++ for their open-source code contributions.

We would like to thank Dr. Omair Mohammed, Dr. Mohammed Zidan, and Dr. Vishal Thomas Oommen for their contributions to verifying the medical responses.

The computations were enabled by resources provided by LUMI, hosted by CSC (Finland) and the LUMI consortium, and by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the NSC.

We are grateful to the Meta Llama Impact Innovation Awards for recognizing BiMediX2 as one of the winners in September 2024. This recognition highlights our commitment to advancing AI-driven healthcare solutions.

