A Generative AI Extension for the OHIF Viewer

This repository contains the OHIF Viewer with a Generative AI extension that lets the user enter a text prompt and generates a chest CT scan from it. The OHIF Viewer is a medical image viewer provided by the Open Health Imaging Foundation (OHIF). The extension requires a backend server that runs the generative AI model (MedSyn) to convert the text input into 3D CT scans.


Screenshot of the Generative AI extension. Left: Findings and Impressions of the original CT scan. Right: a prompt field for generating a CT scan and the server status, with the already generated images below.

Developing

Requirements

  • Yarn 1.17.3+
  • Node 18+
  • Docker
  • Yarn Workspaces should be enabled on your machine:
    • yarn config set workspaces-experimental true
  • Running inference with the MedSyn model requires a GPU with 40 GB of memory

Getting Started

Run Application

  1. Clone this repository
    • git clone https://github.com/TomWartm/Viewers.git
  2. Navigate to the cloned project's directory
  3. yarn install to restore dependencies and link projects
  4. Start the Orthanc server: yarn orthanc:up (a quick check that Orthanc is reachable is sketched below this list)
  5. Start the application with Orthanc as the data source: yarn dev:orthanc (in a new terminal). Alternatively, yarn dev also works.
  6. You may need to update the backend URL used by the frontend, located in this file: extensions/text-input-extension/src/GenerativeAIComponent.tsx
  7. If you want to deploy the frontend, then you need to
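
Before starting the viewer, it can help to confirm that the Orthanc container from step 4 is actually reachable. The following is a minimal sketch (not part of this repository) that queries Orthanc's /system REST endpoint; the URL assumes the default local setup on port 8042.

import requests

ORTHANC_URL = "http://localhost:8042"  # default Orthanc port in the local setup

def orthanc_is_up(url: str = ORTHANC_URL) -> bool:
    """Return True if Orthanc answers on its /system REST endpoint."""
    try:
        info = requests.get(f"{url}/system", timeout=5).json()
    except requests.RequestException:
        return False
    print(f"Orthanc {info.get('Version', '?')} is running")
    return True

if __name__ == "__main__":
    print("OK" if orthanc_is_up() else "Orthanc is not reachable on port 8042")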

Run Backend

  1. Clone the backend repository (on a machine with large GPU RAM)
    • git clone https://github.com/TomWartm/MedsynBackend
  2. Navigate to the cloned project's directory
  3. Install the required Python packages: conda env create --file environment.yml
  4. Activate environment conda activate medsyn-3-8
  5. Navigate to the src folder
  6. Run the Flask server: python app.py (an illustrative sketch of such a server follows this list)
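
For orientation, the sketch below shows the general shape of such a Flask server: it accepts a text prompt and exposes a status the viewer's Generative AI panel can poll. This is an illustration only; the endpoint names and payloads are assumptions, not the actual MedsynBackend API.

from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # job_id -> job info, kept in memory for illustration only

@app.route("/status", methods=["GET"])
def status():
    # Lets the frontend poll whether the generation server is up.
    return jsonify({"status": "ready"})

@app.route("/generate", methods=["POST"])
def generate():
    # In the real backend this is where MedSyn would run on the GPU.
    prompt = request.get_json().get("prompt", "")
    job_id = str(len(jobs) + 1)
    jobs[job_id] = {"prompt": prompt, "state": "queued"}
    return jsonify({"job_id": job_id}), 202

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)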

Notes about Backend

  • Always shelve the VM when you are done using it
  • Don't release the Public IP Address
  • Open the web desktop to activate the conda environment in the terminal (you only need to start from step 4)

Add dummy Data

Add NIfTI files to the folder data/nifti (some are available on our Google Drive) and use the notebook backend/nifti_to_orthanc.ipynb to convert the files into DICOM and upload them to the Orthanc server (the upload step is sketched below).
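
The conversion itself is handled by the notebook; the upload half boils down to POSTing each DICOM file to Orthanc's /instances REST endpoint. A minimal sketch, assuming the converted slices were written to a local folder (the paths are placeholders):

from pathlib import Path
import requests

ORTHANC_URL = "http://localhost:8042"

def upload_dicom_folder(folder: str) -> None:
    """POST every .dcm file in `folder` to Orthanc's /instances endpoint."""
    for dcm_path in sorted(Path(folder).glob("*.dcm")):
        with open(dcm_path, "rb") as f:
            response = requests.post(f"{ORTHANC_URL}/instances", data=f.read())
        response.raise_for_status()
        print(f"Uploaded {dcm_path.name}: {response.json()['Status']}")

if __name__ == "__main__":
    upload_dicom_folder("data/dicom/generated_case")  # placeholder output folder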

Pinging the Model API on PSC

(this may be helpful: https://www.psc.edu/resources/bridges-2/user-guide/)

  • Run RunMedSyn.ipynb in this folder: https://drive.google.com/drive/u/0/folders/1BW8n9D_nBhsLVCdVsN52JaO72Ky23AdI
  • You need the whole folder on PSC in your home directory
  • You need to set up the conda environment:
    • module load anaconda3
    • conda activate # source /opt/packages/anaconda3/etc/profile.d/conda.sh
    • Go to the MedSyn folder and run conda env create --file environment.yml
    • conda activate medsyn-3-8
    • If the medsyn-3-8 kernel is not available in Jupyter, run conda install ipykernel and then python3 -m ipykernel install --user --name medsyn-3-8 --display-name "PYTHON-medsyn-3-8"
    • When you launch a Jupyter notebook, you have to set Extra Slurm Args to --gres=gpu:v100-32:4 (a quick GPU check is sketched after this list)
    • Set the partition to GPU-shared
    • After you generate an image there, you can either run backend/nifti_to_orthanc.ipynb in the GenAIViewer repo to view the image in the UI, or open it with the ITK-SNAP program instead
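
Once the notebook kernel is running, a quick check like the one below (an addition here, not part of RunMedSyn.ipynb, and assuming PyTorch is installed in the medsyn-3-8 environment) confirms that the Slurm GPU request actually gave the kernel the GPUs MedSyn needs:

import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")
else:
    print("No GPU visible -- check the Extra Slurm Args and the partition setting")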

Project

The OHIF Medical Image Viewing Platform is maintained as a monorepo. This means that this repository, instead of containing a single project, contains many projects. If you explore our project structure, you'll see the following:

.
├── extensions               #
│   ├── _example             # Skeleton of example extension
│   ├── default              # basic set of useful functionalities (datasources, panels, etc)
│   ├── cornerstone          # image rendering and tools w/ Cornerstone3D
│   ├── cornerstone-dicom-sr # DICOM Structured Report rendering and export
│   ├── cornerstone-dicom-seg # DICOM Segmentation rendering and export
│   ├── cornerstone-dicom-rt # DICOM RTSTRUCT rendering
│   ├── cornerstone-microscopy # Whole Slide Microscopy rendering
│   ├── dicom-pdf            # PDF rendering
│   ├── dicom-video          # DICOM video rendering
│   ├── measurement-tracking # Longitudinal measurement tracking
│   ├── text-input-extension # generative ML model prompting
│   └── tmtv                 # Total Metabolic Tumor Volume (TMTV) calculation
│
├── modes                    #
│   ├── _example             # Skeleton of example mode
│   ├── generative-ai        # generative ML model prompting
│   ├── basic-dev-mode       # Basic development mode
│   ├── longitudinal         # Longitudinal mode (measurement tracking)
│   ├── tmtv       # Total Metabolic Tumor Volume (TMTV) calculation mode
│   └── microscopy          # Whole Slide Microscopy mode
│
├── platform                 #
│   ├── core                 # Business Logic
│   ├── i18n                 # Internationalization Support
│   ├── ui                   # React component library
│   ├── docs                 # Documentation
│   └── viewer               # Connects platform and extension projects
│
├── ...                      # misc. shared configuration
├── lerna.json               # MonoRepo (Lerna) settings
├── package.json             # Shared devDependencies and commands
└── README.md                # This file

How-to

Manually load images

To manually load images into the tool, you can drag-and-drop them with the Upload feature on the study overview page, upload them directly to the Orthanc server through its web interface (http://localhost:8042/app/explorer.html), or upload them programmatically with Python (see backend/nifti_to_orthanc.ipynb and the upload sketch in the "Add dummy Data" section above).

Backend

Images are stored on the Orthanc server; you can open its interface running at http://localhost:8042/app/explorer.html. The sketch below shows how to list the stored studies programmatically.
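
A minimal sketch (not part of the repository), assuming the default local Orthanc setup, that lists every stored study via Orthanc's REST API:

import requests

ORTHANC_URL = "http://localhost:8042"

def list_studies() -> None:
    """Print the patient name and description of every study stored in Orthanc."""
    for study_id in requests.get(f"{ORTHANC_URL}/studies").json():
        study = requests.get(f"{ORTHANC_URL}/studies/{study_id}").json()
        patient = study.get("PatientMainDicomTags", {}).get("PatientName", "?")
        description = study.get("MainDicomTags", {}).get("StudyDescription", "")
        print(f"{study_id}: {patient} {description}")

if __name__ == "__main__":
    list_studies()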

License

MIT License
