ICCV 2023, "GraphEcho: Graph-Driven Unsupervised Domain Adaptation for Echocardiogram Video Segmentation"

GraphEcho: Graph-Driven Unsupervised Domain Adaptation for Echocardiogram Video Segmentation

🔨 PostScript

  😄 This project is the PyTorch implementation of [paper];

  😆 Our experimental platform is configured with one RTX 3090 (CUDA >= 11.0);

  😊 Currently, this code is available for the public datasets CAMUS and EchoNet;

  😃 For code related to the CardiacUDA dataset:

      👀 The code is now available at:       ..\datasets\cardiac_uda.py

  😍 For access to the CardiacUDA dataset:

      👀 Please follow the link to access our dataset:

💻 Installation

  1. You need to build the relevant environment first; please refer to: requirements.yaml

  2. Install the environment:

    conda env create -f requirements.yaml
    
  • We recommend using Anaconda to establish an independent virtual environment, with Python >= 3.8.3;
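After activating the environment, a quick sanity check that the active interpreter meets the Python >= 3.8.3 requirement can be helpful (a minimal sketch; the conda environment name defined in requirements.yaml may differ on your machine):

```python
import sys

# The README recommends Python >= 3.8.3; fail fast if the interpreter
# in the activated conda environment is older than that.
required = (3, 8, 3)
if sys.version_info[:3] < required:
    raise RuntimeError(
        f"Python {'.'.join(map(str, required))}+ required, "
        f"found {sys.version.split()[0]}"
    )
print("Python version OK:", sys.version.split()[0])
```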

📘 Data Preparation

1. EchoNet & CAMUS

  • This project provides a use case for the echocardiogram video segmentation task;

  • The hyperparameter settings for the datasets can be found in train.py, where you can modify them;

  • Because the dataset composition differs significantly between tasks, each dataset has its own loading code;

    1.1. Download The CAMUS.

    💬 For details of CAMUS, please refer to: https://www.creatis.insa-lyon.fr/Challenge/camus/index.html/.

    1. Download & Unzip the dataset.

      The CAMUS dataset is organized into /testing & /training directories.

    2. The source code for loading the CAMUS dataset is located at:

      ..\datasets\camus.py
      Modify the dataset path in:
      ..\train_camus_echo.py

      New version: we have updated infos.npy in our newly released code.
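As an illustration of the expected layout, the sketch below enumerates patient folders under an unzipped CAMUS root (the patientNNNN folder naming follows the official CAMUS release; the root path is a placeholder that you should adapt when editing camus.py and train_camus_echo.py):

```python
from pathlib import Path

def list_camus_patients(camus_root):
    """Return sorted patient directory names under /training and /testing.

    camus_root is a placeholder path; point it at your unzipped CAMUS folder.
    """
    patients = {}
    for split in ("training", "testing"):
        split_dir = Path(camus_root) / split
        # Official CAMUS folders are named patient0001, patient0002, ...
        patients[split] = sorted(
            p.name for p in split_dir.glob("patient*") if p.is_dir()
        )
    return patients
```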

    1.2. Download The EchoNet.

    💬 For details of EchoNet, please refer to: https://echonet.github.io/dynamic/.

    1. Download & Unzip the dataset.

      • The EchoNet dataset consists of: /Video, FileList.csv & VolumeTracings.csv.
    2. The source code for loading the EchoNet dataset is located at:

      ..\datasets\echo.py
      Modify the dataset path in:
      ..\train_camus_echo.py
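A minimal sketch of reading FileList.csv to group videos by split. The column names FileName and Split follow the public EchoNet-Dynamic release, but verify them against your copy of the file:

```python
import csv
from collections import defaultdict

def split_file_list(filelist_csv):
    """Group EchoNet video file names by their train/val/test split.

    Assumes FileList.csv has at least the columns 'FileName' and 'Split',
    as in the public EchoNet-Dynamic release; verify against your copy.
    """
    splits = defaultdict(list)
    with open(filelist_csv, newline="") as f:
        for row in csv.DictReader(f):
            splits[row["Split"]].append(row["FileName"])
    return dict(splits)
```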

2. CardiacUDA

  1. Please access the dataset through: XiaoweiXu's GitHub
  2. Follow the instructions and download.
  3. Download and unzip the datasets.
  4. Modify your code in both:
    ..\datasets\cardiac_uda.py
    and modify the infos and dataset path in
    ..\train_cardiac_uda.py
    # The structure of the infos dict should be:
    # dict{
    #     center_name: {
    #                  file: {
    #                        views_images: {image_path},
    #                        views_labels: {label_path},}}}
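To illustrate the nesting above, a hedged sketch that builds such an infos dict and stores it with NumPy. The center/file identifiers and the use of lists for paths are illustrative placeholders; check train_cardiac_uda.py for the exact key names and value types it expects:

```python
import numpy as np

def build_infos(samples):
    """Assemble the nested center -> file -> views mapping described above.

    samples: iterable of (center_name, file_id, image_path, label_path).
    The keys 'views_images'/'views_labels' follow the README comment;
    everything else here is illustrative.
    """
    infos = {}
    for center, file_id, image_path, label_path in samples:
        entry = infos.setdefault(center, {}).setdefault(file_id, {})
        entry.setdefault("views_images", []).append(image_path)
        entry.setdefault("views_labels", []).append(label_path)
    return infos

# A dict must be saved with pickling enabled, and read back with
# np.load("infos.npy", allow_pickle=True).item().
# np.save("infos.npy", build_infos(samples))
```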

🐾 Training

  1. In this framework, once the parameters are configured in train_cardiac_uda.py and train_camus_echo.py, you only need to run:

    python train_cardiac_uda.py

    and

    python train_camus_echo.py
  2. You can also start distributed training.

    • Note: please set the number of GPUs you need and their IDs in the parameter "enable_GPUs_id".
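A hedged sketch of how such a GPU-id parameter could be parsed. The flag name enable_GPUs_id comes from the README, but the comma-separated value format and the default shown here are assumptions; check train_cardiac_uda.py for the actual definition:

```python
import argparse

def parse_gpu_args(argv=None):
    """Parse the GPU-id list for (distributed) training.

    The flag name 'enable_GPUs_id' follows the README; the
    comma-separated value format and the default are assumptions.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--enable_GPUs_id",
        type=lambda s: [int(i) for i in s.split(",")],
        default=[0],
        help="Comma-separated GPU ids to use, e.g. 0,1,2",
    )
    return parser.parse_args(argv)
```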

🚀 Code Reference
🚀 Updates Ver 1.0(PyTorch)
🚀 Project Created by Jiewen Yang : jyangcu@connect.ust.hk
