Point-supervised segmentation of microscopy images and volumes via objectness regularization

1. Introduction

This project contains our work on point-supervised segmentation of microscopy images and volumes via objectness regularization, together with re-implementations of PseudoEdgeNet (paper) and LCFCN (paper | code). The LCFCN part retains most of the original code, while PseudoEdgeNet was implemented from the paper because we could not find a public implementation.

2. Usage

2.1 Environment setup

To set up the environment, we recommend using

conda env create -f environment.yml

and

pip install -r requirements.txt

These commands install pydicom and the Haven library, which helps manage the experiments.
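
To quickly confirm the environment is ready, you can try importing the two key dependencies. This is a minimal sketch that assumes the Haven library is importable as haven (the import name used by the haven-ai package); adjust if your installation differs.

# check_env.py -- sanity check that the key dependencies are importable.
# Assumes Haven is importable as `haven` (haven-ai); adjust if your install differs.
import pydicom
from haven import haven_utils as hu

print("pydicom version:", pydicom.__version__)
print("haven_utils module loaded:", hu.__name__)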

2.2 Prerequisites

A stain normalization step must be performed during preprocessing, before generating the objectness maps. Clone the stain normalization code:

git clone https://github.com/wanghao14/Stain_Normalization.git
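
As a rough illustration, the sketch below applies Reinhard stain normalization to a single training image. It assumes the cloned repository provides a stainNorm_Reinhard module with a Normalizer exposing fit() and transform() (as in StainTools-derived code), and the file names are hypothetical; check the repository for the exact module and class names.

# stain_norm_sketch.py -- illustrative only; module and class names are assumptions
# based on StainTools-style code, and the image paths are hypothetical.
import cv2
import stainNorm_Reinhard  # from the cloned Stain_Normalization repository

target = cv2.cvtColor(cv2.imread("Dataset/Train/Images/reference.png"), cv2.COLOR_BGR2RGB)
source = cv2.cvtColor(cv2.imread("Dataset/Train/Images/img_001.png"), cv2.COLOR_BGR2RGB)

normalizer = stainNorm_Reinhard.Normalizer()
normalizer.fit(target)                     # fit to a reference image
normalized = normalizer.transform(source)  # map the source stains to the reference

cv2.imwrite("Dataset/Train/Images/img_001_norm.png",
            cv2.cvtColor(normalized, cv2.COLOR_RGB2BGR))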

2.3 Download Dataset

You can download the BBBC, TNBC, or MoNuSegTrainingData dataset into the project directory. Each dataset contains the split indices for each validation fold as well as the data itself.

2.4 Train and cross-validation

2.4.1 Preprocessing

Before training, the objectness map for each image must be generated. Make sure the images, point annotations, and ground truth (for validation) are saved in the directory structure below:

Dataset
├── Train
│   ├── Images
│   ├── Points
│   └── GroundTruth
├── Validate
│   └── ...
└── Test
    └── ...

Then run generate_obj.py to generate the objectness maps.
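
Before running generate_obj.py, a small sanity check can catch missing annotations early. The sketch below assumes that each image and its point annotation share the same file stem, which is a hypothetical naming convention; adapt it to your data.

# check_pairs.py -- verify every training image has a matching point annotation.
# Assumes image and point files share the same stem (hypothetical convention).
from pathlib import Path

images = {p.stem for p in Path("Dataset/Train/Images").iterdir() if p.is_file()}
points = {p.stem for p in Path("Dataset/Train/Points").iterdir() if p.is_file()}

missing = sorted(images - points)
if missing:
    print("Images without point annotations:", missing)
else:
    print(f"All {len(images)} images have point annotations.")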

2.4.2 Example

Train and validate the model on the BBBC dataset:

python trainval.py -d BBBC -e exp_config_ponet.json -r 1

Train and validate the model on the TNBC dataset:

python trainval.py -d TNBC -e exp_config_ponet.json -r 1

Train and validate the PseudoEdgeNet model on the MoNuSeg dataset:

python trainval.py -d MoNuSegTrainingData -e exp_config_ponet.json -r 1

Both trainval.py and trainval.ipynb can be used to train and validate the model. trainval.ipynb also gives a brief introduction to the dataset setup.

2.4.3 Module location

The models are defined in src/models. The dataset classes are defined in src/datasets/__init__.py. The loss functions are defined in lcfcn/lcfcn_loss.py: compute_weighted_crossentropy is used by the PseudoEdgeNet model, and compute_lcfcn_loss is used by the LCFCN model.
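
For intuition, a point-supervised weighted cross-entropy can be sketched as below. This is not the repository's compute_weighted_crossentropy, just a minimal PyTorch version that penalizes only the annotated point pixels (via ignore_index) and optionally re-weights classes.

# weighted_ce_sketch.py -- minimal sketch, NOT the repository's implementation.
import torch
import torch.nn.functional as F

def point_weighted_crossentropy(logits, point_labels, class_weights=None, ignore_index=-1):
    # logits: (B, C, H, W); point_labels: (B, H, W) with class ids at the
    # annotated points and `ignore_index` everywhere else (unlabeled pixels).
    return F.cross_entropy(logits, point_labels, weight=class_weights, ignore_index=ignore_index)

# Toy usage: 2 classes, one annotated foreground point, everything else unlabeled.
logits = torch.randn(1, 2, 8, 8)
labels = torch.full((1, 8, 8), -1, dtype=torch.long)
labels[0, 4, 4] = 1
print(point_weighted_crossentropy(logits, labels))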
