FaceMaskDetection

2019-2020 PR project

Data

Download AIZOO data from AIZOO. Then unzip it.

Edit val/test_00000306.xml: the class name in it is "fask_nask"; correct it to "face_mask".
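
If you prefer to patch the file from a script, a minimal sketch (assuming the unzipped data sits in an AIZOO/ directory; adjust the path to your setup) is:

# One-off fix for the mislabeled class name in val/test_00000306.xml.
from pathlib import Path

xml_path = Path("AIZOO") / "val" / "test_00000306.xml"  # adjust to your data root
text = xml_path.read_text(encoding="utf-8")
xml_path.write_text(text.replace("fask_nask", "face_mask"), encoding="utf-8")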

Modules

  • create.py: create the train/val/test split.
  • voc2coco.py: convert the VOC-style data to COCO-style data.
  • dataset.py: AIZOO dataset class, in case you want to plug the data into your own model (a rough sketch of such a loader follows this list).
  • mmdetection/: code forked from open-mmlab; training and testing of RetinaNet and Faster RCNN.
    • tools/: the train and test scripts
    • configs/: the model configs
    • demo/image_demo.py: inference on the demo images
    • mmdet/: core code, datasets and models
  • PyTorch-YOLOv3/: code forked from eriklindernoren; training and testing of YOLOv3.
    • train.py: train the YOLOv3 model
    • test.py: test the model
    • detect.py: inference on the demo images
    • utils/: dataset and other utility functions
    • models.py: Darknet models
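
As a rough illustration of what a VOC-style loader for this data looks like, here is a minimal sketch; the class and field names below are hypothetical and are not the ones actually defined in dataset.py:

# Hypothetical VOC-style mask dataset, NOT the actual class in dataset.py.
import os
import xml.etree.ElementTree as ET

import torch
from PIL import Image
from torch.utils.data import Dataset

CLASSES = {"face": 0, "face_mask": 1}  # assumed AIZOO label set

class AIZOODatasetSketch(Dataset):
    def __init__(self, root, split_file):
        # split_file lists image stems, one per line (e.g. a split produced by create.py)
        with open(split_file) as f:
            self.ids = [line.strip() for line in f if line.strip()]
        self.root = root

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        stem = self.ids[idx]
        image = Image.open(os.path.join(self.root, stem + ".jpg")).convert("RGB")
        tree = ET.parse(os.path.join(self.root, stem + ".xml"))
        boxes, labels = [], []
        for obj in tree.findall("object"):
            box = obj.find("bndbox")
            boxes.append([float(box.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax")])
            labels.append(CLASSES[obj.find("name").text])
        return image, {"boxes": torch.tensor(boxes, dtype=torch.float32),
                       "labels": torch.tensor(labels, dtype=torch.int64)}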

Faster RCNN and RetinaNet

Clone and install requirements
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab

# install latest pytorch prebuilt with the default prebuilt CUDA version (usually the latest)
conda install -c pytorch pytorch torchvision -y
git clone https://github.com/meng-zha/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
pip install -v -e .
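
After installation, an optional sanity check that the environment is usable (not part of the original instructions) is:

# Check that PyTorch and mmdetection import cleanly and that CUDA is visible.
import torch
import mmdet

print("torch:", torch.__version__, "| cuda available:", torch.cuda.is_available())
print("mmdet:", mmdet.__version__)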
Data preparation
  1. Convert the VOC-XML data to COCO style (the shape of the resulting JSON is sketched after this list):
python voc2coco.py $AIZOO_PATH/train ./imagesets/train_split.txt $AIZOO_PATH/annotations/instances_train.json 0
python voc2coco.py $AIZOO_PATH/train ./imagesets/val_split.txt $AIZOO_PATH/annotations/instances_val.json 4918
python voc2coco.py $AIZOO_PATH/val ./imagesets/test_split.txt $AIZOO_PATH/annotations/instances_test.json 6120
  2. Replace the data root path with your own in configs/_base_/datasets/coco_detection.py and configs/ssd/ssd512_coco.py.
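
For reference, step 1 produces COCO-style JSON files; the structure below is a minimal sketch of that format with made-up values (the exact category ids written by voc2coco.py may differ):

# Rough shape of a COCO-style annotation file such as instances_train.json.
# All ids, sizes and boxes below are illustrative only.
coco_style = {
    "images": [
        {"id": 0, "file_name": "train_00000001.jpg", "width": 1024, "height": 768},
    ],
    "annotations": [
        # bbox is [x, y, width, height]; category_id refers to "categories"
        {"id": 0, "image_id": 0, "category_id": 1, "bbox": [100, 120, 80, 90],
         "area": 7200, "iscrowd": 0},
    ],
    "categories": [
        {"id": 0, "name": "face"},
        {"id": 1, "name": "face_mask"},
    ],
}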

RetinaNet

Train

To train on the AIZOO dataset, run:

$ ./tools/train.sh configs/retinanet/retinanet_x101_64x4d_fpn_1x_coco.py
Test

To test:

$ ./tools/test.sh configs/retinanet/retinanet_x101_64x4d_fpn_1x_coco.py $CHECKPOINTS_DIR $NUM_OF_MODEL
Inference

To get the detection results on the test images, run:

$ python demo/image_demo.py $IMG_PATH configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py $CHECKPOINTS_PATH
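
demo/image_demo.py wraps mmdetection's Python inference API; if you would rather call it from your own script, a minimal sketch (assuming the mmdetection 2.x API; the checkpoint and image paths are placeholders) is:

# Programmatic inference with mmdetection; config/checkpoint paths are placeholders.
from mmdet.apis import init_detector, inference_detector

config = "configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py"
checkpoint = "work_dirs/faster_rcnn/latest.pth"  # replace with your trained checkpoint
model = init_detector(config, checkpoint, device="cuda:0")

result = inference_detector(model, "demo/demo.jpg")
model.show_result("demo/demo.jpg", result, out_file="demo/result.jpg")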

Faster RCNN

You can replace "configs/retinanet/retinanet_x101_64x4d_fpn_1x_coco.py" with "configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py" in the commands above to train and test the Faster RCNN model.
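
For example, training Faster RCNN then becomes:

$ ./tools/train.sh configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py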


YOLOv3

Clone and install requirements
$ git clone https://github.com/meng-zha/PyTorch-YOLOv3
$ cd PyTorch-YOLOv3/
$ sudo pip3 install -r requirements.txt
Data preparation

Edit the root_path in config/custom.data (a hedged sketch of this file is shown below).
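
The exact keys in config/custom.data depend on this fork; only root_path is mentioned in this README, and the remaining entries below follow the upstream eriklindernoren layout, so treat them as assumptions:

# Hedged sketch of config/custom.data (key names other than root_path are assumed).
root_path=/path/to/AIZOO
classes=2
train=data/custom/train.txt
valid=data/custom/valid.txt
names=data/custom/classes.names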

Train

To train on the AIZOO dataset, run:

$ python train.py --model_def config/yolov3-custom.cfg --data_config config/custom.data
Test

To test:

$ python test.py --weights_path $CHECKPOINTS_PATH --model_def config/yolov3-custom.cfg --class_path data/custom/classes.names --data_config config/custom.data  --mode=test

To evaluate:

$ python test.py --weights_path $CHECKPOINTS_PATH --model_def config/yolov3-custom.cfg --class_path data/custom/classes.names --data_config config/custom.data  
Inference

To get the detection results on the test images, move the images to data/samples/, then run:

$ python detect.py --weights_path $CHECKPOINTS_PATH --model_def config/yolov3-custom.cfg --class_path data/custom/classes.names

Result

Our results can be downloaded from Tsinghua Cloud.

The best Faster RCNN model is in faster_rcnn_batch_1_best.

The best RetinaNet model is in retinanet_baseline.

The best YOLOv3 model is in yolov3_best.

Contact

Please don't contact me if you find bugs; otherwise, my mailbox will be overflowing.
