Welcome to the Makeability Lab's repository on fine-tuning RTMDet models! Fine-tuning is a pivotal process in deep learning where a pre-trained model, already trained on a large dataset, is further trained or "fine-tuned" on a smaller, task-specific dataset. This approach leverages the features and patterns learned during the initial training, making it highly efficient for computer vision tasks like image classification and object detection.
In this repo we mainly apply the feature-extraction approach: we freeze the base layers of the model, leveraging their learned features, and only train the final layers specific to our task.
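As a rough sketch of what this looks like in an MMDetection config (this fragment is illustrative, not part of this repo's scripts; the inherited config filename assumes the download step described below):

```python
# Illustrative fragment of a custom MMDetection config.
# Inherit the downloaded pre-trained config, then freeze the backbone.
_base_ = './checkpoints/rtmdet-ins_l_8xb32-300e_coco.py'

model = dict(
    backbone=dict(
        # CSPNeXt (RTMDet's backbone) exposes `frozen_stages`; setting it to 4
        # freezes all four stages so only the neck and heads are fine-tuned.
        frozen_stages=4,
    ),
)
```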
Please note that this repo is valid as of April 12th, 2024. If OpenMMLab, the owner of MMDetection and RTMDet, changes their implementation in the future, please refer to their official GitHub.
First, you will have to set up a conda environment, the MMDetection toolbox, and PyTorch. If your GPU supports CUDA, please also install it.
- Install Anaconda.
- Install PyTorch. Choose the CUDA version if you have a CUDA-capable device.
- Install MMDetection.
It is recommended that you install PyTorch first and then MMDetection; otherwise your PyTorch might not be correctly compiled with CUDA.
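For reference, the typical sequence from the MMDetection documentation (double-check the official installation guide, as versions may have changed since this repo was written) is: create and activate a conda environment, install PyTorch following the selector on pytorch.org, then run `pip install -U openmim`, `mim install mmengine`, `mim install "mmcv>=2.0.0"`, and finally `mim install mmdet`.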
- Once you have installed everything, first make three folders inside the mmdetection directory, namely `./data`, `./checkpoints`, and `./work_dir`, either manually or using `mkdir` in conda.
- The next step is to download pre-trained config and weights files from MMDetection. For example, `mim download mmdet --config rtmdet-ins_l_8xb32-300e_coco --dest ./checkpoints` downloads a pre-trained RTMDet instance segmentation model (large variant) that was trained with 8 GPUs, a batch size of 32, and 300 epochs on the COCO dataset. You should name your weights file in the same way, and you can find the config files for all available models here.
- After downloading the pre-trained model that you would like to work with, run `python test_install.py` to check that it is working correctly. If an image with segmentation masks pops up, then you have installed everything correctly. Otherwise, check the error messages and search for them online.
- Move your COCO_MMdetection dataset to `./data` and run `python coco_classcheck.py` to check the classes contained in your data.
- To fine-tune a pre-trained model, you will have to set up a customized config file. Check and run `python config_setup.py`; a sketch of what such a config might contain is shown after this list.
- Now, run `python tools/train.py PATH/TO/CONFIG` and let the training process start. If the training is interrupted but the last checkpoint was successfully saved into `./work_dir`, you can resume the process from where it stopped by running `python tools/train.py PATH/TO/CONFIG --resume auto`. Remember to toggle the resume option in your config file to `True`.
- When training is done, run `python infer_img.py` or `python infer_video.py` to test the fine-tuned model on either a single image or a video. A minimal inference sketch also follows this list.
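As a reference for the config step above, here is a minimal sketch of the kind of customized config `config_setup.py` might produce; every class name, path, and value below is a placeholder assumption, not this repo's actual output:

```python
# custom_rtmdet_config.py -- illustrative sketch only; adapt to your dataset.
_base_ = './checkpoints/rtmdet-ins_l_8xb32-300e_coco.py'

# Hypothetical dataset with two classes.
metainfo = dict(classes=('class_a', 'class_b'))
data_root = './data/'

model = dict(
    # Match the head to the number of classes in your dataset. The
    # `frozen_stages` override shown earlier would also go here.
    bbox_head=dict(num_classes=2),
)

train_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/train.json',
        data_prefix=dict(img='train/'),
    ))
val_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/val.json',
        data_prefix=dict(img='val/'),
    ))
val_evaluator = dict(ann_file=data_root + 'annotations/val.json')
# test_dataloader / test_evaluator are set up analogously for the test set.

# Initialize from the downloaded pre-trained weights (replace with the
# actual .pth filename in ./checkpoints).
load_from = './checkpoints/PATH/TO/WEIGHTS.pth'
resume = False  # toggle to True when resuming an interrupted run
```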
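And as a sketch of what single-image inference could look like (this repo's `infer_img.py` may differ; `DetInferencer` is MMDetection's high-level inference API, and the paths are placeholders):

```python
# Minimal single-image inference sketch using MMDetection's DetInferencer.
from mmdet.apis import DetInferencer

inferencer = DetInferencer(
    model='PATH/TO/CONFIG',      # your customized config file
    weights='PATH/TO/WEIGHTS',   # fine-tuned checkpoint from ./work_dir
    device='cuda:0',             # or 'cpu' if no CUDA device is available
)

# Run the model on one image and write the visualized result to ./outputs.
inferencer('demo.jpg', out_dir='./outputs')
```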
When training is done, you may want to evaluate the model's performance and get several metrics for writing reports and papers. For instance segmentation tasks, the six most commonly used metrics are:

- `segm_mAP`: segmentation mean average precision
- `segm_mAP_50`: segmentation mean average precision at a 50% IoU (Intersection over Union) threshold
- `segm_mAP_75`: segmentation mean average precision at a 75% IoU threshold
- `segm_mAP_s`: segmentation mean average precision on small areas (less than 32*32 pixels)
- `segm_mAP_m`: segmentation mean average precision on medium areas (greater than 32*32 but less than 96*96 pixels)
- `segm_mAP_l`: segmentation mean average precision on large areas (greater than 96*96 pixels)
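For context, IoU between a predicted mask and its ground-truth mask is the area of their intersection divided by the area of their union. `segm_mAP_50` counts a prediction as correct when IoU ≥ 0.5, while plain `segm_mAP` follows the COCO convention of averaging AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05.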
To get these metrics for your model on the specific dataset on which it was trained or fine-tuned, simply run `python tools/test.py PATH/TO/CONFIG PATH/TO/WEIGHTS` and the evaluation will be carried out on the test set.
You may note that the metrics obtained by running `test.py` may differ from those in the summary printed at the end of training. This is correct and expected: the metrics at the end of training come from evaluating the model on the validation set, while the testing metrics are computed on the test set. By definition, a validation set is used to tune hyperparameters during training, while a test set is an independent set used solely to assess the performance of the fully trained model. The two are widely confused.
You should always report metrics on the test set.
The MMDetection toolbox provides local visualization backends and saves all training-related data into a single JSON file placed in `./work_dir` alongside the trained weights. However, if you want to visualize these data as graphs and track the training process remotely, you can use an online visualization platform such as Weights & Biases. To do this:
- Install Weights & Biases in your conda env with `pip install wandb`. Note: do not call `pip` inside your MMDetection directory, to avoid creating an unnecessary `wandb.py`, and make sure there is no such file in that directory; otherwise an error message will pop up indicating a circular import.
- Run `wandb login` to log into your wandb account (you need to create one first). Simply copy and paste your API key into the conda prompt (you might not see the string, as the interface won't display your API key) and press Enter.
- Modify the `train.py` script by adding `import wandb` and `wandb.init(project="your project name")`.
- Remember to add the wandb visualization backend to your config, as sketched below.
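A minimal sketch of that config addition, assuming MMDetection 3.x's MMEngine-style visualizer (the project name is a placeholder):

```python
# Add to your custom config: log to both the local backend and Weights & Biases.
vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='WandbVisBackend',
         init_kwargs=dict(project='your-project-name')),  # placeholder name
]
visualizer = dict(
    type='DetLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer',
)
```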