Mask R-CNN for player segmentation in a football game
Either follow the upstream Matterport instructions or the steps below:

- Clone this repository.
- Install dependencies:
  ```bash
  pip3 install -r requirements.txt
  ```
- Run setup from the repository root directory:
  ```bash
  python3 setup.py install
  ```
- Download the pre-trained COCO weights (mask_rcnn_coco.h5) from the releases page, or use ours, which are already fine-tuned for one-class detection (see the download sketch after this list).
- (Optional) To train or test on MS COCO, install pycocotools from one of these repos. They are forks of the original pycocotools with fixes for Python 3 and Windows (the official repo doesn't seem to be active anymore):
  - Linux: https://github.com/waleedka/coco
  - Windows: https://github.com/philferriere/cocoapi. You must have the Visual C++ 2015 build tools on your path (see the repo for additional details).
To train or test on MS COCO, you'll also need:

- The MS COCO dataset
- The 5K minival and the 35K validation-minus-minival subsets
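For convenience, the COCO weights can also be fetched programmatically. This is a minimal sketch, assuming the package installs as `mrcnn` exactly as in upstream Matterport; the local path is an assumption:

```python
import os
from mrcnn import utils  # upstream Matterport utility module

COCO_WEIGHTS_PATH = "mask_rcnn_coco.h5"  # hypothetical local path

# Download the pre-trained COCO weights from the releases page if absent.
if not os.path.exists(COCO_WEIGHTS_PATH):
    utils.download_trained_weights(COCO_WEIGHTS_PATH)
```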
- demo.ipynb, demoBallon.ipynb, and demoFoot.ipynb are the easiest way to start. They show examples of using a model pre-trained on MS COCO to segment objects in your own images, and include code to run object detection and instance segmentation on arbitrary images (a minimal inference sketch follows this list).
- model.py, utils.py, config.py: these files contain the main Mask R-CNN implementation.
- processvideo: a simple folder to test your weights on a given video.
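If you prefer a script over the notebooks, here is a minimal inference sketch, assuming the upstream Matterport layout (the `mrcnn` package) and the COCO weights downloaded above; the config values and image path are illustrative:

```python
import skimage.io
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "coco_inference"
    NUM_CLASSES = 1 + 80  # MS COCO: 80 classes + background
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1    # batch size of 1 for single-image detection

config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", config=config, model_dir="logs")
model.load_weights("mask_rcnn_coco.h5", by_name=True)

image = skimage.io.imread("images/football_frame.jpg")  # hypothetical image
results = model.detect([image], verbose=1)
r = results[0]  # dict with 'rois', 'masks', 'class_ids', 'scores'
```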
We're providing pre-trained weights for MS COCO to make it easier to start. You can use those weights as a starting point to train your own variation on the network. Training and evaluation code is in samples/coco/coco.py. You can import this module in a Jupyter notebook (see the provided notebooks for examples) or you can run it directly from the command line as follows:
```bash
# Train a new model starting from pre-trained COCO weights
python3 samples/coco/coco.py train --dataset=/path/to/coco/ --model=coco

# Train a new model starting from ImageNet weights
python3 samples/coco/coco.py train --dataset=/path/to/coco/ --model=imagenet

# Continue training a model that you had trained earlier
python3 samples/coco/coco.py train --dataset=/path/to/coco/ --model=/path/to/weights.h5

# Continue training the last model you trained. This will find
# the last trained weights in the model directory.
python3 samples/coco/coco.py train --dataset=/path/to/coco/ --model=last
```

You can also run the COCO evaluation code with:

```bash
# Run COCO evaluation on the last trained model
python3 samples/coco/coco.py evaluate --dataset=/path/to/coco/ --model=last
```
The training schedule, learning rate, and other parameters should be set in samples/coco/coco.py.
Start by reading this blog post about the balloon color splash sample. It covers the process starting from annotating images to training to using the results in a sample application.
In summary, to train the model on your own dataset you'll need to extend two classes:

- Config: this class contains the default configuration. Subclass it and modify the attributes you need to change.
- Dataset: this class provides a consistent way to work with any dataset. It allows you to use new datasets for training without having to change the code of the model. It also supports loading multiple datasets at the same time, which is useful if the objects you want to detect are not all available in one dataset.

See examples in samples/balloonfoot, samples/coco, and samples/footplayers.py.
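As a rough sketch of what those two subclasses look like, modeled on the balloon sample; the FootballConfig/FootballDataset names and the "player" class are illustrative assumptions, not this repo's exact code:

```python
from mrcnn.config import Config
from mrcnn import utils

class FootballConfig(Config):
    NAME = "football"
    NUM_CLASSES = 1 + 1   # background + player (illustrative)
    STEPS_PER_EPOCH = 100

class FootballDataset(utils.Dataset):
    def load_football(self, dataset_dir, subset):
        # Register the class, then add one entry per annotated image.
        self.add_class("football", 1, "player")
        # ...parse your annotations and call self.add_image(...) here...

    def load_mask(self, image_id):
        # Must return (masks, class_ids): a boolean array of shape
        # [height, width, instance_count] plus the matching class IDs.
        ...
```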
Use VIA (the VGG Image Annotator) to annotate/label the images, and follow the template from the implementation that takes two different classes into consideration: stick to the JSON template and simply change your class names, and you should be fine. (See the git repo of the dual-classes project.)
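For reference, here is a sketch of reading a two-class VIA JSON export, in the spirit of the balloon sample's loader; the file name and the "class" region attribute are assumptions about your annotation setup:

```python
import json

annotations = json.load(open("via_region_data.json"))  # hypothetical export
for a in annotations.values():
    if not a["regions"]:
        continue
    # VIA 1.x stores regions as a dict, VIA 2.x as a list.
    regions = (a["regions"].values()
               if isinstance(a["regions"], dict) else a["regions"])
    for r in regions:
        polygon = r["shape_attributes"]           # x/y points of the mask
        label = r["region_attributes"]["class"]   # e.g. "player" or "ball"
```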
How to train just the final layers, using a network pre-trained on COCO and a new database of annotated players:

In model.py there is a train() function that trains different layers depending on the parameters you give it. You just have to edit, for instance, the train call in football.py with the regular expression that matches the set of layers you want to train (see the sketch below). The tutorial linked previously is also a big help for transfer learning and should be sufficient.
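Here is a sketch of restricting training to the head layers, assuming this repo keeps the Matterport-style train() signature; the epoch counts and learning rates are illustrative:

```python
# Train only the predefined "heads" layer group (the new, randomly
# initialized layers), leaving the COCO-trained backbone frozen.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=30,
            layers="heads")

# Or pass a regular expression selecting exactly the layers to train.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE / 10,
            epochs=40,
            layers=r"(mrcnn\_.*)|(rpn\_.*)|(fpn\_.*)")
```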
You simply have to rename your backbone and architecture parameters to "mobilenet224v1"; the code is already included in the repo and comes from a pull request on the original Matterport Mask-RCNN repository.
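A minimal sketch, assuming the fork exposes the backbone choice through the config the same way upstream does (upstream's attribute is BACKBONE; any second architecture parameter in this repo would be set the same way):

```python
class MobileNetConfig(FootballConfig):  # FootballConfig from the sketch above
    BACKBONE = "mobilenet224v1"  # value named in the mobilenet pull request
```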
There are multiple possibilities. You can change some parameters in config.py, for example the size of the FC layers, the size of the images, or the number of ROIs. Otherwise, go into model.py where the Mask R-CNN is built and change the layers directly to reduce them. For instance, we were working with ResNet-101, ResNet-50, and MobileNet; there are plenty of different backbone architectures that could be tried, BUT you need to make sure the output of the first function called, such as resnet_graph or mobilenet_graph, is sized accordingly so the rest of the Mask R-CNN construction can continue.
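For the config.py route, these are the kinds of knobs to shrink, using attribute names from the upstream Matterport config.py; the values below are assumptions to tune, not recommendations:

```python
class SmallConfig(FootballConfig):
    BACKBONE = "resnet50"             # lighter than the default resnet101
    FPN_CLASSIF_FC_LAYERS_SIZE = 512  # FC layer size in the classifier head
    IMAGE_MIN_DIM = 400
    IMAGE_MAX_DIM = 512               # must stay divisible by 64
    TRAIN_ROIS_PER_IMAGE = 100        # fewer ROIs per training image
```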
You can use TensorBoard to see how your training is going and to decide whether you can stop it early. You obviously have to create a validation dataset as well, so you can see how well the model fits.
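Matterport-style training writes TensorBoard event files under the model directory (logs/ in the examples above), so you can point TensorBoard there:

```bash
tensorboard --logdir=logs
```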