This code was used to take 15th place in the Kaggle Google AI Open Images - Object Detection Track competition: https://www.kaggle.com/c/google-ai-open-images-object-detection-track/leaderboard
The repository contains the following:
- Pre-trained models (with ResNet50, ResNet101 and ResNet152 backbones)
- Example code to get predictions with these models for any set of images
- Code to train your own classifier based on Keras-RetinaNet and the OID dataset
- Code to expand predictions to the full set of 500 classes
Requirements: Python 3.5, Keras 2.2, Keras-RetinaNet 0.4.1
There are 3 RetinaNet models based on ResNet50, ResNet101 and ResNet152 for 443 classes (only Level 1).
Backbone | Image Size (px) | Model (training) | Model (inference) | Small validation mAP | Full validation mAP |
---|---|---|---|---|---|
ResNet50 | 728 - 1024 | 533 MB | 178 MB | 0.4621 | 0.3520 |
ResNet101 | 728 - 1024 | 739 MB | 247 MB | 0.5031 | 0.3870 |
ResNet152 | 600 - 800 | 918 MB | 308 MB | 0.5194 | 0.3959 |
- Model (training) - can be used to resume training or as a pretrained starting point for your own classifier
- Model (inference) - can be used to get predicted boxes for arbitrary images
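If you only have a training checkpoint, it can be converted into an inference model directly in Python. A minimal sketch, assuming the keras-retinanet 0.4.x API where `models.load_model` accepts a `convert` flag (the bundled `keras_retinanet/bin/convert_model.py` script does the same job); file paths are placeholders:

```python
from keras_retinanet import models

# Load a training checkpoint and convert it: 'convert=True' appends the layers
# that apply box regression and non-max suppression, so the model outputs
# final boxes/scores/labels instead of raw training tensors.
model = models.load_model(
    'resnet101_oid_level_1_training.h5',   # placeholder path to a training model
    backbone_name='resnet101',
    convert=True,
)
model.save('resnet101_oid_level_1_inference.h5')
```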
An inference example can be found in retinanet_inference_example.py
You need to change `files_to_process = glob.glob(DATASET_PATH + 'validation_big/*.jpg')` to point at your own set of files. As output you will get a "predictions_*.csv" file with boxes.
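The essence of that script looks roughly like the sketch below. It assumes the keras-retinanet 0.4.x API (`models.load_model`, `read_image_bgr`, `preprocess_image`, `resize_image`); the model path, score threshold and CSV layout are illustrative, the authoritative version is retinanet_inference_example.py:

```python
import csv
import glob

import numpy as np
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image

DATASET_PATH = '/path/to/dataset/'                                # placeholder
model = models.load_model('resnet101_oid_level_1_inference.h5',   # placeholder inference model
                          backbone_name='resnet101')

files_to_process = glob.glob(DATASET_PATH + 'validation_big/*.jpg')

with open('predictions_resnet101.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['image', 'label', 'score', 'x1', 'y1', 'x2', 'y2'])
    for path in files_to_process:
        image = preprocess_image(read_image_bgr(path))
        image, scale = resize_image(image)
        boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
        boxes /= scale                                            # back to original image coordinates
        for box, score, label in zip(boxes[0], scores[0], labels[0]):
            if score < 0.05:                                      # illustrative confidence cut-off
                continue
            writer.writerow([path, int(label), float(score)] + [float(v) for v in box])
```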
Having these predictions, you can expand them to all 500 classes using the code from create_higher_level_predictions_from_level_1_predictions_csv.py
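The expansion follows the OID class hierarchy: a box predicted for a Level 1 class also counts for every ancestor class, so each prediction row is duplicated for its parent labels. A rough illustration of the idea with a hypothetical `PARENTS` map (the real script builds this mapping from the OID hierarchy):

```python
# Hypothetical parent map: child label -> list of ancestor labels.
# The real code derives this from the OID class hierarchy.
PARENTS = {
    'Jaguar': ['Carnivore', 'Animal'],   # illustrative entries only
}

def expand_predictions(rows):
    """Duplicate each (image_id, label, score, box) row for all parent labels."""
    expanded = []
    for image_id, label, score, box in rows:
        expanded.append((image_id, label, score, box))
        for parent in PARENTS.get(label, []):
            expanded.append((image_id, parent, score, box))
    return expanded
```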
For training you need to download the OID dataset (~500 GB of images): https://storage.googleapis.com/openimages/web/challenge.html
Next, fix the paths in a00_utils_and_constants.py
Then, to train on the OID dataset, run the Python files in the following order (a driver sketch covering the whole sequence is shown after the list):
- create_files_for_training_by_levels.py
- retinanet_training_level_1/find_image_parameters.py
then
- retinanet_training_level_1/train_oid_level_1_resnet101.py
or
- retinanet_training_level_1/train_oid_level_1_resnet152.py
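A minimal driver sketch for the sequence above, assuming the scripts are launched from the repository root; pick one of the two backbone training scripts:

```python
import subprocess
import sys

# Illustrative driver: run the training pipeline scripts in order.
STEPS = [
    'create_files_for_training_by_levels.py',
    'retinanet_training_level_1/find_image_parameters.py',
    # choose one backbone:
    'retinanet_training_level_1/train_oid_level_1_resnet101.py',
    # 'retinanet_training_level_1/train_oid_level_1_resnet152.py',
]

for script in STEPS:
    print('Running', script)
    subprocess.run([sys.executable, script], check=True)   # stop on the first failure
```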
If you have predictions from several models, for example from the ResNet101 and ResNet152 backbones, you can ensemble the boxes with the ensembling script provided in the repository.
The proposed method increases overall performance:
- ResNet101 (mAP 0.3776) + ResNet152 (mAP 0.3840) ensembled together give mAP 0.4220
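The exact merging strategy is defined by the repository's ensembling script; the sketch below only illustrates the general idea under a simple assumption: boxes of the same class from different models are grouped by IoU and fused by score-weighted averaging of their coordinates.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ensemble_boxes(predictions, iou_thr=0.55):
    """predictions: (label, score, [x1, y1, x2, y2]) tuples pooled from all
    models for a single image. Returns fused (label, score, box) tuples."""
    merged = []
    used = [False] * len(predictions)
    # Process boxes from the highest score down.
    order = sorted(range(len(predictions)), key=lambda i: -predictions[i][1])
    for i in order:
        if used[i]:
            continue
        label, _, box = predictions[i]
        group = []
        for j in order:
            if used[j] or predictions[j][0] != label:
                continue
            if j == i or iou(box, predictions[j][2]) >= iou_thr:
                group.append((predictions[j][1], np.asarray(predictions[j][2], dtype=float)))
                used[j] = True
        weights = np.array([s for s, _ in group])
        boxes = np.stack([b for _, b in group])
        fused_box = (boxes * weights[:, None]).sum(axis=0) / weights.sum()
        merged.append((label, float(weights.mean()), fused_box.tolist()))
    return merged
```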