A full train/inference/submission pipeline for the Data Science Bowl competition, adapted from https://github.com/matterport/Mask_RCNN. Kudos to @matterport, @waleedka and others for the code. It is well written, but also somewhat opinionated, which makes it harder to see what's going on under the hood; that is the reason this fork exists.
I made almost no changes to the original code, except for:

- Everything custom lives in `bowl_config.py`.
- `VALIDATION_STEPS` and `STEPS_PER_EPOCH` are now hardcoded to depend on the dataset size (a rough sketch of the idea follows this list).
- `multiprocessing=False` is hardcoded.
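A minimal sketch of the idea, with assumed names and numbers (the real values live in `bowl_config.py`; newer Mask_RCNN versions import the base class as `from mrcnn.config import Config`):

```python
# Sketch only: derive the step counts from the dataset size instead of keeping
# the matterport defaults, so one "epoch" is roughly one pass over the data.
import os
from config import Config  # base Config class from matterport Mask_RCNN

TRAIN_DIR = 'stage1_train'
# 670 is the number of stage 1 training images; fall back to it if the data
# has not been downloaded yet.
N_TRAIN = len(os.listdir(TRAIN_DIR)) if os.path.isdir(TRAIN_DIR) else 670

class BowlConfig(Config):
    NAME = 'bowl'
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = N_TRAIN // (GPU_COUNT * IMAGES_PER_GPU)
    # Train data currently doubles as the validation set (see the to-do list below),
    # so the validation pass is sized from the same directory.
    VALIDATION_STEPS = N_TRAIN // (GPU_COUNT * IMAGES_PER_GPU)
```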
- First, you have to download the fixed train masks. Thanks @lopuhin for bringing all the fixes into one place. You might want to clone that repo outside of this one, so you can pull changes later, and symlink the data in:

```bash
git clone https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes ../kaggle-dsbowl-2018-dataset-fixes
ln -s ../kaggle-dsbowl-2018-dataset-fixes/stage1_train stage1_train
```
- Download the rest of the official dataset and unzip it into the repo:

```bash
unzip ~/Downloads/stage1_test.zip -d stage1_test
unzip ~/Downloads/stage1_train_labels.csv.zip -d .
unzip ~/Downloads/stage1_sample_submission.csv.zip -d .
```
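After these steps the repo root is assumed to look roughly like this (adjust if your archives unpack differently):

```
stage1_train/                  # symlink to ../kaggle-dsbowl-2018-dataset-fixes/stage1_train
stage1_test/
    <image_id>/
        images/<image_id>.png
stage1_train_labels.csv
stage1_sample_submission.csv
```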
- Install `pycocotools` and download the COCO pretrained weights (`mask_rcnn_coco.h5`). The general idea is described here. Keep in mind that to install `pycocotools` properly, it's better to run `make install` instead of `make`.
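One way to do both, assuming the weights are still published on the matterport Mask_RCNN releases page (treat the URL as an assumption and check it if the download fails):

```bash
git clone https://github.com/cocodataset/cocoapi ../cocoapi
cd ../cocoapi/PythonAPI && make install && cd -
wget https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5
```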
- For single-GPU training, run:

```bash
CUDA_VISIBLE_DEVICES="0" python train.py
```
- To generate a submission, run:

```bash
CUDA_VISIBLE_DEVICES="0" python inference.py
```

This will create `submission.csv` in the repo and overwrite the old one (you're welcome to fix this with a PR).
- Submit! You should get a score of around 0.342 on the LB after 100 epochs.
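If you have the Kaggle CLI set up, a command-line submission looks roughly like this (the competition slug `data-science-bowl-2018` is an assumption, check it on the competition page):

```bash
kaggle competitions submit -c data-science-bowl-2018 -f submission.csv -m "Mask R-CNN baseline, 100 epochs"
```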
- Poor man's Exploratory Data Analysis -- to get a basic idea about the data.
- Test the submission for errors -- reads the submission back and visualizes the individual masks (see the sketch below).
- Visualize inference -- since there aren't too many masks in the test dataset, it's easy to look through all of them in one place.
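For reference, reading the submission back boils down to decoding the competition's 1-indexed, column-major run-length encoding. A minimal sketch, assuming the stage 1 folder layout set up above:

```python
# Sketch only: decode the 'start length start length ...' pairs of one image's
# masks back into binary arrays (1-indexed, column-major, per the competition spec).
import numpy as np
import pandas as pd
from skimage.io import imread

def rle_decode(rle, shape):
    s = np.array(rle.split(), dtype=int)
    starts, lengths = s[0::2] - 1, s[1::2]   # 1-indexed -> 0-indexed
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for start, length in zip(starts, lengths):
        mask[start:start + length] = 1
    return mask.reshape(shape, order='F')    # column-major: top to bottom, then left to right

df = pd.read_csv('submission.csv').dropna(subset=['EncodedPixels'])
image_id = df.ImageId.iloc[0]
image = imread('stage1_test/{0}/images/{0}.png'.format(image_id))
masks = [rle_decode(rle, image.shape[:2]) for rle in df[df.ImageId == image_id].EncodedPixels]
print('{}: {} masks'.format(image_id, len(masks)))
```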
Still to do:

- Fix validation. For now, the train data is used as the validation set.
- Normalize the data.
- Move configuration to `argparse` for easier hyperparameter search (a rough sketch follows this list).
- Parallelize data loading.
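A possible shape for the `argparse` item above (hypothetical flag names, nothing like this exists in the repo yet):

```python
# Hypothetical sketch: expose a few config fields as command-line flags so that
# hyperparameter searches can be scripted instead of editing bowl_config.py.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description='Train Mask R-CNN on the DSB 2018 data')
    parser.add_argument('--epochs', type=int, default=100)
    parser.add_argument('--learning-rate', type=float, default=0.001)
    parser.add_argument('--images-per-gpu', type=int, default=1)
    parser.add_argument('--steps-per-epoch', type=int, default=None,
                        help='defaults to the number of training images')
    return parser.parse_args()

if __name__ == '__main__':
    args = parse_args()
    print(vars(args))  # these would feed into BowlConfig before building the model
```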