The proposed MTFA/iMTFA networks are compared in the paper with Siamese Mask R-CNN.
However, for that comparison the networks in those experiments are pretrained on all 1K ImageNet classes, which slightly violates few-shot conventions. At the same time, lists of 687 classes (ImageNet-1K without the COCO overlap) and 771 classes (ImageNet-1K without the Pascal VOC overlap) were uploaded at bethgelab/siamese-mask-rcnn/data/.
Moreover, weights of models trained on the 687- and 771-class subsets were released at bethgelab/siamese-mask-rcnn/releases.
Additional experiments that train MTFA starting from the (687/771)-class backbones would reveal the difference between exposing base classes during the pretraining stage and using a fully pretrained feature extractor.
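The reduced class lists above amount to a set difference over ImageNet labels. A minimal sketch of that filtering step — all identifiers and the size of the overlap set here are placeholders, not the real lists from bethgelab/siamese-mask-rcnn/data/:

```python
# Hypothetical sketch: derive a reduced ImageNet class list by removing
# the classes that overlap with the evaluation dataset (COCO here).
imagenet_classes = {f"class_{i:04d}" for i in range(1000)}  # placeholder IDs

# Placeholder overlap set; the repository ships the actual 313-class
# COCO overlap (1000 - 313 = 687) and the Pascal VOC overlap (-> 771).
coco_overlap = {f"class_{i:04d}" for i in range(313)}

reduced_classes = imagenet_classes - coco_overlap
print(len(reduced_classes))  # 687 with these placeholder sets
```

A backbone pretrained only on such a reduced list never sees the evaluation classes, which is exactly the condition the proposed (687/771)-backbone experiments would test.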