COCO metrics only evaluated on image_ids with predictions #2504

Open
digital-nomad-cheng opened this issue Nov 8, 2024 · 0 comments
digital-nomad-cheng commented Nov 8, 2024

    coco_dt = coco_gt.loadRes(predictions=coco_predictions)
    image_ids = [ann["image_id"] for ann in coco_predictions]
    coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
    coco_eval.params.imgIds = image_ids

The pycoco_wrapper only evaluates on images where there is a prediction, and leaves out image_ids for which there is no prediction.
If I understand this correctly, this is a serious bug: the COCO metrics should be evaluated on all images in the ground truth. When images without predictions are dropped from params.imgIds, their ground-truth boxes never count as missed detections, so the reported metrics are inflated.
The correct way to do this is shown below, without filtering image_ids.

    coco_dt = coco_gt.loadRes(predictions=coco_predictions)
    coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
digital-nomad-cheng changed the title from "COCO metrics only evaluated on prediction image_ids" to "COCO metrics only evaluated on image_ids with predictions" on Nov 8, 2024