TIDE outputs vs. what pycocotools outputs #14

Closed

kdk2612 opened this issue Oct 13, 2020 · 5 comments

Comments

@kdk2612

kdk2612 commented Oct 13, 2020

Hi, first things first: this lib is amazing and helped a lot in understanding the errors related to the detections.
I was using this project for the initial evaluation, but since there was no support for recall, I decided to use pycocotools for evaluation as well.

Now, during the comparison I got different results for AP[0.50:0.95]:
pycocotools gives 0.460
TIDE gives 41.33

Also, for AP @ 0.50:
pycocotools gives 0.804
TIDE gives 70.93 (extracted from the summary table)

I was wondering where the difference comes from; for now I am exploring how the TP, FP, and FN are calculated.
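
For reference, a minimal sketch of how the two sets of numbers could be produced; the file names are placeholders for the custom COCO-style dataset, and note that TIDE reports AP as a percentage while pycocotools reports a 0-1 fraction:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from tidecv import TIDE, datasets

GT_JSON = "instances_val.json"   # placeholder ground-truth annotations
DET_JSON = "detections.json"     # placeholder detection results

# pycocotools evaluation (AP reported on a 0-1 scale).
coco_gt = COCO(GT_JSON)
coco_dt = coco_gt.loadRes(DET_JSON)
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[0.50:0.95], AP@0.50, ...

# TIDE evaluation (AP reported on a 0-100 scale).
tide = TIDE()
tide.evaluate(datasets.COCO(GT_JSON), datasets.COCOResult(DET_JSON), mode=TIDE.BOX)
tide.summarize()
```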

@dbolya
Owner

dbolya commented Oct 14, 2020

Hi, and thanks for making this issue. TIDE should have 100% parity with pycocotools when it comes to mAP calculation, so this difference shouldn't be happening.

Is this on COCO or a custom COCO-style dataset? I'm thinking that if it's a custom dataset, there may be some edge case that TIDE and pycocotools handle differently, which results in a different mAP.

@kdk2612
Author

kdk2612 commented Oct 14, 2020

Hi, this is on a custom COCO-style dataset; I am converting the data into the COCO format to run the evaluation. I identified the issue anyway: for pycocotools I had set "useCats" to 0, which ignores the category labels. It works the same after setting it back to 1.
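
A minimal sketch of the setting in question, with placeholder file names: in pycocotools, params.useCats = 1 (the default) evaluates per category, while 0 ignores the category labels.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val.json")           # placeholder ground-truth file
coco_dt = coco_gt.loadRes("detections.json")   # placeholder results file

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.params.useCats = 1  # 1 = per-category evaluation; 0 = ignore category labels
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```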

@dbolya
Owner

dbolya commented Oct 14, 2020

Nice! Then I'll close this issue.

@dbolya closed this as completed Oct 14, 2020
@BartvanMarrewijk

It may also be worth mentioning that I had the same problem, but my solution was different. If the annotation ids in your custom COCO-format file start at 0, that annotation will not be taken into account by the COCO API. The easiest solution is to make sure the annotation ids start at 1. For more information, see cocodataset/cocoapi#507.
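
A minimal sketch of that workaround, with a placeholder file name: shift the annotation ids so they start at 1 before running the evaluation.

```python
import json

with open("instances_val.json") as f:   # placeholder annotation file
    gt = json.load(f)

# If any annotation id is 0, shift all ids up by one so they start at 1.
if any(ann["id"] == 0 for ann in gt["annotations"]):
    for ann in gt["annotations"]:
        ann["id"] += 1

with open("instances_val_fixed.json", "w") as f:
    json.dump(gt, f)
```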

@willyd

willyd commented Feb 24, 2022

I had a similar issue where both my ground truth and predictions were in the COCO annotations format, so I loaded both with tidecv.datasets.COCO. That ignores the score field in the detections, which have to be loaded with tidecv.datasets.COCOResult instead.
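
A minimal sketch of the loading pattern described above, with placeholder file names: ground truth goes through datasets.COCO, while detections, which carry a score field, go through datasets.COCOResult.

```python
from tidecv import TIDE, datasets

gt = datasets.COCO("instances_val.json")        # COCO-format ground truth
preds = datasets.COCOResult("detections.json")  # COCO results format, keeps the score field

tide = TIDE()
tide.evaluate(gt, preds, mode=TIDE.BOX)
tide.summarize()
```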
