Hi,
I'm testing the Python API. I wanted to check that I get 1.0 AP when the prediction is strictly equal to the ground truth, which seems obvious. However, I get 0.73 AP@0.5 and other weird results. I'm using a custom annotation file.
Am I doing something wrong? Thanks in advance.
Here is my code:
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import json
import numpy as np
d = json.load(open("annotations.json"))
# construct a prediction array from the ground truth, in the
# (image_id, x1, y1, w, h, confidence, cls) format
pred = np.zeros((7, 7))  # one row per annotation, 7 fields each
for idx, ann in enumerate(d["annotations"]):
    pred[idx, 1:5] = ann["bbox"]
    pred[idx, 5] = 1  # confidence
print(f"prediction {pred}\n\n ground truth : {d}")
I figured out the problem: when the annotation ids start from 0, COCOeval does not return the right metric values. Starting the ids from 1 fixed it for me. See #332.
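For anyone hitting the same thing, a minimal sketch of the workaround, assuming the only change needed is shifting the 0-based annotation ids to start from 1 before evaluation:

import json

d = json.load(open("annotations.json"))
for ann in d["annotations"]:
    ann["id"] += 1  # 0 -> 1, 1 -> 2, ...
json.dump(d, open("annotations_fixed.json", "w"))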