🚀 Feature
The `torchmetrics.detection.mean_ap.MeanAveragePrecision` metric returns a dictionary with the standard COCO object detection quality metrics. However, it doesn't provide a way to retrieve the non-aggregated metrics computed internally, such as precisions, recalls, IoU scores, or confusion matrix counters (TP/FP/TN/FN).
This feature request proposes to extend the metric to return those values in addition to aggregated scores.
Motivation
Our team works on building object detection models. In addition to computing mAP and mAR scores, we would also like to inspect PR curves and IoU scores, plus derive other metrics (like F1-score or mIoU) from them. The MeanAveragePrecision metric already computes these values internally, but doesn't provide a way to access them.
Pitch
Extend the `__init__` parameters of the `MeanAveragePrecision` class to allow users to request additional reported metrics (a hypothetical sketch of such an API follows the list below):
- Precision
- Recall
- IoU
- Confusion matrices
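A minimal sketch of how the extended constructor could look; the flag names below (`return_precision`, etc.) are purely hypothetical and only illustrate the request, they are not part of the current torchmetrics API:

```python
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# Hypothetical flags illustrating the proposal; none of these exist today.
metric = MeanAveragePrecision(
    iou_type="bbox",
    return_precision=True,         # non-aggregated precision tensors
    return_recall=True,            # non-aggregated recall tensors
    return_ious=True,              # raw IoU scores per image/class
    return_confusion_matrix=True,  # TP/FP/TN/FN counters
)
# update()/compute() would stay unchanged; compute() would simply return
# the requested raw tensors alongside the usual aggregated "map"/"mar" keys.
```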
Currently, we "patched" the metric to access precisions and recalls tensors, plus updated the _compute_iou method via inheritance to collect the computed IoU scores. (We did it for the older implementation that now lives in _mean_ap.py and doesn't use the code from pycocoeval.py)
This allows us to access Precision/Recall and, for example, use them to plot PR-curves. We also created a test asserting that both metrics return the same values for aggregated scores. Therefore, our patch didn't break the existing aggregated scores, but only allowed to access internal variables.
Note that we also changed the shape of the precision tensor to return all the scores, and not the highest one only as before. (See the NEW shape comment above.)
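For reference, here is a rough, simplified sketch of the IoU part of that patch, written against the older `_mean_ap.py` code path. The `_compute_iou` signature is taken from that implementation and may differ between torchmetrics versions; the precision/recall part of our patch touches the internal aggregation code and is omitted here:

```python
from torchmetrics.detection._mean_ap import MeanAveragePrecision  # older implementation


class VerboseMeanAveragePrecision(MeanAveragePrecision):
    """Collect the raw IoU matrices instead of discarding them after aggregation."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # (image_idx, class_id) -> IoU matrix between detections and ground-truth boxes
        self.raw_ious = {}

    def _compute_iou(self, idx, class_id, max_det):
        ious = super()._compute_iou(idx, class_id, max_det)
        self.raw_ious[(idx, class_id)] = ious
        return ious
```

With the precision tensor exposed as well, plotting a per-class PR curve is just a slice of that tensor: in the pycocotools convention it is indexed roughly as [iou_threshold, recall_threshold, class, area_range, max_detections].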
Alternatives
We could compute Precision/Recall/IoU/confusion-matrix metrics separately, but that would require recomputing the same values multiple times and reimplementing logic that is already available.
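For illustration, running separate detection metrics side by side, e.g. `MeanAveragePrecision` together with the standalone `IntersectionOverUnion` metric, means the box matching is effectively done twice. A sketch, assuming both metrics accept the usual list-of-dicts inputs:

```python
import torch
from torchmetrics import MetricCollection
from torchmetrics.detection.mean_ap import MeanAveragePrecision
from torchmetrics.detection.iou import IntersectionOverUnion

# Each metric matches predictions against ground truth on its own,
# so the IoU computation is repeated for every metric in the collection.
metrics = MetricCollection({
    "map": MeanAveragePrecision(iou_type="bbox"),
    "iou": IntersectionOverUnion(),
})

preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([0]),
}]
target = [{
    "boxes": torch.tensor([[12.0, 12.0, 48.0, 48.0]]),
    "labels": torch.tensor([0]),
}]

metrics.update(preds, target)
print(metrics.compute())
```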
Additional context
If there is a way to compute the metrics we need without changing the `MeanAveragePrecision` metric's code, please let us know! Currently, we don't see any simple way to achieve this without reimplementing large parts of the existing functionality or doing redundant computations with multiple metric callbacks.
Hi @devforfu, thanks for raising this issue.
I created PR #1983 that should solve this issue. It will expose precision, recall, and IoUs, as these are automatically computed by the pycocotools backend. It does not currently include the confusion matrix.
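If the PR lands as described, usage would presumably look like the sketch below; the flag name (`extended_summary`) and the result keys are assumptions based on the comment above, not a confirmed API:

```python
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision(extended_summary=True)  # flag name assumed
metric.update(preds, target)  # same list-of-dicts inputs as in the example above
results = metric.compute()

# Assumed extra keys next to the usual aggregated "map"/"mar" entries:
precision = results["precision"]
recall = results["recall"]
ious = results["ious"]
```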