Anomalib currently supports F1 score and AUROC as performance metrics. Several users have requested that other metrics, such as ROC (#186) and the Brier score (#199), be added.
Computing some of these metrics can be expensive, especially threshold-independent metrics such as AUROC. If we keep computing every available metric, as we do now, the metric computation stage of the model pipeline could become a bottleneck. So, before adding new performance metrics to Anomalib, we need a mechanism that lets the user select which performance metrics to include in the evaluation.
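One way such a selection mechanism could look is a registry that maps metric names to callables, where only the metrics the user names are computed. This is a minimal, hypothetical sketch, not Anomalib's actual API: the names `METRIC_REGISTRY` and `compute_metrics`, and the fixed 0.5 threshold for F1, are illustrative assumptions.

```python
"""Hypothetical sketch of user-selectable performance metrics.

All names here (METRIC_REGISTRY, compute_metrics) are illustrative,
not part of Anomalib's real API.
"""
from typing import Callable, Dict, List, Sequence


def f1_score(labels: Sequence[int], scores: Sequence[float]) -> float:
    """F1 score after thresholding anomaly scores at 0.5 (assumed threshold)."""
    preds = [1 if s >= 0.5 else 0 for s in scores]
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def auroc(labels: Sequence[int], scores: Sequence[float]) -> float:
    """Threshold-independent AUROC via the rank (Mann-Whitney U) formulation.

    Note the nested loop over all positive/negative pairs: this is the kind
    of cost that motivates computing such metrics only on request.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Registry of available metrics; new metrics would register here.
METRIC_REGISTRY: Dict[str, Callable[[Sequence[int], Sequence[float]], float]] = {
    "F1": f1_score,
    "AUROC": auroc,
}


def compute_metrics(
    names: List[str], labels: Sequence[int], scores: Sequence[float]
) -> Dict[str, float]:
    """Compute only the metrics the user selected, e.g. from a config file."""
    unknown = set(names) - METRIC_REGISTRY.keys()
    if unknown:
        raise ValueError(f"Unknown metrics: {sorted(unknown)}")
    return {name: METRIC_REGISTRY[name](labels, scores) for name in names}


if __name__ == "__main__":
    labels = [0, 0, 1, 1]
    scores = [0.1, 0.4, 0.35, 0.8]
    # Selecting only AUROC skips the (cheaper) F1 computation entirely.
    print(compute_metrics(["AUROC"], labels, scores))  # {'AUROC': 0.75}
```

Whether the selection is driven by a config entry or constructor arguments, the key point is the same: expensive metrics like AUROC are computed only when explicitly requested.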