Feature request
It would be nice to be able to access precision, recall, and F1 scores as default metrics, or to support a classification report output.
What is the expected behavior?
Compute precision, recall, and F1 scores from the model's predictions on a dataset, either during training or after training.
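As a sketch of the metrics being requested, the three scores can be derived directly from true/false positive and negative counts. The function name and signature below are hypothetical, shown only to illustrate the computation:

```python
import numpy as np

def precision_recall_f1(y_true, y_pred, positive=1):
    """Illustrative only: precision, recall, and F1 for one class.

    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```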
What is motivation or use case for adding/changing the behavior?
When working with imbalanced datasets, metrics such as accuracy can conceal the true behavior of the model. F1 scores tend to be more informative.
How should this be implemented in your opinion?
Similarly to sklearn's classification_report: computed on test data after training, or as a tracked metric during training.
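For reference, this is roughly what the sklearn output being proposed as a model looks like, assuming scikit-learn is installed (the label arrays are made-up toy data):

```python
from sklearn.metrics import classification_report, precision_recall_fscore_support

# Toy labels and predictions, purely for illustration
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# Per-class precision/recall/F1/support as a text table
print(classification_report(y_true, y_pred, zero_division=0))

# Or the raw numbers, e.g. for logging as a tracked metric each epoch
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
```

Both the after-training report and the per-epoch tracked metric could be built on the same underlying counts.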
Are you willing to work on this yourself?
yes