Is there a way to easily access the true labels (y_true) and predicted labels (y_pred) from the model evaluation pipeline?
This would be helpful for plotting confusion matrices after training or cross-validation.
Best,
Apologies for the late reply. The current philosophy in moabb is to rely on a single metric, to avoid cherry-picking the best-looking metric after the fact. It's a philosophy I don't 100% agree with, and with the refactoring of the evaluation function, I hope we can revisit it.
Since moabb is a volunteer project, where people work when they have free time, it may take a while for this to happen.
My suggestion for you now is that you directly modify the evaluation code to get what you want.
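In the meantime, a rough sketch of what such a modification boils down to (this is plain scikit-learn, not MOABB's actual evaluation API): collect out-of-fold predictions alongside the true labels during cross-validation, then build the confusion matrix from those.

```python
# Minimal sketch (assumed generic sklearn workflow, not MOABB's API):
# collect y_true / y_pred across CV folds for a confusion matrix.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

# Toy data standing in for the epochs/labels a MOABB paradigm would provide.
X, y_true = make_classification(n_samples=200, n_features=10, random_state=42)

# cross_val_predict yields exactly one out-of-fold prediction per sample,
# so y_true and y_pred stay aligned for the confusion matrix.
y_pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y_true, cv=5)

cm = confusion_matrix(y_true, y_pred)
print(cm)
```

Inside MOABB's evaluation loop you would do the equivalent: keep the fold-level `y_test` and `clf.predict(X_test)` arrays instead of (or in addition to) reducing them to a single score.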