Closed
In #286 I briefly talk about the idea of separating the metrics computation (such as accuracy) from `Model`. At the moment, you can easily keep track of accuracy in the logs (both history and console logs) with the flag `show_accuracy=True` in `Model.fit()`. Unfortunately, this is limited to accuracy and does not handle any other metrics that could be valuable to the user.
We could move the computation of these metrics outside of `Model` and call them with callbacks if one wants to keep track of them during training. This may be valuable in the future, but it could also raise some issues in the short term:

- It would be impossible to log accuracy (or any other metric) with the base logger, since callbacks do not interact with each other. One solution would be to let the user create her own logger at a different level of verbosity (possibly by inheriting from the current `BaseLogger`).
- We would have to think about how callbacks could access the training and validation sets.
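A minimal sketch of what such a metric callback could look like, assuming a Keras-style `Callback` interface with an `on_epoch_end` hook. The `PrecisionLogger` class and its `predict_fn` parameter are hypothetical, not part of the library; the validation data is passed in explicitly at construction time, which illustrates the access problem noted in the second point above.

```python
class Callback:
    """Minimal stand-in for a Keras-style callback base class."""
    def on_epoch_end(self, epoch, logs=None):
        pass


class PrecisionLogger(Callback):
    """Hypothetical callback that computes precision on a held-out
    validation set after each epoch.

    The callback holds its own reference to the validation data,
    since callbacks currently have no built-in way to reach the
    training or validation sets.
    """
    def __init__(self, X_val, y_val, predict_fn):
        self.X_val = X_val
        self.y_val = y_val
        self.predict_fn = predict_fn  # e.g. a model's predict method
        self.history = []             # one precision value per epoch

    def on_epoch_end(self, epoch, logs=None):
        y_pred = self.predict_fn(self.X_val)
        tp = sum(1 for p, t in zip(y_pred, self.y_val) if p == 1 and t == 1)
        fp = sum(1 for p, t in zip(y_pred, self.y_val) if p == 1 and t == 0)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        self.history.append(precision)
        print("epoch %d - val_precision: %.4f" % (epoch, precision))
```

With a design like this, any user-defined metric follows the same pattern, but as noted above each such callback would have to do its own logging rather than feeding into the base logger.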