The `early_stopping` callback doesn't seem to use the `min_delta` option.
I think that this condition
```python
if (
    self._best is None
    or (self.minimize and value < self._best)
    or (not self.minimize and value > self._best)
):
```
should be
```python
if (
    self._best is None
    or (self.minimize and value < self._best - self.min_delta)
    or (not self.minimize and value > self._best + self.min_delta)
):
```
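To make the proposed behavior concrete, here is a minimal standalone sketch of the improvement check with `min_delta` applied. The function name `improved` is illustrative, not the library's API; only the attribute names (`minimize`, `min_delta`) mirror the callback.

```python
def improved(value, best, minimize=True, min_delta=0.0):
    """Return True if `value` improves on `best` by at least `min_delta`.

    Illustrative sketch of the proposed check, not the library's code.
    """
    if best is None:
        return True  # first observation always counts as an improvement
    if minimize:
        return value < best - min_delta
    return value > best + min_delta
```

With `min_delta=0.01` and a best loss of `0.500`, a new loss of `0.495` would *not* count as an improvement, whereas it does under the current check.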
Also, something that confused me slightly is that the best weights are only returned if the patience has run out:
```python
if elapsed - self._elapsed_start >= self.patience:
    if self.restore_best_weights and self._best_state is not None:
        state = self._best_state
    stop_iteration = True
```
So if you achieve the best performance on the *n*th iteration before the end of training, and you have a patience of *p* > *n*, then the best weights will not be returned at the end of training, as I understand it. I think this can be solved by adding an `on_epoch_end` method to the callback. The Keras `EarlyStopping` callback doesn't make returning the best weights depend on the patience value: https://github.com/keras-team/keras/blob/b3ffea6602dbbb481e82312baa24fe657de83e11/keras/callbacks.py#L2088.
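A rough sketch of the fix, assuming hypothetical names (`EarlyStoppingSketch`, `update`, `on_train_end`, and a plain dict standing in for the model state): track the best state on every step, and unconditionally hand it back when training ends, whether or not patience ran out.

```python
class EarlyStoppingSketch:
    """Illustrative sketch, not the library's actual class or API."""

    def __init__(self, patience, minimize=True, restore_best_weights=True):
        self.patience = patience
        self.minimize = minimize
        self.restore_best_weights = restore_best_weights
        self._best = None
        self._best_state = None
        self._since_best = 0  # steps elapsed since the best value was seen

    def update(self, value, state):
        """Record one monitored value; return (state, stop_flag)."""
        better = self._best is None or (
            value < self._best if self.minimize else value > self._best
        )
        if better:
            self._best, self._best_state = value, state
            self._since_best = 0
        else:
            self._since_best += 1
        return state, self._since_best >= self.patience

    def on_train_end(self, state):
        """Restore the best state regardless of whether patience ran out."""
        if self.restore_best_weights and self._best_state is not None:
            return self._best_state
        return state
```

For example, with values `[3.0, 2.0, 2.5, 2.6]` and `patience=5`, training ends before patience runs out, yet `on_train_end` still returns the state recorded at the best step.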
Finally, thinking of the Keras features and some others that might be nice, what do you think of having some additional options:
- `initial_patience` (like `start_from_epoch` from Keras, but allowing any period).
- `delta_type`, either `'absolute'` or `'relative'`, specifying how `min_delta` is treated. Sometimes it is easier to specify a relative change than an absolute one. Alternatively, we could rename `min_delta` to `min_absolute_delta` and introduce `min_relative_delta`.
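A sketch of how `delta_type` could work, using the names proposed above (none of these exist in the library yet): a relative delta scales the margin by the magnitude of the current best value.

```python
def improved(value, best, minimize=True, min_delta=0.0, delta_type="absolute"):
    """Sketch of the proposed `delta_type` option; names are suggestions only."""
    if best is None:
        return True
    if delta_type == "relative":
        margin = abs(best) * min_delta  # e.g. min_delta=0.01 means a 1% change
    else:
        margin = min_delta
    return value < best - margin if minimize else value > best + margin
```

For instance, with `delta_type="relative"` and `min_delta=0.01`, a best loss of `0.5` requires the new loss to fall below `0.495` to count as an improvement.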
Let me know your thoughts, and I'll be happy to take a crack at a PR to address some or all of the above.