Merge pull request #696 from mharradon:patch-1
PiperOrigin-RevId: 597552766
OptaxDev committed Jan 11, 2024
2 parents bc22961 + 36a8aa3 commit 6543f16
Showing 1 changed file with 3 additions and 3 deletions.
optax/_src/wrappers.py (6 changes: 3 additions & 3 deletions)

@@ -93,7 +93,7 @@ class ApplyIfFiniteState(NamedTuple):
     notfinite_count: Number of consecutive gradient updates containing an Inf or
       a NaN. This number is reset to 0 whenever a gradient update without an Inf
       or a NaN is done.
-    last_finite: Whether or not the last gradient update contained an Inf of a
+    last_finite: Whether or not the last gradient update contained an Inf or a
       NaN.
     total_notfinite: Total number of gradient updates containing an Inf or
       a NaN since this optimizer was initialised. This number is never reset.
@@ -115,15 +115,15 @@ def apply_if_finite(
   """A function that wraps an optimizer to make it robust to a few NaNs or Infs.

   The purpose of this function is to prevent any optimization to happen if the
-  gradients contain NaNs or Infs. That is, when a NaN of Inf is detected in the
+  gradients contain NaNs or Infs. That is, when a NaN or Inf is detected in the
   gradients, the wrapped optimizer ignores that gradient update. If the NaNs or
   Infs persist after a given number of updates, the wrapped optimizer gives up
   and accepts the update.

   Args:
     inner: Inner transformation to be wrapped.
     max_consecutive_errors: Maximum number of consecutive gradient updates
-      containing NaNs of Infs that the wrapped optimizer will ignore. After
+      containing NaNs or Infs that the wrapped optimizer will ignore. After
       that many ignored updates, the optimizer will give up and accept.

   Returns:
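For context, here is a minimal usage sketch of the `apply_if_finite` wrapper whose docstring this commit fixes. The inner SGD optimizer, learning rate, parameter shapes, and toy NaN gradient below are illustrative choices, not part of this change:

```python
import jax.numpy as jnp
import optax

# Wrap an inner optimizer so that up to 3 consecutive gradient updates
# containing NaNs or Infs are ignored before updates are accepted anyway.
opt = optax.apply_if_finite(
    optax.sgd(learning_rate=1e-2), max_consecutive_errors=3)

params = {'w': jnp.ones(3)}
state = opt.init(params)

# A gradient containing a NaN: the wrapper rejects this update, returning
# zero updates so the parameters are left unchanged.
bad_grads = {'w': jnp.array([jnp.nan, 0.0, 0.0])}
updates, state = opt.update(bad_grads, state, params)

# The state fields documented in this diff record what happened:
print(state.last_finite)      # False: the last update contained a NaN
print(state.notfinite_count)  # 1: consecutive non-finite updates so far
print(state.total_notfinite)  # 1: non-finite updates since initialisation
```

Per the docstring above, once `max_consecutive_errors` consecutive non-finite updates have been ignored, the wrapper gives up and passes the next update through to the inner optimizer as-is.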
