System Info

transformers version: 4.21.1

Who can help?

@sgugger

Information

Tasks

An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
```python
from transformers.optimization import Adafactor, AdafactorSchedule

# model is defined earlier in the training script
optimizer = Adafactor(
    model.parameters(),
    relative_step=True,
    warmup_init=True,
)
lr_scheduler = AdafactorSchedule(optimizer)
```
When using an AdafactorSchedule I can't use the Trainer class to save a checkpoint. It breaks here, since the learning rate attached to the TrainerState is given by a tensor, and tensors are not JSON serializable.

I dropped a breakpoint at this line and took a look at my TrainerState:
```python
In [4]: self.log_history[0]['learning_rate']
Out[4]: tensor(0.0001, device='cuda:0')
```
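For context, here is a minimal sketch (hypothetical code, not part of the issue) of why a tensor-valued entry breaks checkpoint saving; it assumes only json and torch:

```python
import json
import torch

# Stand-in for TrainerState.log_history after one logging step.
log_history = [{"learning_rate": torch.tensor(0.0001)}]

try:
    json.dumps(log_history)
except TypeError as err:
    # Prints: Object of type Tensor is not JSON serializable
    print(err)
```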
Expected behavior

The learning rate attached to the log history should be given by a float, which would make it JSON serializable.
A suggested fix is to call .item() on the learning-rate tensor before it is logged, so that a plain Python float is stored.
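As an illustration of that workaround (a hypothetical snippet, not the actual Trainer patch), the logged value can be coerced to a float before serialization:

```python
import json
import torch

lr = torch.tensor(0.0001)

# .item() converts a zero-dimensional tensor to a plain Python float,
# which json.dumps can serialize without a TypeError.
lr = lr.item() if isinstance(lr, torch.Tensor) else lr
print(json.dumps({"learning_rate": lr}))
```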