I followed your Environment Installation.
But when executing the following line

MTTR/trainer.py, line 175 (commit c383c5b):
self.lr_scheduler.step(total_epoch_loss) # note that this loss is synced across all processes
an error occurs.
Traceback (most recent call last):
  File "/media/ssd/users/xxx/software/anaconda3/envs/mttr0/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
    fn(i, *args)
  File "/media/ssd/users/xxx/projects/MTTR/main.py", line 20, in run
    trainer.train()
  File "/media/ssd/users/xxx/projects/MTTR/trainer.py", line 175, in train
    self.lr_scheduler.step(total_epoch_loss) # note that this loss is synced across all processes
  File "/media/ssd/users/xxx/software/anaconda3/envs/mttr0/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 164, in step
    self.print_lr(self.verbose, i, lr, epoch)
  File "/media/ssd/users/xxx/software/anaconda3/envs/mttr0/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 113, in print_lr
    print('Epoch {:5d}: adjusting learning rate'
ValueError: Unknown format code 'd' for object of type 'float'
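For reference, a minimal sketch of what seems to trigger the error (MultiStepLR and verbose=True are my assumptions, not taken from the MTTR config; the exact message may differ on newer PyTorch releases). Passing a float to step() on a non-plateau scheduler routes it into the deprecated `epoch` argument, and the verbose printout formats `epoch` with '{:5d}', which fails for floats:

import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
# Assumed scheduler type; verbose=True matches the print_lr call in the traceback.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10], verbose=True)

optimizer.step()
scheduler.step(0.1234)  # float loss is interpreted as the deprecated `epoch` argument
# -> ValueError: Unknown format code 'd' for object of type 'float'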
Did you meet this bug when training on Refer-YouTube-VOS?
If I change this line to
self.lr_scheduler.step() # note that this loss is synced across all processes
will it influence the result?
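As a possible workaround, here is a sketch of a guarded call for trainer.py (just an idea on my side, not the authors' fix): it keeps the synced loss for schedulers that actually consume it and drops it otherwise, so an epoch-based schedule should stay unchanged while the verbose printout no longer breaks.

from torch.optim.lr_scheduler import ReduceLROnPlateau

if isinstance(self.lr_scheduler, ReduceLROnPlateau):
    # ReduceLROnPlateau needs the synced loss to decide when to reduce the LR.
    self.lr_scheduler.step(total_epoch_loss)
else:
    # Epoch-based schedulers ignore the metric; passing it only feeds the
    # deprecated `epoch` argument and breaks the verbose print.
    self.lr_scheduler.step()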