Loss starts to increase during BERT model training #11

Open
saparina opened this issue Jul 29, 2020 · 2 comments

@saparina
Hi, I'm trying to reproduce your results with the BERT model. After ~14,000 training steps, the loss started to increase. I tried rerunning, but it didn't help. Have you faced this problem? It looks similar to #3 and #7.

Log:

[2020-07-28T15:25:54] Logging to logdir/bert_run/bs=6,lr=7.4e-04,bert_lr=3.0e-06,end_lr=0e0,att=1
...
[2020-07-29T13:59:21] Step 14100 stats, train: loss = 1.1323808431625366
[2020-07-29T13:59:27] Step 14100 stats, val: loss = 3.3228100538253784
...
[2020-07-29T14:08:51] Step 14200 stats, train: loss = 0.9168887138366699
[2020-07-29T14:08:57] Step 14200 stats, val: loss = 3.5443124771118164
...
[2020-07-29T14:18:30] Step 14300 stats, train: loss = 2.303567111492157
[2020-07-29T14:18:37] Step 14300 stats, val: loss = 4.652050733566284
...
[2020-07-29T14:28:01] Step 14400 stats, train: loss = 95.80101776123047
[2020-07-29T14:28:08] Step 14400 stats, val: loss = 112.55300903320312
@senthurRam33

We also faced the same issue in our training. Our best guess is that it is caused by gradient explosion. Even if you resume from the previous checkpoint, the same issue shows up again some time down the line. Right now the only option is to restart training from scratch. If you have solved the issue with some other technique, please do share.
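For reference, gradient clipping is the usual mitigation for exploding gradients. The sketch below is not this repository's code; it assumes a generic PyTorch training step in which `model`, `optimizer`, `compute_loss`, and `batch` are placeholders for the actual training components.

```python
import torch

def training_step(model, optimizer, batch, compute_loss, max_grad_norm=1.0):
    # Hypothetical training step, shown only to illustrate gradient clipping.
    optimizer.zero_grad()
    loss = compute_loss(model, batch)
    loss.backward()
    # Rescale gradients so their global L2 norm never exceeds max_grad_norm,
    # which keeps a single bad batch from blowing up the parameters.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```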

@saparina (Author)

@senthurRam33 I also think the problem is somewhere in the gradients. I changed the loss to 'label_smooth' (see #10) and got more stable training.
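For anyone reading later, this is roughly what label-smoothed cross-entropy looks like. It is only a sketch of the general technique, not the repository's `label_smooth` implementation, and the smoothing weight `epsilon=0.1` is an assumed default.

```python
import torch
import torch.nn.functional as F

def label_smoothed_loss(logits, targets, epsilon=0.1):
    # logits: (batch, num_classes); targets: (batch,) integer class indices.
    log_probs = F.log_softmax(logits, dim=-1)
    # Standard negative log-likelihood of the gold class.
    nll = -log_probs.gather(dim=-1, index=targets.unsqueeze(-1)).squeeze(-1)
    # Uniform component: average negative log-probability over all classes.
    smooth = -log_probs.mean(dim=-1)
    # Mixing the one-hot target with a uniform distribution softens the
    # targets, which bounds the loss on confident mistakes and tends to
    # stabilize training.
    return ((1.0 - epsilon) * nll + epsilon * smooth).mean()
```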
