
deepspeed-chat: fix bf16 stage2 accuracy for bloom-560m #772

Merged
tjruwase merged 1 commit into deepspeedai:master from mosheisland:9_fix_bloom_stage2_bf16_acc
Oct 17, 2023

Conversation

@mosheisland
Contributor

The Bloom-560m model has high variance in its last LayerNorm (LN) weights, which causes accuracy issues in bf16 ZeRO stage 2 training. Therefore, reset the parameters of the last LN layer before training. This is good practice in any case where we replace the classifier that follows the LN.
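
As a rough sketch of what the reset means here (not necessarily the exact code in this PR), assuming a Hugging Face `BloomForCausalLM`-style model where the final LayerNorm sits at `model.transformer.ln_f`:

```python
import torch

def reset_final_layer_norm(model):
    # Restore the default LayerNorm initialization (weight = 1, bias = 0),
    # discarding the pretrained high-variance weights of the final LN.
    ln = model.transformer.ln_f  # assumed attribute path for BLOOM-style models
    with torch.no_grad():
        ln.weight.fill_(1.0)
        ln.bias.zero_()
```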

In addition, when training with only_optimize_lora, all non-LoRA parameters are frozen, so we need to force training of the LN parameters that were reset.
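
For example (again only a sketch, under the same assumption about the `model.transformer.ln_f` path), after the LoRA setup has frozen the non-LoRA parameters:

```python
def unfreeze_final_layer_norm(model):
    # With only_optimize_lora, all non-LoRA parameters are frozen, so the
    # freshly reset final LayerNorm must be explicitly made trainable again.
    for param in model.transformer.ln_f.parameters():
        param.requires_grad = True
```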

Note that the current fix uses plain initialization of the final LN. A separate commit will add support for ZeRO stage 3 (zero3) initialization.

Change-Id: I323d8947907eb4a1cc0fa6354bdaf0cbbf33a68d

Signed-off-by: Moshe Island <misland@habana.ai>
Contributor

@lekurile lekurile left a comment


LGTM!

@tjruwase tjruwase merged commit 185e25c into deepspeedai:master Oct 17, 2023
@mosheisland mosheisland deleted the 9_fix_bloom_stage2_bf16_acc branch November 22, 2023 07:52
hwchen2017 pushed a commit that referenced this pull request Jun 8, 2025