Feature request

`bias` for the linear layers in the Qwen2 model is hard-coded, as shown here:

transformers/src/transformers/models/qwen2/modeling_qwen2.py
Lines 217 to 219 in 85345bb
Lines 271 to 274 in 85345bb
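The permalinks above don't render in this text capture. A minimal sketch of the hard-coded pattern being referenced, with toy sizes, assuming the usual Qwen2 attention layout (`bias=True` on the q/k/v projections, `bias=False` on `o_proj`) — the exact contents of the cited lines are not reproduced here:

```python
import torch.nn as nn

# Toy dimensions for illustration only.
hidden_size, num_heads, head_dim = 64, 4, 16

# Sketch of the hard-coded pattern: the bias flags are literals
# rather than values read from the model config.
q_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=True)
k_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=True)
v_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=True)
o_proj = nn.Linear(num_heads * head_dim, hidden_size, bias=False)
```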
It would be good to make `bias` optionally configurable through the config to keep compatibility with the latest models (as is done for Llama, for example).
Motivation

`bias` is already optional in the Llama model:

transformers/src/transformers/models/llama/modeling_llama.py
Lines 286 to 288 in 85345bb
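A sketch of the config-driven pattern those Llama lines use. `attention_bias` is the real `LlamaConfig` field; `TinyConfig` and `make_q_proj` are stand-ins invented for illustration:

```python
import torch.nn as nn

class TinyConfig:
    """Stand-in for LlamaConfig with only the fields this sketch needs."""
    def __init__(self, hidden_size=64, num_attention_heads=4, attention_bias=False):
        self.hidden_size = hidden_size
        self.num_attention_heads = num_attention_heads
        self.attention_bias = attention_bias  # Llama reads this from the config

def make_q_proj(config):
    head_dim = config.hidden_size // config.num_attention_heads
    # The bias flag comes from the config instead of a hard-coded literal,
    # which is the change this issue asks for in Qwen2.
    return nn.Linear(config.hidden_size,
                     config.num_attention_heads * head_dim,
                     bias=config.attention_bias)

with_bias = make_q_proj(TinyConfig(attention_bias=True))
without_bias = make_q_proj(TinyConfig(attention_bias=False))
```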
Your contribution

I'll submit a PR for this feature.
Linked pull request: `attention_qkv_bias`
cc @ArthurZucker
Answered on the PR~