Experience enhancement of paddlenlp.transformers module #2356
This issue is stale because it has been open for 60 days with no activity.
@guoshengCS The items in the list above have been completed on some models (not all of them), and other tasks are already scheduled, so this issue can be closed for now.
Improve the attention_mask module
- Transformer models should support 2D, 3D, and 4D attention_mask inputs #1850
- Supports attention_mask with 2D, 3D and 4D Tensors #2005
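To make the 2D/3D/4D item above concrete, here is a minimal, hypothetical sketch (plain Python, not the actual PaddleNLP API) of normalizing a 2D padding mask into the 4D additive mask shape `[batch, num_heads, query_len, key_len]` that attention layers typically consume:

```python
def expand_attention_mask(mask, num_heads):
    """mask: 2D list [batch][seq_len] of 1 (keep) / 0 (pad).

    Returns a 4D nested list [batch][num_heads][seq_len][seq_len] where
    masked key positions hold a large negative value added to the
    attention logits before softmax. Illustrative sketch only.
    """
    NEG_INF = -1e4  # large negative value; masked logits vanish after softmax
    batch, seq_len = len(mask), len(mask[0])
    out = []
    for b in range(batch):
        heads = []
        for _ in range(num_heads):
            # broadcast the key-side padding mask across every query row
            row = [0.0 if mask[b][k] else NEG_INF for k in range(seq_len)]
            heads.append([list(row) for _ in range(seq_len)])
        out.append(heads)
    return out

m = expand_attention_mask([[1, 1, 0]], num_heads=2)
# m has shape [1][2][3][3]; the padded third key position is masked out
```

A 3D input (one mask per batch element, per query position) would skip the row broadcast, and a 4D input could be passed through unchanged.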
Improve the model_input_names field
Support passing labels into downstream task models and returning the loss
Return internal model information such as hidden_states and attentions
- Support more model outputs for BERT/ERNIE/RoBERTa #2583
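The two items above (accept `labels`, return internal tensors) can be sketched with a toy output class; the class and field names below follow common transformer-library conventions and are assumptions, not the actual PaddleNLP implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SequenceClassifierOutput:
    """Structured output: loss is filled only when labels are provided;
    hidden_states only when explicitly requested."""
    loss: Optional[float] = None
    logits: Optional[List[float]] = None
    hidden_states: Optional[list] = None

def forward(logits, labels=None, output_hidden_states=False, hidden=None):
    loss = None
    if labels is not None:
        # toy mean-squared-error stand-in for a real task loss
        loss = sum((p - y) ** 2 for p, y in zip(logits, labels)) / len(logits)
    return SequenceClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=hidden if output_hidden_states else None,
    )

out = forward([0.2, 0.8], labels=[0.0, 1.0])
# out.loss is set; out.hidden_states stays None unless requested
```

The point of the design is that callers who pass `labels` get the loss computed inside the model, and callers who set `output_hidden_states=True` get the per-layer tensors without a second API.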
Support vocabulary expansion
- add resize_token_embeddings to Class PretrainedModel #2423
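A minimal sketch of what `resize_token_embeddings` typically does, keeping the old rows and appending randomly initialized rows for new tokens (illustrative only; the real method in #2423 operates on framework tensors, not Python lists):

```python
import random

def resize_token_embeddings(embeddings, new_vocab_size, dim, seed=0):
    """embeddings: list of per-token vectors (old_vocab_size x dim)."""
    old = len(embeddings)
    if new_vocab_size <= old:
        return embeddings[:new_vocab_size]  # truncate when shrinking
    rng = random.Random(seed)
    # new rows use a small-variance normal init, a common convention
    new_rows = [[rng.gauss(0.0, 0.02) for _ in range(dim)]
                for _ in range(new_vocab_size - old)]
    return embeddings + new_rows

emb = [[0.0] * 4 for _ in range(10)]
emb2 = resize_token_embeddings(emb, 12, 4)
# the first 10 rows are unchanged; 2 fresh rows are appended
```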
Support extending the maximum input sequence length
- The multimodal key-value labeling model can only take 512 input tokens #2490
- Support resize_position_embeddings for layoutlm-models #2513
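The 512-token limit in #2490 comes from the size of the learned position-embedding table. A hypothetical sketch of `resize_position_embeddings` (one simple heuristic is reusing the last learned row as the starting point for new positions; the real PR may use a different scheme):

```python
def resize_position_embeddings(pos_emb, new_max_len):
    """pos_emb: list of per-position vectors (max_len x dim)."""
    old = len(pos_emb)
    if new_max_len <= old:
        return pos_emb[:new_max_len]
    # new positions copy the last learned row; they are then fine-tuned
    last = pos_emb[-1]
    return pos_emb + [list(last) for _ in range(new_max_len - old)]

pe = [[float(i)] for i in range(512)]
pe2 = resize_position_embeddings(pe, 1024)
# positions 0..511 unchanged; 512..1023 start from the row for position 511
```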
Support passing an inputs_embeds argument to models
- Does BertModel have an alternative parameter to input_embeds? #2580
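The idea in #2580 is to let callers bypass the embedding lookup by supplying precomputed vectors. A hedged sketch of the usual contract (exactly one of `input_ids` or `inputs_embeds` must be given; names mirror common transformer-library conventions, not a confirmed PaddleNLP signature):

```python
def embed_inputs(input_ids=None, inputs_embeds=None, embedding_table=None):
    """Return the embedded input sequence from ids or from precomputed vectors."""
    if (input_ids is None) == (inputs_embeds is None):
        raise ValueError("specify exactly one of input_ids or inputs_embeds")
    if inputs_embeds is not None:
        return inputs_embeds  # caller already embedded (e.g. for soft prompts)
    return [embedding_table[i] for i in input_ids]

table = {0: [0.0, 0.0], 1: [1.0, 1.0]}
embed_inputs(input_ids=[1, 0], embedding_table=table)
embed_inputs(inputs_embeds=[[0.5, 0.5]])
```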
Support modifying word_embeddings
from_pretrained
Cache the config file when loading built-in models, so the cache directory can be used offline @wj-Mcat
Allow creating a model from only a config file, with random initialization; clarify how to obtain a config and how to create a model from it
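The last item (a randomly initialized model built from a config alone, without downloading weights) can be sketched as follows; the class names and init scheme here are hypothetical, not the actual PaddleNLP API:

```python
import random

class ToyConfig:
    """Stand-in for a model config holding architecture hyperparameters."""
    def __init__(self, vocab_size=100, hidden_size=8):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size

class ToyModel:
    def __init__(self, config, seed=0):
        rng = random.Random(seed)
        # weights are randomly initialized straight from the config;
        # no pretrained checkpoint is ever touched
        self.embeddings = [
            [rng.gauss(0.0, 0.02) for _ in range(config.hidden_size)]
            for _ in range(config.vocab_size)
        ]

    @classmethod
    def from_config(cls, config):
        return cls(config)

model = ToyModel.from_config(ToyConfig(vocab_size=50, hidden_size=4))
# a usable, randomly initialized model, no network access required
```

This complements `from_pretrained`: the config-only path is what you want for training from scratch or for offline environments where only the cached config is available.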