Hello, I have three questions:

1. While running your code, I would like to run the downstream tasks directly, without pretraining, so I can measure the effect of pretraining. Is this possible, and if so, how should I modify the code?
2. Can I use the standard bert-base-uncased as the base model for the pretraining step, then run the downstream tasks, in order to compare against the VLP-MABSA pretrained model?
3. I changed parameters such as mlm_enabled and mrm_enabled in the pretraining script. After pretraining finished, two folders, model0 and model40, were generated. I then pointed --model_config and --checkpoint in the downstream script 15_pretrain_full.sh at the corresponding files (the .bin file and config.json under model0 and model40), but the results of running 15_pretrain_full.sh did not change at all. Did I modify it incorrectly, or is there some other cause?

If possible, I would appreciate your taking a look. Many thanks!
If you don't want to use the pretrained model, simply replace the checkpoint in the downstream-task script with the original BART model, or just delete the checkpoint line from the script.
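The two options above can be sketched as edits to 15_pretrain_full.sh. This is a hedged illustration, not the script's actual contents: the entry-point name downstream_train.py is hypothetical, and only --model_config and --checkpoint are flag names confirmed in this thread.

```shell
# Sketch of 15_pretrain_full.sh (hypothetical entry point downstream_train.py;
# only --model_config / --checkpoint are flags mentioned in this thread).

# Option A: point --checkpoint at the original BART weights instead of
# the VLP-MABSA pretrained checkpoint:
python downstream_train.py \
    --model_config facebook/bart-base \
    --checkpoint facebook/bart-base
    # ...other task-specific arguments unchanged

# Option B: delete the --checkpoint line entirely, so the model is
# initialized from the default BART weights:
python downstream_train.py \
    --model_config facebook/bart-base
    # ...no --checkpoint argument
```

Either way, the downstream task then trains from plain BART, giving a no-pretraining baseline to compare against the VLP-MABSA checkpoint.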