
Question about training time #46

Open
wangupupup opened this issue Sep 22, 2024 · 2 comments
Comments

@wangupupup

Hello, author! I'd like to retrain PUT on CelebA-HQ; the training set has 27k images. Roughly how long should the first-stage P-VQVAE and the second-stage UQ-Transformer take to train? In a previous attempt, I trained the first stage for 150 epochs and the second stage for 180 epochs; the two stages together took nearly a week, but the inpainting results were not satisfactory, as shown below:
[image: inpainting result]

@liuqk3
Owner

liuqk3 commented Sep 23, 2024

Thanks for your interest. From my experiments on FFHQ, as I recall, with 8 RTX 3090s, P-VQVAE took about two days and UQ-Transformer about a week. The first-stage P-VQVAE trains quickly; around 100 epochs should be enough. For the second stage I trained for 800 epochs, but with a larger dataset the epoch count can be reduced; for example, on ImageNet about 250 epochs was already sufficient. The result you posted actually looks reasonable. Two things could explain what you are seeing: (1) Is this an image saved during training? If so, I recommend evaluating the trained model with the test script, because the hyperparameters the training script uses when automatically saving inpainted images are not optimal; I set them by intuition early on. (2) The UQ-Transformer has not fully converged.
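For a rough sense of scale, here is a back-of-the-envelope sketch (Python) comparing the reporter's 180 second-stage epochs against the 800-epoch schedule mentioned above. The epochs-per-day throughput and the 2-day/5-day split between stages are assumptions inferred from the numbers in this thread, not measurements:

```python
# Rough estimate of remaining UQ-Transformer training time.
# All numbers come from this thread; the stage split is assumed.

epochs_done = 180          # second-stage epochs the reporter trained
epochs_target = 800        # epochs the author used on FFHQ
days_spent_stage2 = 7 - 2  # ~1 week total minus ~2 days for P-VQVAE (assumed split)

epochs_per_day = epochs_done / days_spent_stage2
remaining_days = (epochs_target - epochs_done) / epochs_per_day

print(f"progress: {epochs_done / epochs_target:.0%} of the 800-epoch schedule")
print(f"estimated additional training: ~{remaining_days:.0f} days at the same throughput")
```

At 180 of 800 epochs the run is under a quarter of the way through the author's schedule, which is consistent with the under-converged results the reporter observed.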

@wangupupup
Author

Thank you for your reply! This image was produced with the test script, so it does look like the UQ-Transformer has not converged. I will keep training and see how it goes.
