Reproduce issues #8
Comments
Here are my training logs.
As you can see, our checkpoint provided in the shared Google Drive link achieves … Fortunately, I found my experimental records. My FID-50K with CFG at 300K is … It is weird.
Yes. I tested your checkpoint and it gives me something around 2.42, which is reasonable. However, the model I trained cannot reach this number. Do you have any idea what might be the cause?
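(Note for other readers: "FID-50K with CFG" above means FID computed on 50K generated samples using classifier-free guidance. The usual guidance combination, written with generic names that are not identifiers from this repo, is roughly:

```python
import torch

def cfg_eps(eps_cond: torch.Tensor, eps_uncond: torch.Tensor, w: float) -> torch.Tensor:
    # Standard classifier-free guidance combination:
    # w = 0 -> purely unconditional, w = 1 -> purely conditional, w > 1 -> amplified guidance.
    return eps_uncond + w * (eps_cond - eps_uncond)
```

Since the guidance scale has a large effect on FID, it is worth double-checking that the scale used at sampling time matches the one in the provided config.)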
Did you use the same config for evaluation as in training?
Yes, I used the same config you provided.
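(For anyone debugging a similar gap: one quick sanity check is to load the training and evaluation configs and diff the fields that affect sampling. A minimal sketch, assuming the config file exposes a get_config() function returning a dict-like object, as ml_collections-style configs usually do; the second path and the field names are placeholders:

```python
import importlib.util

def load_config(path):
    # Load a Python config file and return get_config()
    # (assumption: the config module defines get_config()).
    spec = importlib.util.spec_from_file_location("cfg", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod.get_config()

train_cfg = load_config("configs/imagenet256_H_DiM.py")
eval_cfg = load_config("configs/imagenet256_H_DiM.py")  # placeholder: the config actually used for evaluation

# Placeholder field names; diff whatever controls sampling and data preprocessing.
for key in ["sample", "nnet", "dataset"]:
    t, e = train_cfg.get(key), eval_cfg.get(key)
    if t != e:
        print(f"mismatch in '{key}':\n  train: {t}\n  eval:  {e}")
```

If the two configs are byte-identical, this rules out an evaluation-side mismatch and points back at the training run itself.)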
OK.
Yes, that's correct. Does the loss in my training log look reasonable?
Your loss seems slightly higher than mine, which may explain the difference in performance. Here is my log file:
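(If it helps to compare the two logs quantitatively rather than by eye, a small script can pull the loss values out of each file and summarize the tail of each run. A minimal sketch, assuming plain-text logs containing lines like "step 100000 loss 0.1234"; the regex and file names are placeholders to adjust to the real format:

```python
import re

LOSS_RE = re.compile(r"loss[ :=]+([0-9]*\.?[0-9]+)")  # adjust to the actual log line format

def read_losses(path):
    # Collect every loss value that appears in a plain-text training log.
    losses = []
    with open(path) as f:
        for line in f:
            m = LOSS_RE.search(line)
            if m:
                losses.append(float(m.group(1)))
    return losses

# Placeholder file names for the two runs being compared.
for name, path in [("my run", "my_training.log"), ("reference run", "reference_training.log")]:
    vals = read_losses(path)
    tail = vals[-100:]
    if tail:
        print(f"{name}: {len(vals)} loss entries, mean over last {len(tail)} = {sum(tail) / len(tail):.4f}")
```

Comparing the mean loss over the last stretch of training is usually a more reliable signal than eyeballing individual noisy values.)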
Hi authors! Thank you so much for your code and scripts. I am trying to replicate your DiM-H results. I trained the model on ImageNet with the given config configs/imagenet256_H_DiM.py for 300k training steps, but I only get an FID score of 2.7. While that is not a huge difference from your checkpoint, it is still a bit strange to see such a gap since I used the exact same settings. I would really appreciate any insights into my issue. Here are the FID results:
Do you know what might be the issue? Thanks a lot for your help!
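(One more variable worth ruling out is the FID implementation and reference statistics themselves, since different protocols can shift the score by a few tenths. A minimal sketch of an independent check using the clean-fid package, which is not necessarily what this repo uses; both folder names are placeholders, e.g. the 50K generated PNGs and a folder of reference ImageNet-256 images prepared the same way as in the paper:

```python
# pip install clean-fid
from cleanfid import fid

# Placeholder folders: generated samples vs. reference images.
score = fid.compute_fid("samples_50k", "imagenet256_reference")
print(f"FID: {score:.2f}")
```

If an independent measurement on the same samples also lands around 2.7, the gap is in the training run rather than in the evaluation pipeline.)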