Loss is not going down #33
Comments
I have the same problem. Have you solved it yet?
We have this problem, too.
This is fairly standard in diffusion training, because the model samples timesteps uniformly in the forward pass. If the model has already seen and optimized a given timestep several times, it yields a lower loss when presented with that timestep again than when it is presented with a timestep it has rarely seen. A quick experiment to confirm this: freeze the timestep vector and then train the model normally by iterating through your dataset. The loss should then decrease consistently.
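To illustrate the point, here is a minimal toy sketch (not the actual code from this repo): one scalar parameter per timestep stands in for the model's error at that timestep, and SGD only shrinks the parameter of the timestep that was sampled. With a fixed timestep the recorded loss decreases monotonically; with uniform sampling it jumps around, exactly as described above.

```python
import random

def train(T=10, steps=50, lr=0.1, fixed_t=None, seed=0):
    """Toy stand-in for diffusion training.

    theta[t] plays the role of the model's residual error at timestep t;
    the per-step loss is theta[t]**2, and a gradient step shrinks only
    the parameter of the sampled timestep.
    """
    rng = random.Random(seed)
    theta = [1.0] * T
    losses = []
    for _ in range(steps):
        # Either keep the timestep frozen or sample it uniformly,
        # mimicking the uniform timestep sampling in DDPM-style training.
        t = fixed_t if fixed_t is not None else rng.randrange(T)
        losses.append(theta[t] ** 2)      # loss recorded before the update
        theta[t] -= lr * 2 * theta[t]     # SGD step on d(theta**2)/dtheta
    return losses

fixed_losses = train(fixed_t=0)   # frozen timestep: loss decreases steadily
uniform_losses = train()          # uniform timesteps: loss is noisy
```

The fixed-timestep run decays geometrically, while the uniform run keeps bouncing back up whenever a rarely-sampled timestep is drawn, even though every parameter is improving on average.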
Got it! Thanks for your reply!
Hello dear authors! I trained DDIM on another dataset. After 1200 epochs, the loss still does not decrease consistently, and sometimes it becomes large. Is this normal?