Epoch size #92
Hi! If so, you can change the parameter. If you change this parameter, and you still want your loss to be saved and plotted at roughly the same time (when you visualize it in TensorBoard), you also need to change the other parameter.
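(The exact flag names aren't shown above, so the following is only a rough sketch of the general idea. It assumes the training loop is built on `tf.train.MonitoredTrainingSession`, where the checkpoint and summary intervals are typically the two values to keep in sync; `build_graph`, the `./train` directory, and the interval values are placeholders, not the project's actual parameters.)

```python
import tensorflow as tf

# Hypothetical graph construction; the real project code differs.
loss, train_op = build_graph()
tf.summary.scalar('loss', loss)  # summary op picked up by the session's SummarySaverHook

with tf.train.MonitoredTrainingSession(
        checkpoint_dir='./train',
        save_checkpoint_secs=300,   # how often a checkpoint is written
        save_summaries_secs=300     # keep in sync so the TensorBoard loss
                                    # curve lines up with the checkpoints
        ) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```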
Thank you for your response @spagliarini
I see. So far, the only way I have found to stop the training is manual: you simply stop it earlier than 200k iterations. Just make sure the generator is performing well enough by checking the preview. In #63 it was mentioned that good results were already obtained after 100k iterations, or even earlier. Actually, this is the first time I have worked with TensorFlow, and from what I found in the TensorFlow documentation it is possible to stop the training session automatically by fixing a threshold on the loss, but I couldn't find a good threshold. Are you more familiar with TensorFlow? Is there a way to stop the training based on the number of iterations?
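For stopping on an iteration count, one option (again assuming the loop uses `tf.train.MonitoredTrainingSession`, as in the sketch above) is to pass a `tf.train.StopAtStepHook`. The step limit below is just an example value, and `build_graph` is a placeholder for the project's own graph-building code.

```python
import tensorflow as tf

# Hypothetical graph construction; assumed to create a global step and to
# have train_op increment it (StopAtStepHook reads the global step).
loss, train_op = build_graph()

# End the session once the global step reaches last_step, e.g. stop after
# 100k iterations instead of letting training run to 200k.
hooks = [tf.train.StopAtStepHook(last_step=100000)]

with tf.train.MonitoredTrainingSession(checkpoint_dir='./train',
                                       hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```

A loss-based stop could be done similarly by reading the latest loss value inside the loop and breaking once it falls below a chosen threshold, though picking that threshold is the hard part, as noted above.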
How can I modify the iteration count or epoch size to reduce the training time?