Epochs for prior and unclip? #96
Replies: 7 comments 15 replies
-
I trained with 150 epochs at first and it came out OK; on the second try I used 120 and it turned out OK too. I did notice some overcooking in the 150 run, but only when increasing the prior steps up to 100. Now it's up to you, but from what I saw, somewhere around 120 is fine.
-
Personally, I haven't had time to train anything yet :) So I can't say for sure what the recommended settings are. 100 seems like a nice starting point. But if you have that many images... try around 30-40 maybe? And increase the batch size to make it faster (if your GPU can cope with that).
-
Concerning loss: try increasing the learning rate; maybe the network is converging too slowly because of the large dataset. There might be other reasons though... eventually we will work out best practices for training.
-
I invited Kodxana; maybe he can help you with the training on RunPod.
-
Wait, how did you download them? Because the link only has the
captioning files...
There's also this one with 4M+:
https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset/viewer/tarungupta83--MidJourney_v5_Prompt_dataset/train?row=6
…On Sat, 10 Jun 2023 at 13:48, user425846 ***@***.***> wrote:
I used this dataset from huggingface,
https://huggingface.co/datasets/wanng/midjourney-v5-202304-clean,
filtered by square aspect ratio, and resized the images. But I only downloaded
60k; there are more.
Also, I used the dataset of the upscaled images only, as they will be
better on average than random ones from Midjourney.
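The filter-and-resize step described above can be sketched roughly like this. This is a minimal illustration, not the poster's actual script: the `tolerance` value and the 768px target resolution are assumptions, and the loop over PIL images would in practice iterate over the downloaded dataset rows.

```python
# Sketch of the preprocessing described above: keep only (near-)square
# images and resize them to one training resolution.
# TARGET_SIZE and tolerance are hypothetical values, not from the thread.
from PIL import Image

TARGET_SIZE = 768  # assumed training resolution


def is_square(img: Image.Image, tolerance: float = 0.02) -> bool:
    """True if width and height differ by at most `tolerance` (relative)."""
    w, h = img.size
    return abs(w - h) / max(w, h) <= tolerance


def prepare(img: Image.Image):
    """Return a resized copy for square images, or None to drop the image."""
    if not is_square(img):
        return None
    return img.resize((TARGET_SIZE, TARGET_SIZE), Image.LANCZOS)


# A square image is kept and resized; a wide one is filtered out.
square = Image.new("RGB", (1024, 1024))
wide = Image.new("RGB", (1024, 576))
print(prepare(square).size)  # (768, 768)
print(prepare(wide))         # None
```

Filtering before resizing avoids distorting non-square images; center-cropping them instead would be an alternative if you want to keep more of the dataset.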
-
Unfortunately not! I ran the training in a terminal within JupyterLab, and
the next day all tabs were still open except the terminal; it had somehow
closed. So I'm not sure it even was an issue with the training script, most
likely something with running it through JupyterLab.
neutron_hare ***@***.***> wrote on Sat, 10 Jun 2023 at
18:51:
… @user425846 <https://github.com/user425846> You said your training
crashed, do you remember the error message?
-
@kodxana @user425846 did you guys have any success with the training? Will you share the final model? :)
-
Hi! I'm currently training the prior for a style with a large dataset (thousands of images). How many epochs would you recommend for both the prior and unclip? Any recommendations for how long I should train these? Should I use the same number of epochs for both? Is it normal that the loss doesn't really decrease while training the prior?
Thanks for your help and for the awesome UI you guys built.