* Create train_dreambooth_inpaint.py
train_dreambooth.py adapted to work with the inpaint model, generating random masks during training (see the mask-generation sketch after the commit list)
* Update train_dreambooth_inpaint.py
refactored train_dreambooth_inpaint with black
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
Fix prior preservation
* Add instructions to README, fix SD2 compatibility
The script is also compatible with prior-preservation loss and gradient checkpointing (sketched below)
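For illustration, here is a minimal sketch of random mask generation for inpaint training, assuming rectangular masks drawn with PIL; the mask sizes, the `full_mask_prob` value, and the function name are illustrative assumptions, not the exact logic of `train_dreambooth_inpaint.py`:

```python
import random
from PIL import Image, ImageDraw

def random_mask(size, full_mask_prob=0.25):
    """Return a binary "L"-mode PIL mask: white = region to inpaint."""
    width, height = size
    mask = Image.new("L", size, 0)
    draw = ImageDraw.Draw(mask)
    if random.random() < full_mask_prob:
        # Occasionally mask the whole image so the model also learns to
        # generate the full image inside the masked region.
        draw.rectangle([0, 0, width, height], fill=255)
        return mask
    # Otherwise mask a random rectangle covering part of the image.
    w = random.randint(width // 4, width // 2)
    h = random.randint(height // 4, height // 2)
    x = random.randint(0, width - w)
    y = random.randint(0, height - h)
    draw.rectangle([x, y, x + w, y + h], fill=255)
    return mask
```

And a minimal sketch of enabling gradient checkpointing on the UNet, assuming the standard `diffusers` API (the checkpoint name is just an example):

```python
from diffusers import UNet2DConditionModel

# Example inpainting checkpoint; substitute your own model.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-inpainting", subfolder="unet"
)
# Recompute activations during the backward pass to reduce memory usage.
unet.enable_gradient_checkpointing()
```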
### Training with prior-preservation loss
Prior preservation is used to avoid overfitting and language drift; refer to the paper to learn more about it. For prior preservation, we first generate images with the model itself using a class prompt, and then use those images during training alongside our own data.
According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior preservation; 200-300 images work well for most cases.
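For reference, the sketch below shows how a prior-preservation loss is typically combined with the instance loss by splitting a batch that concatenates instance and class samples; the function name, tensor names, and the `prior_loss_weight` value of 1.0 are assumptions for illustration, not an excerpt from the training script:

```python
import torch
import torch.nn.functional as F

prior_loss_weight = 1.0  # assumed weight; typically exposed as a CLI flag

def dreambooth_loss(model_pred, target, with_prior_preservation=True):
    """Combine the instance loss with the class-image (prior) loss.

    Assumes the batch concatenates instance samples and class samples
    along the batch dimension, so the two halves can be split with chunk.
    """
    if with_prior_preservation:
        model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
        target, target_prior = torch.chunk(target, 2, dim=0)
        # Loss on the user's instance images.
        instance_loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
        # Loss on the generated class images, which preserves the class prior.
        prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
        return instance_loss + prior_loss_weight * prior_loss
    return F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```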
The script also allows fine-tuning the `text_encoder` along with the `unet`. It has been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces.
Pass the `--train_text_encoder` argument to the script to enable training the `text_encoder`.
___Note: Training the text encoder requires more memory; with this option, training won't fit on a 16GB GPU and needs at least 24GB of VRAM.___
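As a rough illustration of what `--train_text_encoder` changes, the sketch below adds the text encoder's parameters to the optimizer alongside the UNet's; the checkpoint name, learning rate, and variable names are placeholders rather than values taken from the script:

```python
import itertools

import torch
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

model_id = "runwayml/stable-diffusion-inpainting"  # example checkpoint
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

train_text_encoder = True  # mirrors the --train_text_encoder flag

# With --train_text_encoder both models receive gradient updates, which is
# why the memory requirement grows; otherwise only the UNet is optimized.
params_to_optimize = (
    itertools.chain(unet.parameters(), text_encoder.parameters())
    if train_text_encoder
    else unet.parameters()
)
optimizer = torch.optim.AdamW(params_to_optimize, lr=5e-6)
```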