From 55747607d6c6a52cb3eb3e6f8e06c07dba5a6f18 Mon Sep 17 00:00:00 2001
From: linoytsaban
Date: Tue, 27 Aug 2024 14:03:31 +0300
Subject: [PATCH 1/4] improve readme
---
 examples/dreambooth/README_flux.md | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/examples/dreambooth/README_flux.md b/examples/dreambooth/README_flux.md
index 952d86a1f2f0..7afc3fdc3586 100644
--- a/examples/dreambooth/README_flux.md
+++ b/examples/dreambooth/README_flux.md
@@ -8,7 +8,10 @@ The `train_dreambooth_flux.py` script shows how to implement the training proced
 >
 > Flux can be quite expensive to run on consumer hardware devices and as a result finetuning it comes with high memory requirements -
 > a LoRA with a rank of 16 (w/ all components trained) can exceed 40GB of VRAM for training.
-> For more tips & guidance on training on a resource-constrained device please visit [`@bghira`'s guide](https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md)
+
+> For more tips & guidance on training on a resource-constrained device please check out these great guides and trainers for FLUX:
+> 1) [`@bghira`'s guide](https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md)
+> 2) [`ostris`'s guide](https://github.com/ostris/ai-toolkit?tab=readme-ov-file#flux1-training)
 > [!NOTE]
@@ -100,6 +103,7 @@ accelerate launch train_dreambooth_flux.py \
   --instance_prompt="a photo of sks dog" \
   --resolution=1024 \
   --train_batch_size=1 \
+  --guidance_scale=1 \
   --gradient_accumulation_steps=4 \
   --learning_rate=1e-4 \
   --report_to="wandb" \
@@ -120,15 +124,23 @@ To better track our training experiments, we're using the following flags in the
 > [!NOTE]
 > If you want to train using long prompts with the T5 text encoder, you can use `--max_sequence_length` to set the token limit. The default is 77, but it can be increased to as high as 512. Note that this will use more resources and may slow down the training in some cases.
-> [!TIP]
-> You can pass `--use_8bit_adam` to reduce the memory requirements of training. Make sure to install `bitsandbytes` if you want to do so.
-
 ## LoRA + DreamBooth
 [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) is a popular parameter-efficient fine-tuning technique that allows you to achieve full-finetuning like performance but with a fraction of learnable parameters.
 Note also that we use PEFT library as backend for LoRA training, make sure to have `peft>=0.6.0` installed in your environment.
+### Prodigy Optimizer
+Prodigy is an adaptive optimizer that dynamically adjusts the learning rate of the learned parameters based on past gradients, allowing for more efficient convergence.
+By using Prodigy we can "eliminate" the need for manual learning rate tuning. Read more [here](https://huggingface.co/blog/sdxl_lora_advanced_script#adaptive-optimizers).
+
+To use Prodigy, specify:
+```bash
+--optimizer="prodigy"
+```
+> [!TIP]
+> When using Prodigy it's generally good practice to set `--learning_rate=1.0`.
+
 To perform DreamBooth with LoRA, run:
 ```bash
@@ -144,6 +156,7 @@ accelerate launch train_dreambooth_lora_flux.py \
   --instance_prompt="a photo of sks dog" \
   --resolution=512 \
   --train_batch_size=1 \
+  --guidance_scale=1 \
   --gradient_accumulation_steps=4 \
   --learning_rate=1e-5 \
   --report_to="wandb" \
@@ -162,6 +175,7 @@ Alongside the transformer, fine-tuning of the CLIP text encoder is also supporte
 To do so, just specify `--train_text_encoder` while launching training.
Please keep the following points in mind: > [!NOTE] +> This is still an experimental feature. > FLUX.1 has 2 text encoders (CLIP L/14 and T5-v1.1-XXL). By enabling `--train_text_encoder`, fine-tuning of the **CLIP encoder** is performed. > At the moment, T5 fine-tuning is not supported and weights remain frozen when text encoder training is enabled. @@ -180,6 +194,7 @@ accelerate launch train_dreambooth_lora_flux.py \ --instance_prompt="a photo of sks dog" \ --resolution=512 \ --train_batch_size=1 \ + --guidance_scale=1 \ --gradient_accumulation_steps=4 \ --learning_rate=1e-5 \ --report_to="wandb" \ @@ -191,5 +206,10 @@ accelerate launch train_dreambooth_lora_flux.py \ --push_to_hub ``` +## Memory Optimizations +### 8-bit-Adam Optimizer +### latent caching +### FP8 training + ## Other notes -Thanks to `bghira` for their help with reviewing & insight sharing ♥️ \ No newline at end of file +Thanks to `bghira` and `ostris` for their help with reviewing & insight sharing ♥️ \ No newline at end of file From b4816946a58c0722a2dbc336452c821ba06a2231 Mon Sep 17 00:00:00 2001 From: linoytsaban Date: Tue, 27 Aug 2024 14:06:31 +0300 Subject: [PATCH 2/4] improve readme --- examples/dreambooth/README_flux.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/examples/dreambooth/README_flux.md b/examples/dreambooth/README_flux.md index 7afc3fdc3586..40dd3fde434c 100644 --- a/examples/dreambooth/README_flux.md +++ b/examples/dreambooth/README_flux.md @@ -105,7 +105,8 @@ accelerate launch train_dreambooth_flux.py \ --train_batch_size=1 \ --guidance_scale=1 \ --gradient_accumulation_steps=4 \ - --learning_rate=1e-4 \ + --optimizer="prodigy" \ + --learning_rate=1. \ --report_to="wandb" \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ @@ -158,7 +159,8 @@ accelerate launch train_dreambooth_lora_flux.py \ --train_batch_size=1 \ --guidance_scale=1 \ --gradient_accumulation_steps=4 \ - --learning_rate=1e-5 \ + --optimizer="prodigy" \ + --learning_rate=1. \ --report_to="wandb" \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ @@ -196,7 +198,8 @@ accelerate launch train_dreambooth_lora_flux.py \ --train_batch_size=1 \ --guidance_scale=1 \ --gradient_accumulation_steps=4 \ - --learning_rate=1e-5 \ + --optimizer="prodigy" \ + --learning_rate=1. \ --report_to="wandb" \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ From 2bf883564815034fd9dde7543de399eacb381565 Mon Sep 17 00:00:00 2001 From: linoytsaban Date: Tue, 27 Aug 2024 14:28:59 +0300 Subject: [PATCH 3/4] improve readme --- examples/dreambooth/README_flux.md | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/examples/dreambooth/README_flux.md b/examples/dreambooth/README_flux.md index 40dd3fde434c..897fa9ca646a 100644 --- a/examples/dreambooth/README_flux.md +++ b/examples/dreambooth/README_flux.md @@ -9,10 +9,9 @@ The `train_dreambooth_flux.py` script shows how to implement the training proced > Flux can be quite expensive to run on consumer hardware devices and as a result finetuning it comes with high memory requirements - > a LoRA with a rank of 16 (w/ all components trained) can exceed 40GB of VRAM for training. 
-> For more tips & guidance on training on a resource-constrained device please check out these great guides and trainers for FLUX:
+> For more tips & guidance on training on a resource-constrained device and general good practices, please check out these great guides and trainers for FLUX:
 > 1) [`@bghira`'s guide](https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md)
-> 2) [`ostris`'s guide](https://github.com/ostris/ai-toolkit?tab=readme-ov-file#flux1-training)
-
+> 2) [`ostris`'s guide](https://github.com/ostris/ai-toolkit?tab=readme-ov-file#flux1-training)
 > [!NOTE]
 > **Gated model**
@@ -210,9 +209,15 @@ accelerate launch train_dreambooth_lora_flux.py \
 ```
 ## Memory Optimizations
+As mentioned, Flux Dreambooth LoRA training is very memory intensive Here are some options(some still experimental) for a more memory efficient training.
 ### 8-bit-Adam Optimizer
+When training with `AdamW`(doesnt apply to `prodigy`) You can pass `--use_8bit_adam` to reduce the memory requirements of training.
+Make sure to install `bitsandbytes` if you want to do so.
 ### latent caching
+When training w/o validation runs, we can pre-encode the training images with the vae, and then delete it to free up some memory.
+To enable `latent_caching`, first use the version in [this PR](https://github.com/huggingface/diffusers/blob/1b195933d04e4c8281a2634128c0d2d380893f73/examples/dreambooth/train_dreambooth_lora_flux.py), and then pass `--cache_latents`.
 ### FP8 training
-
+This is still highly experimental as it may not work for all hardware, but you can try FP8 precision training by following the steps
+[here](https://gist.github.com/sayakpaul/f0358dd4f4bcedf14211eba5704df25a).
 ## Other notes
 Thanks to `bghira` and `ostris` for their help with reviewing & insight sharing ♥️
\ No newline at end of file
From be272ce62ccad655b0e3febaf0cc4ff6e3c54699 Mon Sep 17 00:00:00 2001
From: linoytsaban
Date: Thu, 29 Aug 2024 14:07:40 +0300
Subject: [PATCH 4/4] improve readme
---
 examples/dreambooth/README_flux.md | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/examples/dreambooth/README_flux.md b/examples/dreambooth/README_flux.md
index 897fa9ca646a..eaa0ebd80666 100644
--- a/examples/dreambooth/README_flux.md
+++ b/examples/dreambooth/README_flux.md
@@ -209,15 +209,20 @@ accelerate launch train_dreambooth_lora_flux.py \
 ```
 ## Memory Optimizations
-As mentioned, Flux Dreambooth LoRA training is very memory intensive Here are some options(some still experimental) for a more memory efficient training.
+As mentioned, Flux Dreambooth LoRA training is very memory intensive. Here are some options (some still experimental) for more memory-efficient training.
+### Image Resolution
+An easy way to mitigate some of the memory requirements is through `--resolution`. `--resolution` refers to the resolution of the input images; all images in the train/validation dataset are resized to this resolution.
+Note that by default, images are resized to a resolution of 512, which is good to keep in mind if you're accustomed to training on higher resolutions.
+### Gradient Checkpointing and Accumulation
+* `--gradient_accumulation_steps` refers to the number of update steps to accumulate before performing a backward/update pass.
+By passing a value > 1 you can reduce the number of backward/update passes and hence also the memory requirements.
+* With `--gradient_checkpointing` we can save memory by not storing all intermediate activations during the forward pass.
+Instead, only a subset of these activations (the checkpoints) is stored and the rest is recomputed as needed during the backward pass. Note that this comes at the expense of a slower backward pass.
 ### 8-bit-Adam Optimizer
-When training with `AdamW`(doesnt apply to `prodigy`) You can pass `--use_8bit_adam` to reduce the memory requirements of training.
+When training with `AdamW` (doesn't apply to `prodigy`) you can pass `--use_8bit_adam` to reduce the memory requirements of training.
 Make sure to install `bitsandbytes` if you want to do so.
 ### latent caching
 When training w/o validation runs, we can pre-encode the training images with the vae, and then delete it to free up some memory.
 To enable `latent_caching`, first use the version in [this PR](https://github.com/huggingface/diffusers/blob/1b195933d04e4c8281a2634128c0d2d380893f73/examples/dreambooth/train_dreambooth_lora_flux.py), and then pass `--cache_latents`.
-### FP8 training
-This is still highly experimental as it may not work for all hardware, but you can try FP8 precision training by following the steps
-[here](https://gist.github.com/sayakpaul/f0358dd4f4bcedf14211eba5704df25a).
 ## Other notes
 Thanks to `bghira` and `ostris` for their help with reviewing & insight sharing ♥️
\ No newline at end of file
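
As a companion to the latent caching section added above, here is a minimal sketch of what the pre-encoding idea boils down to: run the images through the VAE once, keep the latents, then free the VAE before training starts. The `--cache_latents` flag does the equivalent inside the training script, so this is only an illustration; the model id (`black-forest-labs/FLUX.1-dev`), the `dog` image folder, the resolution of 512, and the bfloat16 dtype are assumptions for the example, not part of the patches.

```python
# Sketch of latent caching: encode training images once, then free the VAE.
# Assumes access to the gated FLUX.1-dev checkpoint and a local "dog" folder.
import gc
from pathlib import Path

import torch
from diffusers import AutoencoderKL
from PIL import Image
from torchvision import transforms

device = "cuda"
vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16
).to(device)
vae.requires_grad_(False)

preprocess = transforms.Compose(
    [
        transforms.Resize(512),
        transforms.CenterCrop(512),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),  # map pixel values to [-1, 1]
    ]
)

cached_latents = []
with torch.no_grad():
    for image_path in sorted(Path("dog").glob("*.jpeg")):  # placeholder instance images
        pixels = preprocess(Image.open(image_path).convert("RGB"))
        pixels = pixels.unsqueeze(0).to(device, dtype=vae.dtype)
        latents = vae.encode(pixels).latent_dist.sample()
        # shift and scale the latents, as done when preparing model inputs
        latents = (latents - vae.config.shift_factor) * vae.config.scaling_factor
        cached_latents.append(latents.cpu())

# the VAE is no longer needed during the training loop, so free its memory
del vae
gc.collect()
torch.cuda.empty_cache()
```

Because the latents are computed once up front, the VAE never has to stay resident in GPU memory during the training loop, which is where the memory saving comes from.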