v4.2.9
FLUX
Please note these nodes are still in the prototype stage and are subject to change. This Node API is not stable!
We currently support both FLUX dev and FLUX schnell, in workflows only; they will be incorporated into the rest of the UI in future updates. This is an initial, developing implementation, but we're bringing it in with the intent of long-term stable support for FLUX.
Default workflows can be found in your workflow tab: FLUX Text to Image and FLUX Image to Image. Please note that FLUX has not yet been added to the linear UI, and LoRA support is not yet available; both will be added soon.
Required Dependencies
In order to run FLUX on Invoke, you will need to download and install several models. We have provided Starter Models options (found in your Model Manager tab) for quantized and unquantized versions of both FLUX dev and FLUX schnell. Selecting one of these will automatically download the dependencies you need, listed below. These dependencies are also available for ad-hoc download in the Starter Models list. Invoke currently supports only unquantized models and bitsandbytes NF4-quantized models.
- T5 encoder
- CLIP-L encoder
- FLUX transformer/unet
- FLUX VAE
Considerations
FLUX is a large model with significant VRAM requirements. The full models require 24GB of VRAM on Linux; Windows PCs are less efficient and need slightly more, making it difficult to run the full models there.
To compensate, the community has begun developing quantized versions of the dev model. These trade a slight reduction in quality for significant reductions in VRAM requirements.
Currently, Invoke only supports NVIDIA GPUs for FLUX. You may be able to work out a way to get an AMD GPU to generate, but we have not been able to test this and cannot provide committed support for it. FLUX on MPS is not supported at this time.
Please note that the FLUX dev model is released under a non-commercial license; you will need a commercial license to use the model for any commercial work.
Below are additional details on which model to use based on your system:
- FLUX dev quantized starter model: non-commercial, >16GB RAM, ≥12GB VRAM
- FLUX schnell quantized starter model: commercial, faster inference than dev, >16GB RAM, ≥12GB VRAM
- FLUX dev starter model: non-commercial, >32GB RAM, ≥24GB VRAM, Linux OS
- FLUX schnell starter model: commercial, >32GB RAM, ≥24GB VRAM, Linux OS
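The selection logic in the table above can be sketched as a small helper. This is purely illustrative (it is not part of Invoke's API, and the function name is hypothetical), assuming the thresholds listed above:

```python
def pick_flux_starter(ram_gb: float, vram_gb: float, os_name: str,
                      commercial: bool = False) -> str:
    """Suggest a FLUX starter model from system specs, per the table above.

    Illustrative only -- not an Invoke API. Full (unquantized) models are
    practical only on Linux with >32GB RAM and >=24GB VRAM; quantized
    models need >16GB RAM and >=12GB VRAM. FLUX dev is non-commercial,
    so commercial work should use FLUX schnell.
    """
    variant = "schnell" if commercial else "dev"
    if os_name.lower() == "linux" and ram_gb > 32 and vram_gb >= 24:
        return f"FLUX {variant}"
    if ram_gb > 16 and vram_gb >= 12:
        return f"FLUX {variant} (quantized)"
    raise RuntimeError("System does not meet FLUX minimum requirements")
```

For example, a Windows machine with 32GB RAM and a 12GB GPU would be pointed at a quantized model, while a Linux machine with 64GB RAM and 24GB VRAM can run the full models.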
Running the Workflow
You can find a new default workflow in your workflows tab called FLUX Text to Image. It can be run with both FLUX dev and FLUX schnell models, but note that the default step count of 30 is the recommendation for FLUX dev; if running FLUX schnell, we recommend lowering your step count to 4. You will not be able to run this workflow without the required dependencies listed above installed.
- Navigate to the Workflows tab.
- Press the Workflow Library button at the top left of your screen.
- Select Default Workflows and choose the FLUX workflow you’d like to use.
The exposed fields require you to select a FLUX model, T5 encoder, CLIP Embed model, VAE, prompt, and step count. If you are missing any models, use the Starter Models tab in the Model Manager to download and install FLUX dev or schnell.
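The step-count guidance above (30 for FLUX dev, 4 for FLUX schnell) can be captured in a tiny lookup. This is a hypothetical convenience helper for scripting around workflows, not an Invoke API:

```python
# Recommended step counts from the workflow defaults described above.
# Hypothetical helper -- not part of Invoke's API.
RECOMMENDED_STEPS = {"dev": 30, "schnell": 4}

def recommended_steps(variant: str) -> int:
    """Return the suggested step count for a FLUX variant ("dev" or "schnell")."""
    try:
        return RECOMMENDED_STEPS[variant.lower()]
    except KeyError:
        raise ValueError(f"Unknown FLUX variant: {variant!r}")
```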
We've also added a new default workflow named FLUX Image to Image. It can be run very similarly to the workflow described above, with the additional ability to provide a base image.
Other Changes
- Enhancement: add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp
- Enhancement: FLUX memory management improvements by @RyanJDick
- Feature: Add FLUX image-to-image and inpainting by @RyanJDick
- Feature: flux preview images by @brandonrising
- Enhancement: Add install probes for T5_encoder and ClipTextModel by @lstein
- Fix: support checkpoint bundles containing more than the transformer by @brandonrising
Installation and Updating
To install or update to v4.2.9, download the installer and follow the [installation instructions](https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/).
To update, select the same installation location. Your user data (images, models, etc) will be retained.
What's Changed
- Follow-up docker readme fixes by @ebr in #6661
- fix(ui): use empty string fallback if unable to parse prompts when creating style preset from existing image by @maryhipp in #6769
- Added support for bounding boxes in the Invocation API by @JPPhoto in #6781
- fix(ui): disable export button if no non-default presets by @maryhipp in #6773
- Brandon/flux model loading by @brandonrising in #6739
- build: remove broken scripts by @psychedelicious in #6783
- fix(ui): fix translations of model types in MM by @maryhipp in #6784
- Add selectedStylePreset to app parameters by @chainchompa in #6787
- feat(ui, nodes): add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp in #6794
- FLUX memory management improvements by @RyanJDick in #6791
- Fix source string in hugging face installs with subfolders by @brandonrising in #6797
- Add a new FAQ for converting checkpoints to diffusers by @lstein in #6736
- scripts: add allocate_vram script by @psychedelicious in #6617
- Add FLUX image-to-image and inpainting by @RyanJDick in #6798
- [MM] add API routes for getting & setting MM cache sizes by @lstein in #6523
- feat: flux preview images by @brandonrising in #6804
- Add install probes for T5_encoder and ClipTextModel by @lstein in #6800
- Build container image on-demand by @ebr in #6806
- feat: support checkpoint bundles containing more than the transformer by @brandonrising in #6808
- ui: translations update from weblate by @weblate in #6772
- Brandon/cast unquantized flux to bfloat16 by @brandonrising in #6815
Full Changelog: v4.2.8...v4.2.9