
v4.2.9

@brandonrising released this 05 Sep 20:58

FLUX

Please note these nodes are still in the prototype stage and are subject to change. This Node API is not stable!

At this time, we support both FLUX dev and FLUX schnell in workflows only; they will be incorporated into the rest of the UI in future updates. This is an initial and developing implementation, and we're bringing it in with the intent of long-term, stable support for FLUX.

Default workflows can be found in your Workflows tab: FLUX Text to Image and FLUX Image to Image. Please note that we have not added FLUX to the linear UI yet; LoRAs and Img2Img are not yet supported there, but will be added soon.

Required Dependencies


In order to run FLUX on Invoke, you will need to download and install several models. We have provided options in the Starter Models (found in your Model Manager tab) for quantized and unquantized versions of both FLUX dev and FLUX schnell. Selecting these will automatically download the dependencies you need, listed below. These dependencies are also available for ad-hoc download in the Starter Models list. Currently, Invoke only supports unquantized models and bitsandbytes NF4-quantized models.

  • T5 encoder
  • CLIP-L encoder
  • FLUX transformer/unet
  • FLUX VAE

Considerations

FLUX is a large model and has significant VRAM requirements. The full models require 24GB of VRAM on Linux; Windows PCs are less efficient and need slightly more, making it difficult to run the full models there.
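If you're unsure whether your GPU clears that bar, the snippet below is a minimal sketch (not part of Invoke; the 24GB threshold is taken from the note above) that checks the total VRAM PyTorch reports:

```python
# Minimal sketch: check whether the detected GPU can hold the full FLUX models.
# The 24 GB threshold mirrors the guidance above; quantized variants need less.
import torch

FULL_MODEL_VRAM_GB = 24

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if total_gb >= FULL_MODEL_VRAM_GB:
        print(f"{total_gb:.1f} GB VRAM detected: the full FLUX models should fit.")
    else:
        print(f"{total_gb:.1f} GB VRAM detected: consider the quantized (NF4) variants.")
else:
    print("No CUDA GPU detected: FLUX in Invoke currently requires an NVIDIA GPU.")
```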

To compensate for this, the community has begun developing quantized versions of the dev model. These trade a slight reduction in quality for a significant reduction in VRAM requirements.
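As a rough illustration of why quantization helps, the back-of-the-envelope estimate below compares weight storage at bf16 and at NF4. The 12B parameter count is the published size of the FLUX.1 transformer; activations, the T5 encoder, the VAE, and quantization scale overhead are all ignored, so real usage will be higher:

```python
# Back-of-the-envelope weight-memory estimate for the FLUX.1 transformer.
params = 12e9  # published parameter count of the FLUX.1 transformer

bf16_gb = params * 2 / 1024**3    # bf16: 2 bytes per parameter
nf4_gb = params * 0.5 / 1024**3   # NF4: 4 bits per parameter (scales ignored)

print(f"bf16 weights: ~{bf16_gb:.0f} GB")  # ~22 GB
print(f"NF4 weights:  ~{nf4_gb:.0f} GB")   # ~6 GB
```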

Currently, Invoke only supports FLUX on NVIDIA GPUs. You may be able to work out a way to get an AMD GPU to generate, but we have not been able to test this and cannot provide committed support for it. FLUX on MPS is not supported at this time.

Please note that the FLUX dev model is released under a non-commercial license. You will need a commercial license to use the model for any commercial work.

Below are additional details on which model to use based on your system:

  • FLUX dev quantized starter model: non-commercial, >16GB RAM, ≥12GB VRAM
  • FLUX schnell quantized starter model: commercial, faster inference than dev, >16GB RAM, ≥12GB VRAM
  • FLUX dev starter model: non-commercial, >32GB RAM, ≥24GB VRAM, Linux OS
  • FLUX schnell starter model: commercial, >32GB RAM, ≥24GB VRAM, Linux OS
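As an illustration only, the sketch below maps these guidelines onto a detected system. It assumes psutil and PyTorch are installed and is not something Invoke runs for you:

```python
# Illustrative sketch: compare detected RAM/VRAM/OS against the guidelines above.
import platform

import psutil
import torch

ram_gb = psutil.virtual_memory().total / 1024**3
vram_gb = (
    torch.cuda.get_device_properties(0).total_memory / 1024**3
    if torch.cuda.is_available()
    else 0.0
)
is_linux = platform.system() == "Linux"

if ram_gb > 32 and vram_gb >= 24 and is_linux:
    print("Full (unquantized) FLUX dev/schnell starter models should work.")
elif ram_gb > 16 and vram_gb >= 12:
    print("Use the quantized FLUX dev/schnell starter models.")
else:
    print("This system is below the recommended requirements for FLUX.")
```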

Running the Workflow

You can find a new default workflow in your Workflows tab called FLUX Text to Image. This can be run with both FLUX dev and FLUX schnell models, but note that the default step count of 30 is the recommendation for FLUX dev; if running FLUX schnell, we recommend lowering the step count to 4. You will not be able to run this workflow successfully unless the required dependencies listed above are installed.

  • Navigate to the Workflows tab.
  • Press the Workflow Library button at the top left of your screen.
  • Select Default Workflows and choose the FLUX workflow you’d like to use.

The exposed fields will require you to select a FLUX model, T5 encoder, CLIP Embed model, VAE, prompt, and step count. If you are missing any models, use the "Starter Models" tab in the Model Manager to download and install FLUX dev or schnell.
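For readers who want to see the dev/schnell settings spelled out in code, the sketch below approximates the FLUX Text to Image workflow using Hugging Face diffusers rather than Invoke's node API; the model ID, offloading choice, and prompt are placeholders for your own setup:

```python
# Rough diffusers equivalent of the FLUX Text to Image workflow (not Invoke's API).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

image = pipe(
    "a photo of a forest at dawn",
    num_inference_steps=4,  # ~4 for schnell; use ~30 for FLUX dev
    guidance_scale=0.0,     # schnell is guidance-distilled; dev uses ~3.5
).images[0]
image.save("flux_schnell.png")
```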


We've also added a new default workflow named FLUX Image to Image. This runs very similarly to the workflow described above, with the additional ability to provide a base image.


Other Changes

  • Enhancement: add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp
  • Enhancement: FLUX memory management improvements by @RyanJDick
  • Feature: Add FLUX image-to-image and inpainting by @RyanJDick
  • Feature: flux preview images by @brandonrising
  • Enhancement: Add install probes for T5_encoder and ClipTextModel by @lstein
  • Fix: support checkpoint bundles containing more than the transformer by @brandonrising

Installation and Updating

To install or update to v4.2.9, download the installer and follow the [installation instructions](https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/).

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What's Changed

Full Changelog: v4.2.8...v4.2.9