🧨 Diffusers now uses 🤗 PEFT, new tuning methods, better quantization support, higher flexibility and more

Released by @BenjaminBossan on 03 Nov 09:42

Highlights

Integration with diffusers

🧨 Diffusers now leverages PEFT as the backend for LoRA inference with Stable Diffusion models (#873, #993, #961). The relevant PRs on 🧨 Diffusers are huggingface/diffusers#5058, huggingface/diffusers#5147, huggingface/diffusers#5151, and huggingface/diffusers#5359. This unlocks a wide range of practical use cases for adapter-based inference 🚀. With easy-to-use APIs that support multiple checkpoint formats (Diffusers format, Kohya format ...), you can now:

  1. use multiple LoRAs
  2. switch between them instantaneously
  3. scale and combine them
  4. merge/unmerge
  5. enable/disable

For details, refer to the documentation at Inference with PEFT.
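
As an illustration, here is a minimal sketch of this workflow; the pipeline checkpoint and LoRA repository IDs below are placeholders, and the adapter names are user-chosen:

```python
import torch
from diffusers import DiffusionPipeline

# Load a Stable Diffusion pipeline (checkpoint ID is illustrative).
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load two LoRAs under distinct adapter names (repo IDs are placeholders).
pipe.load_lora_weights("some-user/pixel-art-lora", adapter_name="pixel")
pipe.load_lora_weights("some-user/toy-style-lora", adapter_name="toy")

# Combine and scale both adapters for a single generation.
pipe.set_adapters(["pixel", "toy"], adapter_weights=[1.0, 0.5])
image = pipe("a toy robot in pixel art style").images[0]

# Switch back to a single adapter instantaneously.
pipe.set_adapters("pixel")

# Merge the active adapters into the base weights, or undo the merge.
pipe.fuse_lora()
pipe.unfuse_lora()

# Disable all adapters to recover the base model's behavior.
pipe.disable_lora()
```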

New tuning methods

Other notable additions

  • Merging of LoRA weights is now allowed when the base model is loaded with 4bit or 8bit quantization (bitsandbytes), thanks to @jiqing-feng (#851, #875); see the first sketch after this list
  • IA³ now supports 4bit quantization thanks to @His-Wardship (#864)
  • Adapter layer initialization is now faster, which should be most noticeable when creating a PEFT LoRA model on top of a large base model (#887, #915, #994)
  • More fine-grained control when configuring LoRA: it is now possible to set different ranks and alpha values for different layers (#873); see the second sketch after this list
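
A hedged sketch of merging under quantization; the model ID and adapter path are placeholders, and merge_and_unload is PEFT's usual merge entry point:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in 4bit via bitsandbytes (model ID is illustrative).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16
)
base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", quantization_config=bnb_config, device_map="auto"
)

# Attach a trained LoRA adapter (path is a placeholder).
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Merging the LoRA weights now also works on 4bit/8bit quantized layers.
merged = model.merge_and_unload()
```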
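
And a sketch of the finer-grained LoRA configuration, using the rank_pattern and alpha_pattern fields of LoraConfig to override the defaults for layers whose names match a pattern; the patterns and values below are illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,              # default rank for all targeted layers
    lora_alpha=16,    # default alpha
    target_modules=["q_proj", "v_proj"],
    # Override rank and alpha for layers whose names match these patterns.
    rank_pattern={"q_proj": 16},
    alpha_pattern={"q_proj": 32},
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```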

Experimental features

  • For some adapters like LoRA, it is now possible to activate multiple adapters at the same time (#873)
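
Since this feature is experimental, the exact entry point may still change. A minimal sketch, assuming two LoRA adapters are registered and that the tuner-level set_adapter accepts a list of names:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Create a model with one adapter, then register a second one.
config_a = LoraConfig(r=8, target_modules=["q_proj", "v_proj"])
config_b = LoraConfig(r=16, target_modules=["q_proj", "v_proj"])
peft_model = get_peft_model(model, config_a, adapter_name="adapter_a")
peft_model.add_adapter("adapter_b", config_b)

# Experimental: activate both adapters at once at the tuner level
# (assumes set_adapter accepts a list of adapter names here).
peft_model.base_model.set_adapter(["adapter_a", "adapter_b"])
```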

Breaking changes

  • It is no longer possible to create a LoRA adapter with rank 0 (r=0). Previously, this was silently accepted and the resulting adapter was ignored.

What's Changed

As always, this release includes many small improvements, bug fixes, and documentation updates. We thank all external contributors, both new and recurring. Below is the full list of changes since the last release.

New Contributors

Full Changelog: v0.5.0...v0.6.0