From 9c7cda2f274629d1e76c42f010ef4be134fde516 Mon Sep 17 00:00:00 2001
From: Xiaoyu Xu
Date: Wed, 31 Jan 2024 14:15:33 +0800
Subject: [PATCH] Update onediffx README.md (#596)

---
 onediff_diffusers_extensions/README.md | 45 +++++++++++++++++++-------
 1 file changed, 33 insertions(+), 12 deletions(-)

diff --git a/onediff_diffusers_extensions/README.md b/onediff_diffusers_extensions/README.md
index b038aaf57..c31ec19e2 100644
--- a/onediff_diffusers_extensions/README.md
+++ b/onediff_diffusers_extensions/README.md
@@ -1,11 +1,14 @@
-# OneDiffX
+# OneDiffX (for HF diffusers)
 
-OneDiffX include multiple popular accelerated versions of the AIGC algorithm, such as DeepCache, which you would have a hard time finding elsewhere.
+OneDiffX is a OneDiff extension for HF diffusers. It provides acceleration utilities such as DeepCache.
 
 - [Install and Setup](#install-and-setup)
+- [compile_pipe](#compile_pipe)
 - [DeepCache Speedup](#deepcache-speedup)
   - [Stable Diffusion XL](#run-stable-diffusion-xl-with-onediffx)
-  - [Stable Diffuison 1.5](#run-stable-diffusion-15-with-onediffx)
+  - [Stable Diffusion 1.5](#run-stable-diffusion-15-with-onediffx)
+- [LoRA loading and switching speed up](#lora-loading-and-switching-speed-up)
+- [Quantization](#quantization)
 - [Contact](#contact)
 
 ## Install and setup
@@ -18,6 +21,24 @@ OneDiffX include multiple popular accelerated versions of the AIGC algorithm, su
 git clone https://github.com/siliconflow/onediff.git
 cd onediff_diffusers_extensions && python3 -m pip install -e .
 ```
+## compile_pipe
+Compile a diffusers pipeline with `compile_pipe`:
+```python
+import torch
+from diffusers import StableDiffusionXLPipeline
+
+from onediffx import compile_pipe
+
+pipe = StableDiffusionXLPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0",
+    torch_dtype=torch.float16,
+    variant="fp16",
+    use_safetensors=True
+)
+pipe.to("cuda")
+
+pipe = compile_pipe(pipe)
+```
 
 ## DeepCache speedup
 
@@ -55,7 +76,7 @@ deepcache_output = pipe(
 ).images[0]
 ```
 
-### Run Stable Diffusion 1.5 with OneDiff diffusers extensions
+### Run Stable Diffusion 1.5 with OneDiffX
 
 ```python
 import torch
@@ -129,14 +150,6 @@ deepcache_output = pipe(
 export_to_video(deepcache_output, "generated.mp4", fps=7)
 ```
 
-### Quantization
-
-**Note**: Quantization feature is only supported by **OneDiff Enterprise**.
-
-OneDiff Enterprise offers a quantization method that reduces memory usage, increases speed, and maintains quality without any loss.
-
-If you possess a OneDiff Enterprise license key, you can access instructions on OneDiff quantization and related models by visiting [Hugginface/siliconflow](https://huggingface.co/siliconflow). Alternatively, you can [contact](#contact) us to inquire about purchasing the OneDiff Enterprise license.
-
 ## LoRA loading and switching speed up
 
 OneDiff provides a faster implementation of loading LoRA, by invoking `onediffx.utils.lora.load_and_fuse_lora` you can load and fuse LoRA to pipeline.
@@ -178,6 +191,14 @@ We compared different methods of loading LoRA. The comparison of loading LoRA on
 
 If you want to unload LoRA and then load a new LoRA, you only need to call `load_and_fuse_lora` again. There is no need to manually call `unfuse_lora`, cause it will be called implicitly in `load_and_fuse_lora`. You can also manually call `unfuse_lora` to restore the model's weights.
 
+## Quantization
+
+**Note**: The quantization feature is only supported by **OneDiff Enterprise**.
+
+OneDiff Enterprise offers a quantization method that reduces memory usage and increases speed with no loss in quality.
+
+If you possess a OneDiff Enterprise license key, you can access instructions on OneDiff quantization and related models by visiting [HuggingFace/siliconflow](https://huggingface.co/siliconflow). Alternatively, you can [contact](#contact) us to inquire about purchasing a OneDiff Enterprise license.
+
 ## Contact
 
 For users of OneDiff Community, please visit [GitHub Issues](https://github.com/siliconflow/onediff/issues) for bug reports and feature requests.