enterprise doc add sdwebui #705

Merged: 8 commits merged on Mar 6, 2024
41 changes: 41 additions & 0 deletions README_ENTERPRISE.md
@@ -16,6 +16,8 @@ OneDiff Enterprise offers a quantization method that reduces memory usage, incre
- [SD-1.5](#SD-1.5)
- [SDXL](#SDXL)
- [SVD](#SVD)
- [Stable Diffusion WebUI with OneDiff Enterprise](#stable-diffusion-webui-with-onediff-enterprise)
- [SD-1.5](#sd-15)
- [Diffusers with OneDiff Enterprise](#diffusers-with-onediff-enterprise)
- [SDXL](#SDXL)
- [SVD](#SVD)
@@ -169,6 +171,45 @@ wget https://huggingface.co/siliconflow/stable-video-diffusion-xt-comfyui-deepca
- Workflow: [SVD + DeepCache](https://huggingface.co/siliconflow/stable-video-diffusion-xt-comfyui-deepcache-int8/blob/main/svd-int8-deepcache-workflow.png)


## Stable Diffusion WebUI with OneDiff Enterprise

If you are using the official StableDiffusionXL weights, simply tick the **Model Quantization(int8) Speed Up** option.

<img src="./imgs/Enterprise_Tutorial_WebUI.png">

### SD-1.5

#### Scripts

Run `quantize-sd-fast.py` with the following command to produce a quantized model:

```bash
python3 quantize-sd-fast.py \
    --model /path/to/your/sd/model \
    --quant_model /path/to/save/quantized/model \
    --height 512 --width 512 \
    --use_safetensors
```

The meaning of each parameter is as follows:

- `--model` Path of the model to be quantized.
- `--quant_model` Path where the quantized model will be saved.
- `--height`, `--width` Size of the images generated during quantization.
- `--use_safetensors` If specified, the quantized model is saved in the safetensors format.
- `--format` Must be one of `['diffusers', 'sd']`, and defaults to `'sd'`. If set to `'diffusers'`, the model is saved in the Hugging Face Diffusers directory layout; if set to `'sd'`, it is saved as a single Stable Diffusion checkpoint file.
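
For example, to save the quantized model in the Diffusers layout instead of a single checkpoint file, an invocation along the following lines should work (the paths are placeholders for your local files, not files shipped with this repository):

```bash
python3 quantize-sd-fast.py \
    --model /path/to/your/sd/model \
    --quant_model /path/to/save/quantized/model \
    --height 512 --width 512 \
    --format diffusers
```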

After the script finishes running, you will find the quantized model, saved as `model.safetensors`, in the folder specified by `--quant_model`; you can then load this quantized model in Stable Diffusion WebUI.

<img src="./imgs/Enterprise_Tutorial_WebUI_Script.png">

> Note: When you are using a quantized model, you should **not** tick the **Model Quantization(int8) Speed Up** option.


## Diffusers with OneDiff Enterprise

### SDXL
Binary file added imgs/Enterprise_Tutorial_WebUI.png
Binary file added imgs/Enterprise_Tutorial_WebUI_Script.png