Fix onediff comfy nodes docs #988

Merged
merged 23 commits into from Jul 20, 2024
Changes from all commits
171 changes: 74 additions & 97 deletions onediff_comfy_nodes/README.md
@@ -39,148 +39,125 @@ Updated on January 23, 2024. Device: RTX 3090
- [OneDiff ComfyUI Nodes](#onediff-comfyui-nodes)
  - [Documentation](#documentation)
    - [Installation Guide](#installation-guide)
      - [Setup Community Edition](#setup-community-edition)
      - [Setup Enterprise Edition](#setup-enterprise-edition)
    - [Basic Node Usage](#basic-node-usage)
      - [Model Speedup](#model-speedup)
      - [Load Checkpoint - OneDiff](#load-checkpoint---onediff)
    - [Compiler Cache](#compiler-cache)
      - [Avoid compilation time for online serving](#avoid-compilation-time-for-online-serving)
    - [Quantization](#quantization)
  - [Tutorials](#tutorials)
  - [Contact](#contact)


### Installation Guide
This guide provides two methods to install ComfyUI and integrate it with the OneDiff library: via the Comfy CLI or directly from GitHub.

Please install and set up [ComfyUI](https://github.com/comfyanonymous/ComfyUI) first, and then:

#### Setup Community Edition

<details close>
<summary> Option 1: Installing via Comfy CLI </summary>

1. **Install Comfy CLI**:
    ```shell
    pip install comfy-cli
    ```

2. **Install ComfyUI**:
    ```shell
    comfy install
    ```

3. **Install OneDiff Comfy Nodes**:
    ```shell
    comfy node install onediff_comfy_nodes
    ```
    By default, this installs the oneflow backend. You can add other backends if needed; please refer to the OneDiff GitHub repository [here](https://github.com/siliconflow/onediff?tab=readme-ov-file#install-a-compiler-backend).

</details>

<details close>
<summary> Option 2: Installing via GitHub </summary>

First, install and set up [ComfyUI](https://github.com/comfyanonymous/ComfyUI), and then follow these steps:

1. **Clone the OneDiff Repository**:
    ```shell
    git clone https://github.com/siliconflow/onediff.git
    ```

2. **Install OneDiff**:
    ```shell
    cd onediff && pip install -e .
    ```

3. **Integrate OneDiff Comfy Nodes with ComfyUI**:
    - **Symbolic Link (Recommended)**:
      ```shell
      ln -s $(pwd)/onediff_comfy_nodes path/to/ComfyUI/custom_nodes/
      ```
    - **Copy Directory**:
      ```shell
      cp -r onediff_comfy_nodes path/to/ComfyUI/custom_nodes/
      ```

4. **Install a Compiler Backend**:

    For instructions on installing a compiler backend for OneDiff, please refer to the OneDiff GitHub repository [here](https://github.com/siliconflow/onediff?tab=readme-ov-file#install-a-compiler-backend).

</details>

#### Setup Enterprise Edition

1. [Install OneDiff Enterprise](../README_ENTERPRISE.md#install-onediff-enterprise)

2. Install onediff_comfy_nodes for ComfyUI:
    ```bash
    git clone https://github.com/siliconflow/onediff.git
    cd onediff
    cp -r onediff_comfy_nodes path/to/ComfyUI/custom_nodes/
    ```


### Basic Node Usage

**Note**: All the images in this section can be loaded directly into ComfyUI to restore their full workflows.

#### Model Speedup
![](./docs/model_speedup_basic.png)

#### Load Checkpoint - OneDiff

"Load Checkpoint - OneDiff" is an optimized version of "Load Checkpoint", designed to accelerate inference without requiring any changes to your workflow. It keeps the same inputs and outputs as the original node.

![](./docs/model-speedup.png)

In the "Load Checkpoint - OneDiff" node, set `vae_speedup` to `enable` to turn on VAE acceleration as well.


### Compiler Cache

#### Avoid compilation time for online serving

The `"Load Checkpoint - OneDiff"` node automatically caches compiled results locally in the default directory `ComfyUI/input/graphs`.

```shell
# Set a custom directory for saving graphs in ComfyUI with the OneFlow backend
export COMFYUI_ONEDIFF_SAVE_GRAPH_DIR="/path/to/save/graphs"

# Enable the FX graph cache for faster recompilation
export TORCHINDUCTOR_FX_GRAPH_CACHE=1

# Specify a persistent cache directory for Torchinductor
export TORCHINDUCTOR_CACHE_DIR=~/.torchinductor_cache
```

### Quantization

**Note**: The quantization feature is only supported by **OneDiff Enterprise**.

OneDiff Enterprise offers a quantization method that reduces memory usage and increases speed while maintaining quality without any loss.

If you possess a OneDiff Enterprise license key, you can access instructions on OneDiff quantization and related models by visiting [Online Quantization for ComfyUI](./docs/ComfyUI_Online_Quantization.md). Alternatively, you can [contact](#contact) us to inquire about purchasing a OneDiff Enterprise license.

![](workflows/onediff_quant_base.png)

## Tutorials

- [Accelerate SD3 with onediff](./docs/sd3/README.md)
- [First Switch Lora](./docs/lora.md)
- [Accelerate cubiq/PuLID_ComfyUI](./docs/README.md)
- [Accelerate cubiq/ComfyUI_IPAdapter_plus](./docs/README.md)
- [Accelerate cubiq/ComfyUI_InstantID](./docs/README.md)
- [Accelerate ControlNet](./docs/ControlNet/README.md)
- [SVD](./docs/SVD/README.md)
- [DeepCache](./docs/lora_deepcache/README.md)
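
For online serving, the cache-related environment variables above can also be set from a launcher script before ComfyUI starts. A minimal sketch, assuming the paths are placeholders you would adapt (the launch command itself is left commented out):

```python
import os

# Copy the current environment and add the cache settings shown above.
env = os.environ.copy()
env["COMFYUI_ONEDIFF_SAVE_GRAPH_DIR"] = "/path/to/save/graphs"   # OneFlow graph cache dir
env["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"                        # enable inductor FX graph cache
env["TORCHINDUCTOR_CACHE_DIR"] = os.path.expanduser("~/.torchinductor_cache")

# Then launch ComfyUI with this environment, e.g.:
# subprocess.run(["python", "main.py"], env=env, cwd="path/to/ComfyUI")
print(env["TORCHINDUCTOR_FX_GRAPH_CACHE"])  # 1
```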

## Contact

1 change: 0 additions & 1 deletion onediff_comfy_nodes/_config.py
@@ -9,7 +9,6 @@
    "is_disable_oneflow_backend",
]

-
# https://github.com/comfyanonymous/ComfyUI/blob/master/folder_paths.py#L9
os.environ["COMFYUI_ROOT"] = folder_paths.base_path
_default_backend = os.environ.get("ONEDIFF_COMFY_NODES_DEFAULT_BACKEND", "oneflow")
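The default-backend selection above is a plain environment-variable lookup. A standalone sketch without the `folder_paths` dependency (the helper name `default_backend` is illustrative, not part of the real module):

```python
import os

def default_backend() -> str:
    # Mirrors the lookup above: fall back to "oneflow" unless the
    # ONEDIFF_COMFY_NODES_DEFAULT_BACKEND variable overrides it.
    return os.environ.get("ONEDIFF_COMFY_NODES_DEFAULT_BACKEND", "oneflow")

os.environ.pop("ONEDIFF_COMFY_NODES_DEFAULT_BACKEND", None)
print(default_backend())  # oneflow
```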
62 changes: 30 additions & 32 deletions onediff_comfy_nodes/_nodes.py
@@ -1,7 +1,6 @@
from typing import Optional, Tuple
import folder_paths
import torch
import comfy
import uuid
from nodes import CheckpointLoaderSimple, ControlNetLoader
from ._config import is_disable_oneflow_backend
@@ -81,8 +80,14 @@ class ModelSpeedup(SpeedupMixin):
     @classmethod
     def INPUT_TYPES(s):
         return {
-            "required": {"model": ("MODEL",), "inplace": ([False, True],),},
-            "optional": {"custom_booster": ("CUSTOM_BOOSTER",),},
+            "required": {"model": ("MODEL",),},
+            "optional": {
+                "custom_booster": ("CUSTOM_BOOSTER",),
+                "inplace": (
+                    "BOOLEAN",
+                    {"default": True, "label_on": "yes", "label_off": "no"},
+                ),
+            },
         }

     RETURN_TYPES = ("MODEL",)
@@ -92,8 +97,14 @@ class VaeSpeedup(SpeedupMixin):
     @classmethod
     def INPUT_TYPES(s):
         return {
-            "required": {"vae": ("VAE",), "inplace": ([False, True],),},
-            "optional": {"custom_booster": ("CUSTOM_BOOSTER",),},
+            "required": {"vae": ("VAE",)},
+            "optional": {
+                "custom_booster": ("CUSTOM_BOOSTER",),
+                "inplace": (
+                    "BOOLEAN",
+                    {"default": True, "label_on": "yes", "label_off": "no"},
+                ),
+            },
         }

     RETURN_TYPES = ("VAE",)
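The hunks above replace the old two-element combo input (`[False, True]`) with ComfyUI's `BOOLEAN` widget type, which carries a default value and toggle labels. A minimal standalone sketch of the pattern (no ComfyUI import; the class name is hypothetical):

```python
class SpeedupNodeSketch:
    # Illustrative node class: only the INPUT_TYPES shape matters here.
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {"model": ("MODEL",)},
            "optional": {
                "custom_booster": ("CUSTOM_BOOSTER",),
                "inplace": (
                    "BOOLEAN",
                    {"default": True, "label_on": "yes", "label_off": "no"},
                ),
            },
        }

# The widget type and its options are a (type, options) tuple.
widget, opts = SpeedupNodeSketch.INPUT_TYPES()["optional"]["inplace"]
print(widget, opts["default"])  # BOOLEAN True
```

Compared with a bare combo list, the `BOOLEAN` form lets the node declare a sensible default (`inplace=True` here) instead of silently picking the first list element.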
@@ -102,45 +113,32 @@ def speedup(self, vae, inplace=False, custom_booster: BoosterScheduler = None):
         return super().speedup(vae, inplace, custom_booster)


-class ControlnetSpeedup:
+class ControlnetSpeedup(SpeedupMixin):
     @classmethod
     def INPUT_TYPES(s):
         return {
-            "required": {},
+            "required": {"control_net": ("CONTROL_NET",),},
             "optional": {
-                "control_net": ("CONTROL_NET",),
-                "cnet_stack": ("CONTROL_NET_STACK",),
+                "inplace": (
+                    "BOOLEAN",
+                    {"default": True, "label_on": "yes", "label_off": "no"},
+                ),
                 "custom_booster": ("CUSTOM_BOOSTER",),
             },
         }

-    RETURN_TYPES = (
-        "CONTROL_NET",
-        "CONTROL_NET_STACK",
-    )
+    RETURN_TYPES = ("CONTROL_NET",)
     FUNCTION = "speedup"
     CATEGORY = "OneDiff"

     @torch.no_grad()
     def speedup(
-        self, control_net=None, cnet_stack=[], custom_booster: BoosterScheduler = None
+        self,
+        control_net=None,
+        inplace=True,
+        custom_booster: BoosterScheduler = None,
+        **kwargs
     ):
-        if custom_booster:
-            booster = custom_booster
-        else:
-            booster = BoosterScheduler(BasicBoosterExecutor(), inplace=True)
-
-        if control_net:
-            control_net = booster(control_net)
-
-        new_cnet_stack = []
-        for cnet in cnet_stack:
-            new_cnet = tuple([booster(cnet[0])] + list(cnet[1:]))
-            new_cnet_stack.append(new_cnet)
-        return (
-            control_net,
-            new_cnet_stack,
-        )
+        return super().speedup(control_net, inplace, custom_booster)
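The refactor above replaces `ControlnetSpeedup`'s bespoke `speedup` (which also rebuilt a `cnet_stack`) with delegation to the shared `SpeedupMixin`. A dependency-free sketch of that delegation pattern — all classes here are stubs mirroring the names in the diff, not the real onediff API:

```python
class BoosterScheduler:
    # Stub: a real booster would compile/optimize the model it is called on.
    def __init__(self, executor=None, inplace=True):
        self.inplace = inplace

    def __call__(self, model):
        return model  # identity stand-in for compilation

class SpeedupMixin:
    # Shared implementation: pick a booster, apply it, return a 1-tuple.
    def speedup(self, model, inplace=True, custom_booster=None):
        booster = custom_booster or BoosterScheduler(inplace=inplace)
        return (booster(model),)

class ControlnetSpeedup(SpeedupMixin):
    # After the refactor the node just delegates instead of duplicating
    # the booster-selection logic.
    def speedup(self, control_net=None, inplace=True, custom_booster=None, **kwargs):
        return super().speedup(control_net, inplace, custom_booster)

(result,) = ControlnetSpeedup().speedup(control_net="cnet")
print(result)  # cnet
```

Centralizing the booster logic in the mixin means a bug fix or new backend applies to every speedup node at once.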


class OneDiffApplyModelBooster:
@@ -191,7 +189,7 @@ def INPUT_TYPES(s):
     CATEGORY = "OneDiff/Loaders"
     FUNCTION = "onediff_load_controlnet"

-    @torch.no_grad()
+    @torch.inference_mode()
     def onediff_load_controlnet(self, control_net_name, custom_booster=None):
         controlnet = super().load_controlnet(control_net_name)[0]
         if custom_booster is None:
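The last hunk swaps `@torch.no_grad()` for `@torch.inference_mode()`. Both disable gradient tracking, but inference mode also skips autograd bookkeeping (version counters, view tracking) for extra speed, with the restriction that the resulting tensors can never participate in autograd later. A small illustration:

```python
import torch

x = torch.ones(2, 2)

with torch.no_grad():
    y = x * 2  # no grad tracking, but y is an ordinary tensor

with torch.inference_mode():
    z = x * 2  # faster: autograd metadata is never recorded

print(y.requires_grad, z.requires_grad)    # False False
print(y.is_inference(), z.is_inference())  # False True
```

For a loader node that only ever runs forward passes, inference mode is the stricter and cheaper choice.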
1 change: 0 additions & 1 deletion onediff_comfy_nodes/benchmarks/resources/prompts.txt
@@ -19,7 +19,6 @@ breathtaking night street of Tokyo, neon lights. award-winning, professional, hi
concept art {prompt}. digital artwork, illustrative, painterly, matte painting, highly detailed
16-bit pixel art, a cozy cafe side view, a beautiful day
Byzantine Mosaic Art of a single purple flower in a withe vase.
-A cute cat with a sign saying "Go Big or Go Home".
a cute cat with a sign saying "Go Big or Go Home"
A landscape photo of Iceland, with aurora, snow, ice and erupting lava
Fauvist Depiction of a Sunlit Village with Simplified Forms and Intense Color Contrasts.