Saving and Loading Refiner pipeline causes load error #708
Hello, in your code there are 3 important steps:

If you want to test it:

```python
from time import perf_counter

import torch
from diffusers import (
    AutoPipelineForText2Image,
    AutoPipelineForImage2Image,
    AutoencoderTiny,
)
import oneflow as flow
# from onediff.infer_compiler import oneflow_compile
from onediff.schedulers import EulerDiscreteScheduler
from onediffx import compile_pipe, save_pipe, load_pipe

# `vae`, `base`, and `queue` are constructed the same way as in the full scripts.
scheduler_refiner = EulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", subfolder="scheduler"
)
refiner = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    use_safetensors=True,
    torch_dtype=torch.float16,
    scheduler=scheduler_refiner,
    variant="fp16",
    vae=vae,
).to("cuda")

base = compile_pipe(base)
refiner = compile_pipe(refiner)
load_pipe(refiner, "refiner_graph")  # make sure the `refiner_graph` file exists

with flow.autocast("cuda"):
    for generation in queue:
        image = base(generation["prompt"], output_type="latent").images
        refiner(generation["prompt"], image=image)
```
The above code I shared was for testing only; it also errors out. In my actual code base, saving the refiner graph is a separate process, and loading the saved graph is a separate function.
I tried to run the code you gave above, but it failed because some values (e.g. `vae`) are not defined. The error message you show above is indeed an error caused by trying to load a graph using a compiled object. Can you re-confirm it? Or can you extract a POC that I can run directly, and then I can show you how to fix it.
I'll update in a few hours, when I get back to my machine.
Here is a reproducible example (tested just now).

file - `graph_exporter.py`

```python
from time import perf_counter

import torch
from diffusers import (
    AutoPipelineForText2Image,
    AutoPipelineForImage2Image,
    AutoencoderTiny,
)
import oneflow as flow
from onediff.schedulers import EulerDiscreteScheduler
from onediffx import compile_pipe, save_pipe
from onediffx.lora import load_and_fuse_lora, unfuse_lora

queue = []
queue.extend(
    [
        {
            "prompt": "3/4 shot, candid photograph of a beautiful 30 year old redhead woman with messy dark hair, peacefully sleeping in her bed, night, dark, light from window, dark shadows, masterpiece, uhd, moody",
            "seed": 877866765,
        }
    ]
)

vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")

scheduler_base = EulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler"
)
base = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    use_safetensors=True,
    scheduler=scheduler_base,
    torch_dtype=torch.float16,
    variant="fp16",
    vae=vae,
).to("cuda")

scheduler_refiner = EulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", subfolder="scheduler"
)
refiner = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    use_safetensors=True,
    torch_dtype=torch.float16,
    scheduler=scheduler_refiner,
    variant="fp16",
    vae=vae,
).to("cuda")

base = compile_pipe(base)
refiner = compile_pipe(refiner)

with flow.autocast("cuda"):
    for generation in queue:
        image = base(generation["prompt"], output_type="latent").images
        refiner(generation["prompt"], image=image)

save_pipe(base, "base_graph")
save_pipe(refiner, "refiner_graph")
```

file - `graph_loader.py`

```python
from time import perf_counter

import torch
from diffusers import (
    AutoPipelineForText2Image,
    AutoPipelineForImage2Image,
    AutoencoderTiny,
)
import oneflow as flow
from onediff.schedulers import EulerDiscreteScheduler
from onediffx import compile_pipe, load_pipe

queue = []
queue.extend(
    [
        {
            "prompt": "3/4 shot, candid photograph of a beautiful 30 year old redhead woman with messy dark hair, peacefully sleeping in her bed, night, dark, light from window, dark shadows, masterpiece, uhd, moody",
            "seed": 877866765,
        }
    ]
)

vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")

scheduler_base = EulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler"
)
base = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    use_safetensors=True,
    scheduler=scheduler_base,
    torch_dtype=torch.float16,
    variant="fp16",
    vae=vae,
).to("cuda")

scheduler_refiner = EulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", subfolder="scheduler"
)
refiner = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    use_safetensors=True,
    torch_dtype=torch.float16,
    scheduler=scheduler_refiner,
    variant="fp16",
    vae=vae,
).to("cuda")

base = compile_pipe(base)
refiner = compile_pipe(refiner)

print("Loading pipes")
load_pipe(base, "base_graph")
print("Loaded base pipe")
load_pipe(refiner, "refiner_graph")

with flow.autocast("cuda"):
    for generation in queue:
        image = base(generation["prompt"], output_type="latent").images
        refiner(generation["prompt"], image=image)
```

File system - the compiled graphs: (screenshot)

Error when trying to load the refiner graph: (screenshot)

EDIT: Added image of filesystem.
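Note that `load_pipe` expects the saved graph to already exist on disk (the exporter must have run first). A minimal stdlib guard for that precondition, assuming `save_pipe` writes the graph under a path named by its second argument; `maybe_load` is a hypothetical helper, not part of onediffx:

```python
from pathlib import Path

def maybe_load(pipe, graph_dir, load_fn):
    """Call load_fn(pipe, graph_dir) only when the saved graph path exists.

    Returns True if the graph was loaded, False if it fell back to a
    fresh compile (i.e. the path was missing).
    """
    if Path(graph_dir).exists():
        load_fn(pipe, graph_dir)
        return True
    return False
```

This lets the loader script degrade gracefully, e.g. `maybe_load(refiner, "refiner_graph", load_pipe)` instead of an unconditional `load_pipe` call.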
Thanks, I have run your scripts separately, but didn't reproduce the error. The OneFlow version I use is:

The OneDiff version I use is:

Could you first check the versions you are using? If the error remains, please update me.
Does the issue remain in your current environment? I tried both of the two scripts (save/load) you provided, and both work well.
Yeah, the issue persists.
I have reproduced the issue. I will try to fix it. |
I have identified the problem, but I have not yet figured out how to fix this bug in OneDiff. As a workaround, give the refiner its own VAE instance (`vae2` below) instead of sharing `vae` with the base pipeline:

```python
from time import perf_counter

import torch
from diffusers import (
    AutoPipelineForText2Image,
    AutoPipelineForImage2Image,
    AutoencoderTiny,
)
import oneflow as flow
from onediff.schedulers import EulerDiscreteScheduler
from onediffx import compile_pipe, load_pipe

queue = []
queue.extend(
    [
        {
            "prompt": "3/4 shot, candid photograph of a beautiful 30 year old redhead woman with messy dark hair, peacefully sleeping in her bed, night, dark, light from window, dark shadows, masterpiece, uhd, moody",
            "seed": 877866765,
        }
    ]
)

vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")

# A second, separate VAE instance for the refiner.
vae2 = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")

scheduler_base = EulerDiscreteScheduler.from_pretrained(
    "/share_nfs/hf_models/stable-diffusion-xl-base-1.0", subfolder="scheduler"
)
base = AutoPipelineForText2Image.from_pretrained(
    "/share_nfs/hf_models/stable-diffusion-xl-base-1.0",
    use_safetensors=True,
    scheduler=scheduler_base,
    torch_dtype=torch.float16,
    variant="fp16",
    vae=vae,
).to("cuda")

scheduler_refiner = EulerDiscreteScheduler.from_pretrained(
    "/share_nfs/hf_models/stable-diffusion-xl-refiner-1.0", subfolder="scheduler"
)
refiner = AutoPipelineForImage2Image.from_pretrained(
    "/share_nfs/hf_models/stable-diffusion-xl-refiner-1.0",
    use_safetensors=True,
    torch_dtype=torch.float16,
    scheduler=scheduler_refiner,
    variant="fp16",
    vae=vae2,
).to("cuda")

base = compile_pipe(base)
refiner = compile_pipe(refiner)

print("Loading pipes")
load_pipe(base, "base_graph")
print("Loaded base pipe")
load_pipe(refiner, "refiner_graph")

with flow.autocast("cuda"):
    for generation in queue:
        image = base(generation["prompt"], output_type="latent").images
        refiner(generation["prompt"], image=image)
```

A more general solution will be added to OneDiff and will be transparent to the user.
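One plausible reading of the workaround (a guess at the failure mode, not OneDiff's actual internals) is that sharing a single module object between two compiled pipelines confuses graph bookkeeping. The toy sketch below uses a purely hypothetical `GraphStore` to show how a cache keyed by submodule identity would exhibit exactly this save/load mismatch when one VAE object is shared:

```python
class GraphStore:
    """Toy store keyed only by submodule identity (hypothetical, for illustration)."""

    def __init__(self):
        self.saved = {}

    def save(self, graph_name, submodule):
        # Sharing one submodule object means both pipelines hit the same key.
        self.saved[id(submodule)] = graph_name

    def lookup(self, submodule):
        return self.saved.get(id(submodule))


shared_vae = object()
store = GraphStore()
store.save("base_graph", shared_vae)
store.save("refiner_graph", shared_vae)  # clobbers the base entry
print(store.lookup(shared_vae))          # the base graph entry is gone

vae_a, vae_b = object(), object()
store2 = GraphStore()
store2.save("base_graph", vae_a)
store2.save("refiner_graph", vae_b)      # separate objects, separate keys
print(store2.lookup(vae_a))
```

With two distinct VAE instances each pipeline keeps its own entry, matching the behavior of the workaround script.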
Understood. This works for me now. You may close the issue now if you want :)
Describe the bug

Able to save the Refiner graph, but unable to load it. While trying to load the compiled refiner graph via onediffx, it errors out.

Your environment

OS: Ubuntu, Python 3.11

How To Reproduce

Steps to reproduce the behavior (code or script): save the refiner graph, then try to load it.

The complete error message