
Does RePaintPipeline work with Stable Diffusion? #1213

Closed
@FBehrad

Description


Hello,
First of all, thank you so much for adding RePaintPipeline. This pipeline works much better than Stable Diffusion inpainting when I use DDPMs (such as ddpm-ema-bedroom-256 and ddpm-bedroom-256). However, when I use CompVis/stable-diffusion-v1-4, I run into errors. For example, I get the following:

TypeError: set_timesteps() takes from 2 to 3 positional arguments but 5 were given

I wonder, is it possible to use Stable Diffusion as its generator?
Here is my code:

import torch
from diffusers import LMSDiscreteScheduler, RePaintPipeline

# img and msk: the input image and its inpainting mask, loaded beforehand
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)

pipe = RePaintPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler)
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    original_image=img,
    mask_image=msk,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,
    jump_n_sample=10,
    generator=generator,
)
inpainted_image = output.images[0]
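
My guess (I may be wrong) is that RePaintPipeline forwards jump_length and jump_n_sample to scheduler.set_timesteps, which RePaintScheduler accepts but LMSDiscreteScheduler does not, so the call fails with the TypeError above. A minimal sketch of what I think is happening, using just the two schedulers:

from diffusers import LMSDiscreteScheduler, RePaintScheduler

# RePaintScheduler.set_timesteps takes the extra RePaint arguments, so this works:
RePaintScheduler().set_timesteps(250, 10, 10, "cpu")

# LMSDiscreteScheduler.set_timesteps only takes (num_inference_steps, device),
# so the same call raises the TypeError shown above:
lms = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
)
lms.set_timesteps(250, 10, 10, "cpu")  # TypeError: set_timesteps() takes from 2 to 3 positional arguments but 5 were given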
