From my observation, the Consistency Decoder replaces the VAE's decoder with a UNet2D module and runs a diffusion process, which is where the large VRAM cost and latency come from. In my tests it used ~16 GB of VRAM when decoding a latent with a target size of 1024×1024.
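A rough back-of-the-envelope sketch of why a full-resolution UNet is so much heavier than latent-space operations. The 8× downsampling factor and 4 latent channels are standard for Stable Diffusion; the 320-channel feature-map width is a hypothetical illustration, since the consistency decoder's actual UNet channel counts may differ:

```python
def activation_bytes(channels: int, height: int, width: int, dtype_bytes: int = 2) -> int:
    """Size in bytes of one activation tensor (batch size 1, float16 by default)."""
    return channels * height * width * dtype_bytes

# Latent for a 1024x1024 image: SD's VAE downsamples 8x and uses 4 latent channels
latent = activation_bytes(4, 1024 // 8, 1024 // 8)

# Hypothetical UNet feature map at full output resolution with 320 channels --
# the real consistency decoder may use different widths, but the scaling holds
unet_fmap = activation_bytes(320, 1024, 1024)

print(f"latent tensor:         {latent / 2**20:.2f} MiB")
print(f"one UNet feature map:  {unet_fmap / 2**30:.2f} GiB")
```

A single full-resolution feature map is thousands of times larger than the latent itself, and a UNet keeps many such maps (plus skip connections) alive at once, which is consistent with the multi-GB decoding cost observed above.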
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Describe the bug
I replaced the original VAE decoder of a Stable Diffusion model with the Consistency Decoder, and CUDA ran out of memory. My question is: how large is the Consistency Decoder compared to the original VAE decoder?
diffusers version: 0.23.0

Reproduction
Decode a large latent (e.g., one targeting a 1024×1024 image) with the Consistency Decoder.
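A minimal sketch of this reproduction, assuming the diffusers 0.23.0 API for `ConsistencyDecoderVAE` and the `openai/consistency-decoder` checkpoint (it downloads weights from the Hub, so it needs network access and a GPU with substantial free VRAM):

```python
import torch
from diffusers import DiffusionPipeline, ConsistencyDecoderVAE

# Swap the pipeline's default VAE for the Consistency Decoder
vae = ConsistencyDecoderVAE.from_pretrained(
    "openai/consistency-decoder", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Decoding the latent for a 1024x1024 image is the step that runs out of memory
image = pipe("a photo of an astronaut", height=1024, width=1024).images[0]
```

With the stock VAE decoder the same call fits comfortably on the same GPU, which isolates the extra memory cost to the Consistency Decoder's UNet.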
Logs
No response
System Info
..
Who can help?
No response