First version of ControlNet (canny) for Stable Diffusion 2.1 #235
Replies: 3 comments 3 replies
-
Does anyone know why, when using ControlNets for SD 2.1, I always get rather blurry, "low-quality JPEG"-looking outputs? I think my prompts are good, and I'm usually also using some of the most popular negative embeddings. No matter what I do, I cannot get crisp output from SD 2.1 (even with models trained on 832x832). If I disable the ControlNet with those same settings, the outputs are super clear, but with it enabled they lose a lot of that crispness. I don't really know why. Does anyone have an idea of what I might be missing? Or do these models just degrade quality like that? Interestingly, with the same settings on 1.5 I don't have these issues and get crisp results (with different models for 1.5, obviously).
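One cheap experiment is lowering the ControlNet weight and keeping the generation resolution matched to what the base model was trained on. Below is a minimal diffusers sketch along those lines; it assumes the canny checkpoint linked later in this thread is available in diffusers format under `thibaud/controlnet-canny-sd21`, and the 768x768 resolution and repo ids are assumptions, not confirmed here.

```python
# Hedged sketch: reducing ControlNet influence to try to recover sharpness.
# Repo ids and the 768x768 resolution are assumptions for illustration.
import torch
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-canny-sd21", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Build the canny control image at the resolution the base model expects.
source = np.array(Image.open("input.png").convert("RGB").resize((768, 768)))
edges = cv2.Canny(source, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# A lower conditioning scale (default 1.0) often trades some edge adherence
# for crisper output; sweeping roughly 0.5-0.9 is a quick test.
image = pipe(
    prompt="a detailed photo, sharp focus",
    negative_prompt="blurry, jpeg artifacts, low quality",
    image=control_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=0.7,
    height=768,
    width=768,
).images[0]
image.save("output.png")
```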
-
SD 1.5 and SD 2.1 have different behaviors.
-
@thibaudart Could you please let me know how you trained the ControlNet model for SD 2.1? The official repo uses OpenAI's LDM. How did you migrate it to Stability AI's SD 2.1?
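For reference, one way to do this without the original training codebase (not necessarily how the released checkpoint was actually trained) is to initialize the ControlNet branch as a trainable copy of the SD 2.1 UNet encoder and fine-tune it on (image, canny edge, caption) triples. A minimal sketch using diffusers, with the dataset and training loop omitted, might look like this:

```python
# Hedged sketch: initializing a ControlNet from the SD 2.1 UNet with diffusers.
# This is one possible approach, not the confirmed method behind the released
# checkpoint; the dataset and full training loop are assumed/omitted.
import torch
from diffusers import ControlNetModel, UNet2DConditionModel

base = "stabilityai/stable-diffusion-2-1"

# Copy the SD 2.1 UNet encoder weights into a fresh ControlNet, mirroring the
# "trainable copy" idea from the original ControlNet paper.
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
controlnet = ControlNetModel.from_unet(unet)

# Only the ControlNet branch is trained; the base UNet stays frozen.
unet.requires_grad_(False)
controlnet.train()
optimizer = torch.optim.AdamW(controlnet.parameters(), lr=1e-5)
# ...from here, a standard diffusion training loop over (image, canny edge,
# caption) triples, e.g. diffusers' examples/controlnet/train_controlnet.py
```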
-
I just released ControlNet (canny) for SD 2.1:
https://twitter.com/thibaudz/status/1632877005866663937
https://huggingface.co/thibaud/controlnet-canny-sd21