Ablation Study: Why ControlNets use deep encoder? What if it was lighter? Or even an MLP? #188
lllyasviel announced in Announcements
In 2023, if we want to train an encoder to perform some tasks, we have four basic options as follows:
In our problem, we want to control Stable Diffusion, and the encoder will be trained jointly with the big SD model. Because of this, option (3) requires extremely large computation power and is not practical unless you have as many A100s as EMostaque does. We do not have that, so we can just forget about (3).
Options (1) and (2) are similar and can be merged; they usually have similar performance.
Note that "fine-tuning existing deep encoders" and "training lightweight encoders from scratch" are both reasonable, commonly preferred methods. Which one is "harder" or "easier" to train is a complicated question that even depends on the training environment. We should not presume learning behavior simply by looking at the number of parameters.
But in this post, let us focus on the qualitative differences between these methods once they have been trained successfully.
Candidates
Let us consider these architectures:
ControlNet-Self
Below is the model architecture that we released a while ago as our final solution. It directly uses the encoder of SD. Because it copies the SD encoder itself, let us call it "ControlNet-Self".
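For readers who want a concrete picture, here is a minimal structural sketch of the idea in PyTorch. Everything below is illustrative: the block list, channel sizes, and class names are placeholders, not the released cldm code; the real trainable copy is the full SD U-Net encoder with its time and text conditioning.

```python
import copy
import torch
import torch.nn as nn

class ControlNetSelfSketch(nn.Module):
    """Trainable copy of the SD encoder; its features re-enter SD through zero convs."""

    def __init__(self, sd_encoder_blocks: nn.ModuleList, block_channels):
        super().__init__()
        # "Copies itself": same architecture and same initial weights as the SD encoder.
        self.trainable_copy = copy.deepcopy(sd_encoder_blocks)
        # One zero-initialized 1x1 conv per block output (see "zero convolutions" below).
        self.zero_convs = nn.ModuleList(nn.Conv2d(c, c, kernel_size=1) for c in block_channels)
        for zc in self.zero_convs:
            nn.init.zeros_(zc.weight)
            nn.init.zeros_(zc.bias)

    def forward(self, x, hint):
        residuals, h = [], x + hint
        for block, zc in zip(self.trainable_copy, self.zero_convs):
            h = block(h)
            residuals.append(zc(h))  # added into the locked SD U-Net at the matching layer
        return residuals

# Toy usage with stand-in blocks (the real blocks are the SD U-Net encoder blocks).
blocks = nn.ModuleList(nn.Conv2d(4, 4, 3, padding=1) for _ in range(3))
outs = ControlNetSelfSketch(blocks, [4, 4, 4])(torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64))
print([o.shape for o in outs])
```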
ControlNet-Lite
Below is a typical architecture for "training lightweight encoders from scratch". We just use some simple convolution layers to compute an embedding and inject it into the SD U-Net. Because it has relatively few parameters, let us call it "ControlNet-Lite". The channels of the layers are computed by instantiating the ldm python object.
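As a rough sketch, such a lightweight convolutional encoder could look like the following. The widths, depth, and activation here are my own illustrative choices, not the channel numbers printed from the ldm object.

```python
import torch
import torch.nn as nn

class ControlNetLiteSketch(nn.Module):
    """A small stack of strided convolutions that turns a control map into an embedding."""

    def __init__(self, hint_channels: int = 3, widths=(16, 32, 64, 128)):
        super().__init__()
        layers, in_ch = [], hint_channels
        for w in widths:
            layers += [nn.Conv2d(in_ch, w, 3, stride=2, padding=1), nn.SiLU()]
            in_ch = w
        self.encoder = nn.Sequential(*layers)

    def forward(self, hint: torch.Tensor) -> torch.Tensor:
        # The resulting embedding would then be added to SD U-Net features
        # at the matching resolution.
        return self.encoder(hint)

# Toy usage: a 512x512 scribble map -> a 32x32 embedding.
emb = ControlNetLiteSketch()(torch.randn(1, 3, 512, 512))
print(emb.shape)  # torch.Size([1, 128, 32, 32])
```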
ControlNet-MLP
Below is a more extreme case that just uses a pixel-wise multilayer perceptron (MLP). In recent years, MLPs have suddenly become popular again, and they are actually just 1×1 convolutions. We use average pooling for downsampling; let us call it "ControlNet-MLP". The channels of the layers are computed by instantiating the ldm python object.
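And here is a similar hedged sketch for the pixel-wise MLP variant: 1×1 convolutions act as per-pixel linear layers, with average pooling for downsampling. Again, the widths and depth are illustrative placeholders.

```python
import torch
import torch.nn as nn

class ControlNetMLPSketch(nn.Module):
    """A pixel-wise MLP realized as 1x1 convolutions, downsampled by average pooling."""

    def __init__(self, hint_channels: int = 3, widths=(16, 32, 64, 128)):
        super().__init__()
        layers, in_ch = [], hint_channels
        for w in widths:
            layers += [
                nn.Conv2d(in_ch, w, kernel_size=1),  # per-pixel linear layer
                nn.SiLU(),
                nn.AvgPool2d(kernel_size=2),         # downsample by average pooling
            ]
            in_ch = w
        self.mlp = nn.Sequential(*layers)

    def forward(self, hint: torch.Tensor) -> torch.Tensor:
        return self.mlp(hint)

emb = ControlNetMLPSketch()(torch.randn(1, 3, 512, 512))
print(emb.shape)  # torch.Size([1, 128, 32, 32])
```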
Here We Go!
This house image is just the first search result when I searched "house" on Pinterest. Let us use it as an example:
And this is the synthesized scribble map after the preprocessor (you can use our scribble code to get this):
Then let me show off my prompt engineering skills a bit. I want a house under the winter snow. I will use this prompt:
Prompt:
Professional high-quality wide-angle digital art of a house designed by frank lloyd wright. A delightful winter scene. photorealistic, epic fantasy, dramatic lighting, cinematic, extremely high detail, cinematic lighting, trending on artstation, cgsociety, realistic rendering of Unreal Engine 5, 8k, 4k, HQ, wallpaper
Negative Prompt:
longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality
The "ControlNet-Self" is just our final released ControlNet and you can actually reproduce the results with below parameters. Note that we will just use same random seed 123456 for all experiments and generate 16 images without cherry-picking.
ControlNet-Self Results
ControlNet-Lite Results
ControlNet-MLP Results
Surprise, Surprise
It seems that they all give good results! The only differences are in some aesthetic aspects.
But why? Is the problem of controlling Stable Diffusion so trivial that everything works very well?
Why not turn off the ControlNet and see what happens:
Ah, then the secret trick is clear!
Because my prompts are carefully prepared, even without any control, the standard Stable Diffusion can already generate similar images that share many "overlapping concepts/semantics/shapes" with the input scribble map.
In this case, the ControlNet only needs to influence the shape of the generated images a little to "fit" them to the specified shape.
In this case, it is true that every method can work very well.
In fact, in such an "easy" experimental setting, I believe Sketch-Guided Diffusion or even anisotropic filtering would also work very well to change the shape of objects and fit them to a user-specified structure.
But what about some other cases?
The Non-Prompt Test
Here we must introduce the Non-Prompt Test (NPT), a test that avoids the influence of the prompts and measures the "pure" capability of the ControlNet encoder.
NPT is simple: just remove all prompts (and put the image conditions on the "c" side of the cfg formulation `prd = uc + (c - uc) * cfg_scale` so that the cfg scale can still work). In our user interface, we call this "Guess Mode" because the model seems to guess the contents from the input control maps.
Because no prompt is available, the ControlNet encoder must recognize everything on its own. This is really challenging, and note that all our production-ready ControlNets passed extensive NPT tests before we made them publicly available.
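To make the "c side" trick concrete, here is a minimal sketch of one guess-mode sampling step. Note that `unet` is a stand-in callable and the keyword arguments are illustrative, not the exact cldm API; the point is only where the control enters the cfg formula.

```python
def guess_mode_cfg(unet, x_t, t, empty_prompt_emb, control_hint, cfg_scale=9.0):
    # Unconditional branch: no prompt, no control features.
    uc = unet(x_t, t, context=empty_prompt_emb, control=None)
    # "Conditional" branch: still no prompt, but the control map is applied here,
    # on the "c" side, so the cfg scale can still amplify the effect of the control.
    c = unet(x_t, t, context=empty_prompt_emb, control=control_hint)
    # prd = uc + (c - uc) * cfg_scale
    return uc + (c - uc) * cfg_scale
```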
The "ControlNet-Self" is just our final released ControlNet and you can actually reproduce the results with below parameters. Note that we do not input any prompts.
ControlNet-Self Results
ControlNet-Lite Results
ControlNet-MLP Results
Observations
Now things are much clearer. The difference between the encoders lies in their capability to recognize the contents of the input control maps.
ControlNet-Self has strong recognition capability, so it works well even without prompts.
ControlNet-Lite and ControlNet-MLP are weak in this capability, and they cannot control SD to generate meaningful images without the help of user prompts.
But, is this really important?
The answer depends on your goal.
If your goal is to build a method as robust as the production-ready ControlNets, this capability is important. In a production environment, we never know how strange user prompts will be, and user prompts are unlikely to cover everything in the control maps. We always want the encoder to have some recognition capability.
If your goal is to solve some specific problem in a research project, or if you have very aligned or fixed inputs, then perhaps you may consider some lightweight solution (although I personally think the design of ControlNet-Self can also work well in this case).
But if you want to achieve a system with quality similar to Style2Paints V5, then to the best of my knowledge, ControlNet-Self is the only solution.
Before We End
Now we also know why we need these zero convolutions:
Just imagine that these layers were initialized with noise: a few training steps would immediately destroy the trainable copy, and the risk is very high that you would just be training the already-destroyed trainable copy from scratch again. Obtaining the aforementioned object recognition capability would then require extensive retraining, similar to the amount of training required to produce the Stable Diffusion model itself.
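For reference, a zero convolution is nothing exotic; here is a minimal sketch (the channel count is chosen arbitrarily):

```python
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    """1x1 convolution whose weight and bias start at exactly zero."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

zc = zero_conv(320)
feat = torch.randn(1, 320, 64, 64)
print(zc(feat).abs().max())  # tensor(0.): at initialization the injection is silent,
# so the locked SD features are untouched and the trainable copy is not "destroyed".
# The gradient w.r.t. the zero conv's own weights is non-zero (it depends on the
# input features), so the weights move away from zero and the copy behind them
# gradually starts to contribute.
```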
We also now know why it is important that the ControlNet encoder also receives the prompts:
With this part, the ControlNet encoder's object recognition can be guided by the prompts, so that even when the prompts and the recognized control-map semantics conflict, the user's prompt remains dominant.
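As a rough sketch of that wiring (the signatures are illustrative, not the exact cldm API): the same text context is passed to both the locked U-Net and the trainable control encoder.

```python
def controlled_denoise(locked_unet, control_encoder, x_t, t, text_context, hint):
    # The control features are computed *with* the prompt embedding...
    control_residuals = control_encoder(x_t, hint=hint, timesteps=t, context=text_context)
    # ...and then injected into the locked U-Net, which also sees the same prompt.
    return locked_unet(x_t, timesteps=t, context=text_context, control=control_residuals)
```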
For example, we already know that, without prompts, the model can recognize the house in the house scribble map, but we can still turn it into cakes:
"delicious cakes" using that house scribble map
Finally, note that this field is moving very fast, and we won't be surprised if some method suddenly comes out with just a few parameters that can also recognize objects equally well.
Edit 2023 Feb 27: This post is archived for reference only. Feel free to start new discussions for ideas of designing neural networks.