Dreambooth + AnimateDiff + ControlNet #3
Hi, we do not finetune DreamBooth/LoRA on the input video frames. We simply replace the Stable Diffusion weights with the personalized model downloaded from CivitAI to obtain different styles.
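For context, here is a minimal sketch of that weight-swap idea using the `diffusers` library. The checkpoint path is hypothetical and this is not the repository's actual loading code, just an illustration of swapping the base model without any finetuning on the video frames:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path: any personalized SD 1.5 checkpoint
# (e.g. a DreamBooth model downloaded from CivitAI) could be used here.
pipe = StableDiffusionPipeline.from_single_file(
    "models/personalized_style.safetensors",
    torch_dtype=torch.float16,
)

# The motion module would then be injected into pipe.unet; only the base
# Stable Diffusion weights are swapped, with no training on the input video.
```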
After you published your paper, I conducted experiments fine-tuning the U-Net together with the motion module, and it yielded promising results for style transfer. Prior to this, I explored LoRA, but the outcomes were less than impressive. I divide the samples into segments of video_length (16 frames) and then train the U-Net with the motion module on these segments as a single batch (see the sketch below). I have observed that the motion module, equipped with a single transformer block, has enough capacity to capture simple motions, but it struggles to memorize shapes due to the absence of convolutional layers. Fine-tuning the U-Net, on the other hand, allows it to retain both textures and shapes.
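A rough sketch of the segmentation and joint-training step described above. Tensor shapes, the `unet`/`motion_module` interfaces, and the loss setup are assumptions for illustration, not the actual AnimateDiff training code:

```python
import torch

VIDEO_LENGTH = 16  # frames per training segment, as described above


def make_segments(frames: torch.Tensor, video_length: int = VIDEO_LENGTH) -> torch.Tensor:
    """Split a (num_frames, C, H, W) tensor into full segments of
    `video_length` frames; any trailing remainder is dropped."""
    num_segments = frames.shape[0] // video_length
    frames = frames[: num_segments * video_length]
    # (num_segments, video_length, C, H, W): each segment is one batch item
    return frames.view(num_segments, video_length, *frames.shape[1:])


def training_step(unet, motion_module, segments, targets, optimizer, loss_fn):
    """One batch over the 16-frame segments. Both the U-Net and the motion
    module are left trainable, so the U-Net can memorize textures/shapes
    while the motion module captures temporal dynamics within each clip."""
    optimizer.zero_grad()
    hidden = unet(segments)        # per-frame spatial features (assumed interface)
    pred = motion_module(hidden)   # temporal mixing across the 16 frames
    loss = loss_fn(pred, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```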
Hi, I would like to ask when MagicEdit is expected to become available for use.
When will MagicEdit be available to use?
As I understand it, you take the following steps:
Can you share the hyperparameters used for fine-tuning?