[Fix] Fix KNet IterativeDecodeHead bug in dev-1.x branch #2334
Conversation
Codecov Report
Base: 83.64% // Head: 83.65% // Increases project coverage by +0.01%.

Additional details and impacted files:
@@ Coverage Diff @@
## dev-1.x #2334 +/- ##
===========================================
+ Coverage 83.64% 83.65% +0.01%
===========================================
Files 141 141
Lines 7973 7974 +1
Branches 1193 1193
===========================================
+ Hits 6669 6671 +2
+ Misses 1115 1114 -1
Partials 189 189
☔ View full report at Codecov.
[Fix] Fix KNet IterativeDecodeHead bug in dev-1.x branch (#2334)
* [Fix] Fix KNet IterativeDecodeHead bug in dev-1.x branch
* add comment
* delete data link
[Fix] Fix KNet IterativeDecodeHead bug in dev-1.x branch
Fix: #2325
Motivation
In the latest dev-1.x branch, training K-Net raised an error like the one below:
[screenshot: error traceback when training K-Net]
After adding

self.out_channels = self.num_classes

to the IterativeDecodeHead initialization function, this bug is fixed.
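For context, here is a minimal sketch of where the one-line fix lands. It assumes the dev-1.x layout of mmseg/models/decode_heads/knet_head.py; the imports and the neighboring attribute copies are paraphrased, not verbatim source:

```python
# Sketch of IterativeDecodeHead.__init__ in mmseg (dev-1.x), simplified.
from torch import nn

from mmseg.registry import MODELS
from .decode_head import BaseDecodeHead


@MODELS.register_module()
class IterativeDecodeHead(BaseDecodeHead):

    def __init__(self, num_stages, kernel_generate_head, kernel_update_head,
                 **kwargs):
        # IterativeDecodeHead deliberately skips BaseDecodeHead.__init__,
        # so every attribute BaseDecodeHead would normally set has to be
        # mirrored by hand from the wrapped kernel_generate_head.
        super(BaseDecodeHead, self).__init__(**kwargs)
        self.num_stages = num_stages
        self.kernel_generate_head = MODELS.build(kernel_generate_head)
        self.kernel_update_head = nn.ModuleList(
            MODELS.build(head_cfg) for head_cfg in kernel_update_head)
        self.align_corners = self.kernel_generate_head.align_corners
        self.num_classes = self.kernel_generate_head.num_classes
        # The one-line fix from this PR: dev-1.x BaseDecodeHead exposes
        # `out_channels` (which can differ from `num_classes`, e.g. binary
        # segmentation with out_channels=1), and training code reads it.
        # Without this line the attribute is missing on IterativeDecodeHead.
        self.out_channels = self.num_classes
```

This follows the same convention as the copies of align_corners and num_classes above it: because BaseDecodeHead.__init__ is bypassed, any attribute it would normally define must be set explicitly here.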