Commit 4517636
fewer bugs,
A fix for large prompts, where we hit an issue when going for speed: torch.cat cannot concatenate tensors of different sizes, so the batched path fails. Maybe someday I will find out how to fix that properly; for now we check the sizes and, if they differ, fall back to the slow but safe variant.
Werner Oswald committed Feb 27, 2023
1 parent e306d86 commit 4517636
Showing 1 changed file with 5 additions and 2 deletions.
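
To illustrate the failure mode described above, here is a minimal sketch. The embedding shapes are hypothetical (not taken from the repo), but they show why `torch.cat` rejects prompts of different sizes and why a simple size check is enough to choose the path:

```python
import torch

# Hypothetical conditioning tensors: a text embedding for a normal prompt
# and one for a large prompt that spans two 77-token chunks.
uncond = torch.randn(1, 77, 768)
cond = torch.randn(1, 154, 768)

# torch.cat requires every non-concatenated dimension to match, so
# batching these along dim 0 raises a RuntimeError.
try:
    torch.cat([uncond, cond])
except RuntimeError as err:
    print("fast path unavailable:", err)

# The commit's guard: only take the fast (batched) path when sizes agree.
if cond.size() == uncond.size():
    cond_in = torch.cat([uncond, cond])  # one batched model call
else:
    ...  # fall back to two separate model calls
```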
backend/deforum/six/model_wrap.py
```diff
@@ -155,14 +155,17 @@ def _cfg_model(x, sigma, cond, **kwargs):
         # No conditioning
         else:
             # calculate cond and uncond simultaneously
-            if self.cond_uncond_sync:
+            # this only works with prompts of the same size,
+            # so we check, and if the sizes differ we use the slower variant
+            c_size = cond.size()
+            uc_size = uncond.size()
+            if self.cond_uncond_sync and c_size == uc_size:
                 cond_in = torch.cat([uncond, cond])
                 x0 = _cfg_model(x, sigma, cond=cond_in)
             else:
                 uncond = self.inner_model(x, sigma, cond=uncond)
                 cond = self.inner_model(x, sigma, cond=cond)
                 x0 = uncond + (cond - uncond) * cond_scale
 
         return x0
 
     def make_cond_fn(self, loss_fn, scale):
```
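
For context, a self-contained sketch of the speed trick itself, under stated assumptions: `inner_model` here is a toy stand-in for the real denoiser (the actual model, argument shapes, and `cond_uncond_sync` wiring differ in the repo), but the fast/slow split and the final guidance formula mirror the diff:

```python
import torch

# Toy stand-in for the real denoiser: any map (x, sigma, cond) -> tensor
# shaped like x will do for this sketch; the repo's inner_model differs.
def inner_model(x, sigma, cond):
    return x * (1.0 - sigma) + cond * sigma

def cfg_denoise(x, sigma, cond, uncond, cond_scale, cond_uncond_sync=True):
    if cond_uncond_sync and cond.size() == uncond.size():
        # Fast path: run one doubled batch through the model, then split
        # the result back into its unconditional and conditional halves.
        x_in = torch.cat([x, x])
        cond_in = torch.cat([uncond, cond])
        uncond_out, cond_out = inner_model(x_in, sigma, cond_in).chunk(2)
    else:
        # Slow but safe path: two separate calls, the only option when
        # cond and uncond have different sizes.
        uncond_out = inner_model(x, sigma, uncond)
        cond_out = inner_model(x, sigma, cond)
    # Classifier-free guidance: move the prediction away from the
    # unconditional output, toward the conditional one.
    return uncond_out + (cond_out - uncond_out) * cond_scale

# Both paths agree when the sizes match:
x = torch.randn(1, 4, 8, 8)
cond, uncond = torch.randn(2, 1, 4, 8, 8)
fast = cfg_denoise(x, 0.5, cond, uncond, 7.5, cond_uncond_sync=True)
slow = cfg_denoise(x, 0.5, cond, uncond, 7.5, cond_uncond_sync=False)
assert torch.allclose(fast, slow)
```

The trade is one model call instead of two per sampling step; the diff simply refuses the batched path whenever the size check fails.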
