
VQ-Diffusion #319

Closed
2 tasks done
patrickvonplaten opened this issue Sep 1, 2022 · 20 comments
Comments

@patrickvonplaten
Contributor

Model/Pipeline/Scheduler description

VQ-Diffusion is based on a VQ-VAE whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). It produces significantly better text-to-image generation results than autoregressive models with a similar number of parameters. Compared with previous GAN-based methods, VQ-Diffusion can handle more complex scenes and improves synthesized image quality by a large margin.

https://github.com/microsoft/VQ-Diffusion
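For context, the discrete tokens that the diffusion process operates on come from the VQ-VAE's quantization step. A minimal sketch of that step (my own toy illustration with made-up shapes and names, not the Microsoft code):

```python
import numpy as np

# Toy vector-quantization step of a VQ-VAE (illustrative only):
# each encoder output vector is snapped to its nearest codebook entry,
# producing the grid of discrete tokens that the diffusion model operates on.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))   # 16 codes, 8-dim embeddings (made up)
z_e = rng.normal(size=(4, 4, 8))      # encoder output: a 4x4 latent grid

flat = z_e.reshape(-1, 8)
# squared distance to every code: ||z||^2 - 2 z.c + ||c||^2
d2 = (flat ** 2).sum(1, keepdims=True) - 2 * flat @ codebook.T + (codebook ** 2).sum(1)
tokens = d2.argmin(axis=1).reshape(4, 4)  # discrete indices: the "pixels" being diffused
z_q = codebook[tokens]                    # quantized latents fed to the decoder
```

The diffusion model then corrupts and denoises `tokens` directly, rather than continuous latents as in DDPM.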

Open source status

  • The model implementation is available
  • The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

VQ-Diffusion would be a super cool addition to diffusers. cc @cientgu and @zzctan .

Also cc @patil-suraj here

@unography
Contributor

Hi @patrickvonplaten, would love to take this up!

@patrickvonplaten
Contributor Author

This would be great! Let me know if you need any help :-) To begin with, I think we should try to get it running with the original codebase and then port the code to diffusers.

@patil-suraj
Contributor

Hey @unography awesome! Happy to help here if you have any questions.

@patrickvonplaten
Contributor Author

Any progress here @unography? Do you already have an open PR? :-) Otherwise, let's maybe open it up again to the community.

@345ishaan

Hi, I will be happy to contribute / collaborate on this :)

@unography
Contributor

Hi @patrickvonplaten, unfortunately I've been unable to spend time on this right now due to some other commitments; we can open this up again to the community.

@patrickvonplaten
Contributor Author

No worries! @345ishaan would you be interested in giving it a go?

@345ishaan

@patrickvonplaten Yes, happy to start with this. Do you have any documentation / suggestions / reference CLs on how to quickstart?

@345ishaan

Update: I've been getting familiar with the paper and the authors' code. I also checked how other models are integrated into the diffusers pipeline in inference-only mode, so the plan for the next step is to do the same for VQ-Diffusion using the original implementation.

@pcuenca
Member

pcuenca commented Sep 27, 2022

That's awesome, @345ishaan! Let us know if you need any help :)

@williamberman
Contributor

Hello, super sorry wasn't aware someone was already working on this! I ported the VQVAE for the ITHQ dataset. Would love to help contribute if possible :)

I put up a draft PR #658 for the VQVAE with docs on how to compare it against VQ-diffusion. Is the standard to wait until the whole pipeline is complete before merging anything, or is it ok to incrementally merge functionality? I.e. for VQ-diffusion, it might be easier to get the individual autoencoders to work one at a time in their own commits before moving on to the rest of the model.

Any advice is appreciated, thanks!

@345ishaan

Hmm ok, if you have crossed the finish line, then go ahead! I was mostly working on adding the implementation to diffusers in inference mode. If you need any further help, happy to collaborate.

Going forward, what is the best way to avoid such overlaps? I thought it was via proposing/updating through issues.

@williamberman
Contributor

@345ishaan definitely not over the finish line, just ported the autoencoder for one of the models! Happy to collaborate :)

@345ishaan

Sounds good! I will check your CL. Do you want to chat over Discord?

@williamberman
Contributor

@cientgu @zzctan

Could I have some help parsing q_posterior?

https://github.com/microsoft/VQ-Diffusion/blob/3c98e77f721db7c787b76304fa2c96a36c7b00af/image_synthesis/modeling/transformers/diffusion_transformer.py#L235-L267

I believe it's computing equation 11 in log space, but I still have a few questions. I understand it's adapted from https://github.com/ehoogeboom/multinomial_diffusion/blob/9d907a60536ad793efd6d2a6067b3c3d6ba9fce7/diffusion_utils/diffusion_multinomial.py#L171-L193 which provides the initial derivation that makes sense.

        # q(xt-1 | xt, x0) = q(xt | xt-1, x0) * q(xt-1 | x0) / q(xt | x0)
        # where q(xt | xt-1, x0) = q(xt | xt-1).

However, the later comment is a bit vague :)

        # Note: _NOT_ x_tmin1, which is how the formula is typically used!!!
        # Not very easy to see why this is true. But it is :)
        unnormed_logprobs = log_EV_qxtmin_x0 + self.q_pred_one_timestep(log_x_t, t)

Because it seems like the actual equation it's using is q(xt+1 | xt) * q(xt-1 | x0) / q(xt | x0).
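For reference, here is a toy version of what I believe is being computed, written in plain probability space (my own sketch for a uniform-transition multinomial diffusion; the names `q_pred` and `q_pred_one_timestep` mirror the original code but this is not it — the real implementation also handles the [MASK] token and works in log space):

```python
import numpy as np

K = 4                                # toy number of categories
alphas = np.array([0.9, 0.8, 0.7])   # made-up per-step keep probabilities
alpha_bars = np.cumprod(alphas)

def onehot(i):
    v = np.zeros(K)
    v[i] = 1.0
    return v

def q_pred_one_timestep(x, t):
    # q(x_t | x_{t-1}) for a uniform transition matrix. The matrix is
    # symmetric, which is why evaluating it at x_t (_NOT_ x_{t-1}) still
    # gives q(x_t | x_{t-1} = k) as a function of k -- the trick in the comment.
    return alphas[t] * x + (1.0 - alphas[t]) / K

def q_pred(x0, t):
    # q(x_t | x_0) via the cumulative product alpha_bar_t
    return alpha_bars[t] * x0 + (1.0 - alpha_bars[t]) / K

def q_posterior(x0_idx, xt_idx, t):
    # q(x_{t-1} | x_t, x_0) ∝ q(x_t | x_{t-1}) * q(x_{t-1} | x_0)
    unnormed = q_pred_one_timestep(onehot(xt_idx), t) * q_pred(onehot(x0_idx), t - 1)
    return unnormed / unnormed.sum()

posterior = q_posterior(x0_idx=2, xt_idx=1, t=2)
```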

Additional questions,

  1. Some context on how you're handling masks in q_posterior would be helpful
  2. What is the summation over in equation 11 and how does it map to q_posterior?
  3. I don't see an analog for the lines starting from 262 onward in multinomial diffusion, could you provide some additional context there as well?

Lmk if any of that wasn't clear, thank you!

@345ishaan

@williamberman I will be able to take some tasks today and tomorrow. I just checked your CL, it seems like you ported the vq-vae encoder there. Do you want to chat over discord to split tasks? My username is 345ishaan#9676

@williamberman
Contributor

pinged you in discord @345ishaan!

@williamberman williamberman mentioned this issue Oct 12, 2022
@Zeqiang-Lai

> @williamberman asked above: "Could I have some help parsing q_posterior? [...] it seems like the actual equation it's using is q(xt+1 | xt) * q(xt-1 | x0) / q(xt | x0)."

Have you figured out these questions? I am also confused that the actual computation seems to be q(xt+1 | xt) * q(xt-1 | x0) / q(xt | x0).

@williamberman
Contributor

Hey @Zeqiang-Lai I did actually figure out what was going on here!

This class is heavily commented

# p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) ... p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0)

I reverse engineered it through trial and error and a lot of whiteboard markers!

I don't remember all of it exactly, but the main components are doing the calculation in log space for numerical stability and avoiding a logspace matmul for which there's no memory efficient pytorch kernel. A few of the other components are just cheeky linear algebra.

I later discovered that there's an explanation in the appendix of the multinomial diffusion paper. I didn't read it exhaustively but from skimming, it looks like it's on similar material. https://arxiv.org/pdf/2102.05379.pdf

(screenshot of the relevant derivation from the multinomial diffusion appendix)
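As a rough illustration of the log-space point (my own sketch, not the VQ-Diffusion code): a product like M @ v has to become a logsumexp reduction in log space, and the naive way materializes the full `log_M + log_v` broadcast before reducing, which is exactly why a memory-efficient kernel would matter:

```python
import numpy as np

def logsumexp(a, axis):
    # numerically stable log(sum(exp(a))) along an axis
    m = a.max(axis=axis, keepdims=True)
    return np.log(np.exp(a - m).sum(axis=axis)) + m.squeeze(axis)

def log_matvec(log_M, log_v):
    # log(M @ v) computed entirely in log space. Note that the broadcast
    # log_M + log_v materializes a matrix-sized intermediate -- the
    # memory cost of a "logspace matmul" mentioned above.
    return logsumexp(log_M + log_v[None, :], axis=1)

log_M = np.log(np.array([[0.9, 0.1],
                         [0.2, 0.8]]))
# exp(-800) underflows to 0.0 in float64, so a linear-space computation
# would lose this component entirely; log space preserves it.
log_v = np.array([-800.0, 0.0])
log_result = log_matvec(log_M, log_v)
```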

@williamberman
Contributor

@Zeqiang-Lai if you have any other questions on the math, feel free to shoot me an email wlbberman@gmail.com
