Hi, thanks again for the great effort. Have you tried baking the texture in image space during the diffusion steps? Why do you choose to bake the texture directly in latent space, rather than decoding to image space, baking, and then encoding back? The latter is more intuitive to me, similar to how SyncDiffusion computes its LPIPS loss in image space.
Also, when I tried texturing directly in latent space I got fuzzy results. Could you help me analyze the difference?
[Image: Direct baking]
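If it helps make the question concrete, here is a minimal sketch of the roundtrip I have in mind, assuming a Stable-Diffusion-style VAE from diffusers; `bake_views_to_texture` and `render_views` are hypothetical stand-ins for whatever UV baking/rendering backend is used:

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

@torch.no_grad()
def bake_in_image_space(latents, render_views, bake_views_to_texture):
    """One sync step: decode per-view latents to RGB, bake the RGB views
    into a shared UV texture, re-render each view, and encode back."""
    # latents: (num_views, 4, H/8, W/8), scaled like SD diffusion latents
    images = vae.decode(latents / vae.config.scaling_factor).sample  # (V, 3, H, W)
    texture = bake_views_to_texture(images)   # hypothetical: fuse views into one UV map
    rebaked = render_views(texture)           # hypothetical: re-render every camera view
    return vae.encode(rebaked).latent_dist.mean * vae.config.scaling_factor
```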
Could you explain in detail your experiment of "directly texturing in latent space"? Do you mean encoding an RGB texture into latent space, applying it to the surface of the object, rendering a latent camera view, and then decoding that view to RGB?
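For concreteness, here is a minimal sketch of the pipeline I am guessing at, assuming the same diffusers VAE as above; the `grid_sample`-based UV lookup is an illustrative assumption, not your actual code:

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

@torch.no_grad()
def render_latent_view(rgb_texture, uv_grid):
    # rgb_texture: (1, 3, Ht, Wt) in [-1, 1]; uv_grid: (1, Hv, Wv, 2) in [-1, 1]
    latent_texture = vae.encode(rgb_texture).latent_dist.mean  # (1, 4, Ht/8, Wt/8)
    # Bilinear UV lookup performed on latent channels. This interpolation is a
    # likely source of the fuzziness: VAE latents are not per-texel colors, so
    # blending them linearly does not correspond to blending RGB values.
    latent_view = F.grid_sample(latent_texture, uv_grid, align_corners=False)
    return vae.decode(latent_view).sample  # decoded RGB view, (1, 3, Hv*8, Wv*8)
```

If your experiment looks roughly like this, the interpolation step above is the first place I would look, since the latent texture is also 8x lower resolution than the RGB one.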