Hello, I am trying to compute the CLIP directional similarity between my original and edited samples, but I do not have captions for them. I thought of doing something similar to the paper and using an LLM to generate edited captions from an input caption and an editing instruction. Any advice on that? I was also wondering whether the GPT-3 instance you fine-tuned to generate your dataset's captions is openly available. Thanks!
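
For concreteness, this is the metric I have in mind: cosine similarity between the edit direction in CLIP image space and the edit direction in CLIP text space. A minimal sketch, assuming the Hugging Face `transformers` CLIP implementation (the model checkpoint and function names here are just illustrative, not your evaluation code):

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint choice; any CLIP variant with image + text towers works.
model_id = "openai/clip-vit-large-patch14"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

@torch.no_grad()
def directional_similarity(orig_img, edit_img, orig_caption, edit_caption):
    """CLIP directional similarity between an image edit and a caption edit."""
    # Encode both images and both captions with the same CLIP model.
    img_inputs = processor(images=[orig_img, edit_img], return_tensors="pt")
    txt_inputs = processor(text=[orig_caption, edit_caption],
                           return_tensors="pt", padding=True)
    img_feats = model.get_image_features(**img_inputs)
    txt_feats = model.get_text_features(**txt_inputs)
    # Direction of the edit in image space vs. in text space.
    img_dir = img_feats[1] - img_feats[0]
    txt_dir = txt_feats[1] - txt_feats[0]
    return F.cosine_similarity(img_dir, txt_dir, dim=0).item()
```

The missing piece is `edit_caption`: lacking your fine-tuned GPT-3, my plan would be to prompt a general instruction-following LLM with the input caption and the edit instruction and ask it to rewrite the caption accordingly, then feed both captions into the function above.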