Hello! Your work is really exciting. However, I have a question that I hope you can answer.
For FID (Fréchet Inception Distance) and KID (Kernel Inception Distance), generated images are compared against real images. In this paper's evaluation, where do the real images come from? For example, if I generate a texture image of a backpack from text, where should I obtain the real images for comparison? Are they from the Objaverse dataset?
Yes, the ground truth (GT) distribution comes from the Objaverse dataset.
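For reference, this is not the paper's evaluation script, but the standard FID/KID formulas can be sketched in a few lines of numpy/scipy, assuming you have already extracted Inception features for both the generated textures and the Objaverse GT renders (the feature-extraction step is omitted here):

```python
import numpy as np
from scipy import linalg


def fid(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_real, feats_gen: (N, D) arrays of Inception features.
    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
    """
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))


def kid(feats_real, feats_gen):
    """Unbiased MMD^2 with the polynomial kernel k(x, y) = (x.y / D + 1)^3."""
    d = feats_real.shape[1]
    k = lambda a, b: (a @ b.T / d + 1.0) ** 3
    kxx, kyy, kxy = k(feats_real, feats_real), k(feats_gen, feats_gen), k(feats_real, feats_gen)
    m, n = len(feats_real), len(feats_gen)
    return float(
        (kxx.sum() - np.trace(kxx)) / (m * (m - 1))   # off-diagonal mean, real-real
        + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))  # off-diagonal mean, gen-gen
        - 2.0 * kxy.mean()                             # cross term
    )
```

In practice KID is usually reported as the mean over several random feature subsets, and both metrics depend heavily on which Inception checkpoint and preprocessing you use, so comparing numbers across papers requires matching the full pipeline.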
Thank you for your reply. I read in the paper that you performed UV unwrapping on the 3D objects in the dataset to obtain their texture maps. I noticed that the 3D objects on Hugging Face are all in GLB format. Could you tell me which software or scripts you used to carry out the UV unwrapping and extract the texture maps? Thank you.
Hello, is there any plan to upload the subset of the Objaverse dataset used for validating the model, as shown in the paper? Thank you.