
Where is valid_paths.json #22

Closed
wzic opened this issue Mar 31, 2023 · 7 comments
@wzic

wzic commented Mar 31, 2023

Hi! Happy to read about your excellent work!

May I know where the file valid_paths.json used for training on Objaverse is? I can only find object-paths.json in the downloaded files.

Thanks.

@ruoshiliu
Member

Hi @wzic, a few hundred (or thousand) of our downloaded files are corrupted or didn't render correctly, so I created valid_paths.json to store the paths to the valid rendering folders.

@wzic
Author

wzic commented Mar 31, 2023

Then what is the content of valid_paths.json? Is it a list of the object ids? What is the format?

@ojmichel

@wzic From what I can tell, it is just a JSON list of the valid object uids.

@buttomnutstoast

@ruoshiliu, it'd be very helpful if you could release the valid_paths.json file; otherwise, we have no way of knowing which objects were used for training and validation.

@ruoshiliu
Member

ruoshiliu commented Apr 3, 2023

valid_paths.json.zip
There you go! The validation set is the last 1 percent of the rows (the first 99 percent are used for training).

P.S. If you want to compare against our model on the Objaverse dataset, please use 105000.ckpt instead of 165000.ckpt, as the latter might have been unintentionally trained on part of the validation set. Since our paper focuses on zero-shot generalization, this didn't really matter for our results, but it does matter when you run in-distribution experiments.
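For anyone reproducing the split: assuming valid_paths.json is a flat JSON list of valid object uids/paths (as described above), a minimal sketch of the 99/1 train/validation split might look like this (the `load_split` helper name is hypothetical):

```python
import json

def load_split(path, val_frac=0.01):
    """Load valid_paths.json (assumed to be a JSON list of valid
    object uids/paths) and split it: first 99% of rows for training,
    last 1% for validation, as described in this thread."""
    with open(path) as f:
        paths = json.load(f)
    n_val = max(1, int(len(paths) * val_frac))  # at least one validation row
    return paths[:-n_val], paths[-n_val:]
```

This is only a sketch of the split rule stated above; check the released file's actual contents before relying on it.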

@VitorGuizilini-TRI

Hi, can you explain in more detail how the evaluation is performed? Specifically, how do you choose which image is used as context and which ones are used for novel view synthesis?

@ruoshiliu
Member

For GSO and RTMV, we render a set of views whose camera poses are randomly sampled. We use the first view as input and the remaining views for evaluation. Since all views are uniformly sampled, the order doesn't make a difference. The same applies to Objaverse, but since our paper focuses on zero-shot performance, we did not run evaluation on Objaverse, which is our training dataset.
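The protocol described above (first view as context, remaining views as targets) can be sketched as follows; `partition_views` is a hypothetical helper, not part of the released code:

```python
def partition_views(view_paths):
    """Partition rendered views per the evaluation protocol described
    above: the first view is the input/context image, and all remaining
    views are novel-view-synthesis targets. Because camera poses are
    uniformly sampled, which view comes 'first' does not matter."""
    if len(view_paths) < 2:
        raise ValueError("need at least one input view and one target view")
    return view_paths[0], view_paths[1:]
```

For example, `partition_views(["v0.png", "v1.png", "v2.png"])` yields `"v0.png"` as the context view and `["v1.png", "v2.png"]` as the evaluation targets.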
