How did you configure your test set? #30
Thanks. We follow the common configuration for datasets without an official test split: select one frame out of every eight as a test frame. For BungeeNeRF, we use the first 30 frames as the test set. Details in Scaffold-GS/scene/dataset_readers.py Lines 165 to 178 in da97ef8
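The split described above can be sketched as follows. This is a minimal illustration of the stated rule, not a verbatim excerpt of `dataset_readers.py`; the function names and the `llffhold` parameter name are assumptions.

```python
# Sketch of the splits described above (assumed implementation, not the
# actual Scaffold-GS code): every 8th frame becomes a test view, and for
# BungeeNeRF the first 30 frames form the test set.

def split_every_nth(cam_infos, llffhold=8):
    """Hold out every llffhold-th camera as a test view; keep the rest for training."""
    train = [c for i, c in enumerate(cam_infos) if i % llffhold != 0]
    test = [c for i, c in enumerate(cam_infos) if i % llffhold == 0]
    return train, test

def split_bungee(cam_infos, n_test=30):
    """BungeeNeRF convention: first n_test frames are the test set."""
    return cam_infos[n_test:], cam_infos[:n_test]
```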
Hi, I have a follow-up question on this: I see that the appearance embedding is constructed based on the number of training cameras, and that in eval mode the uid of the test camera is used directly to query the learned embedding. If I understand the appearance embedding correctly, it is set up so that view-dependent effects can be better encoded; but since the test cameras and train cameras are different views, their uids mean different things in this respect, so querying the same learned embedding with a test uid seems like it would produce the wrong effect? Thanks
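The concern above can be made concrete with a small sketch. This is a hypothetical illustration (array shapes and the `query_embedding` helper are assumptions, not Scaffold-GS code): the embedding table has one row per training view, so a test camera's uid simply indexes a row that was learned for a different (training) view.

```python
import numpy as np

# Hypothetical per-view appearance embedding table: one learned vector
# per *training* camera.
rng = np.random.default_rng(0)
num_train_views, embed_dim = 100, 32
appearance_table = rng.normal(size=(num_train_views, embed_dim))

def query_embedding(uid):
    # Indexing with a test camera's uid retrieves a vector that was
    # optimized for some training view, not for the test view -- the
    # mismatch the question is pointing at. The modulo only prevents an
    # out-of-range index; it does not make the embedding meaningful.
    return appearance_table[uid % num_train_views]
```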
Thanks for a great paper.
I was wondering how you evaluated the metrics (PSNR, SSIM, etc.) reported in the paper.
Specifically, how many test views did you hold out from each dataset to compute these numbers?
I ask because it doesn't seem to be directly mentioned in the paper.