I have created a series of cameras for the Rubble scene based on the NeRF code, but the result is not good.
Can you share the camera poses for render_images used by the Rubble scene?
Thanks a lot, qc
You may generate the poses backward from the metadata contained in the .pt files, whose filenames correspond to the image filenames.
After you download the Rubble dataset you can find, for example, rubble-pixsfm/train/metadata/000001.pt. If you open it with torch.load you can read the field "c2w", but this pose is scaled so that the whole dataset fits within [0, 1]. So you have to rescale this pose with the "pose_scale_factor" field read from the torch file rubble-pixsfm/coordinates.pt. You also have to change the coordinate frame of the poses; remember that the poses are in the DRB frame, as the author mentioned in the README.md. Also look at https://github.com/cmusatyalab/mega-nerf/blob/main/scripts/colmap_to_mega_nerf.py#L408-L409
I'm not the developer or author of the repository, so I don't know how the poses were manipulated or how they were originally generated.
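The steps above (load the per-image metadata, read "c2w", undo the dataset normalization using coordinates.pt) could be sketched roughly as below. This is only a sketch, not the repository's own code: the field name "origin_drb" and the exact form of the normalization are assumptions inferred from the conversion script linked above, so please verify them against your own .pt files before relying on the result.

```python
import torch

def recover_pose(metadata_path: str, coordinates_path: str) -> torch.Tensor:
    """Return a 3x4 camera-to-world pose at (approximately) the original
    scene scale, undoing Mega-NeRF's normalization.

    Assumption: the normalized pose was produced as
        t_normalized = (t_original - origin_drb) / pose_scale_factor
    so we invert that here. Field names "c2w", "pose_scale_factor" and
    "origin_drb" are assumptions based on the linked conversion script.
    """
    # Per-image metadata, e.g. rubble-pixsfm/train/metadata/000001.pt
    c2w = torch.load(metadata_path, map_location="cpu")["c2w"].clone()
    # Scene-level normalization parameters, e.g. rubble-pixsfm/coordinates.pt
    coords = torch.load(coordinates_path, map_location="cpu")
    scale = coords["pose_scale_factor"]
    origin = coords["origin_drb"]
    # Undo the scaling/offset on the translation column only; the 3x3
    # rotation part is unaffected by a uniform scale and shift.
    c2w[:, 3] = c2w[:, 3] * scale + origin
    # NOTE: the result is still expressed in Mega-NeRF's DRB axis
    # convention; converting to your renderer's frame (e.g. the original
    # NeRF convention) is a separate row/sign permutation not shown here.
    return c2w
```

The rotation columns are left untouched on purpose: a uniform scale and translation of the scene changes only where the camera sits, not which way it points. The remaining DRB-to-your-frame conversion depends on which codebase consumes the poses.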