
How to visualize with only RGB and Depth? #68

Open
snijders-tjm opened this issue Mar 9, 2021 · 4 comments

Comments

@snijders-tjm

Hello,

Thank you for publishing your code, it is very interesting.

From your paper I read that PVN3D only needs RGB and depth for pose estimation. Now I want to test this on my own data, using objects from either LineMOD or YCB (so that I can use the pretrained model) with only RGB and depth images.

Is it possible to, for instance, alter demo.py to work with only that information, or does that require more adaptations? If so, how could I do that?

Thank you in advance!

@ethnhe
Owner

ethnhe commented Mar 13, 2021

Yes, it's possible. You can modify the dataset preprocessing scripts, datasets/ycb/ycb_dataset.py or datasets/linemod/lm_dataset.py, to preprocess your own RGB-D images and then feed them to the demo.py script to get the result. You also need to replace the intrinsic matrix in these two scripts with the intrinsic matrix of your own camera.
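For reference, here is a minimal sketch of that kind of preprocessing: load an RGB and a depth image, set your camera's intrinsic matrix, and back-project the depth map into a point cloud. The file names, depth scale, and intrinsic values below are placeholders for your own setup, not the repository's actual code.

```python
# Minimal RGB-D preprocessing sketch; K and DEPTH_SCALE are placeholders
# for your own camera, not the YCB/LineMOD values.
import cv2
import numpy as np

K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)
DEPTH_SCALE = 1000.0  # depth stored in millimetres -> metres

def load_rgbd(rgb_path, depth_path):
    rgb = cv2.imread(rgb_path)[:, :, ::-1].copy()          # BGR -> RGB
    depth = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)   # 16-bit depth map
    return rgb, depth.astype(np.float32) / DEPTH_SCALE

def depth_to_pointcloud(depth, K):
    # Back-project every valid depth pixel into camera coordinates.
    h, w = depth.shape
    xmap, ymap = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (xmap - K[0, 2]) * z / K[0, 0]
    y = (ymap - K[1, 2]) * z / K[1, 1]
    cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return cloud[cloud[:, 2] > 0]  # drop pixels with no depth reading
```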

@hyg2sunshine

@ethnhe I tried to change demo.py and ycb_dataset.py, but when I put only color.png and depth.png in the test image folder, the following error is reported:
Traceback (most recent call last):
  File "demo_test.py", line 170, in <module>
    main()
  File "demo_test.py", line 161, in main
    enumerate(test_loader), leave=False, desc="val"
  File "/home/hyg/.local/lib/python3.6/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 81, in default_collate
    raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>

The frames are only processed normally when I also add the corresponding label.png and meta.mat under the same path.
So is there anything else that needs to be modified? I hope you can help me with this, thank you very much!
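For context, that final NoneType error is what torch's default_collate raises when a dataset item is None, which usually means the modified __getitem__ returned None because label.png or meta.mat could not be loaded. One possible workaround for inference-only data is to synthesize the few fields normally read from those files; the sketch below is an assumption about what is needed (key names follow the YCB-Video convention), not a tested patch to ycb_dataset.py.

```python
# Hypothetical helper: build the fields that the loader would normally read
# from meta.mat / label.png, so __getitem__ never has to return None when
# only color.png and depth.png exist.
import numpy as np

def make_inference_fields(depth_shape, K, factor_depth=10000.0):
    """K is your camera's intrinsic matrix; factor_depth is the depth unit
    divisor (YCB-Video stores depth such that depth / 10000 is metres)."""
    return {
        "intrinsic_matrix": K.astype(np.float32),        # normally from meta.mat
        "factor_depth": np.float32(factor_depth),        # normally from meta.mat
        "label": np.zeros(depth_shape, dtype=np.uint8),  # dummy mask; ground truth
                                                         # is only needed for evaluation
    }
```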

@andreazuna89

Hi all! Were you able to run the demo script with your own RGB-D data? How can we generate the meta data for our own data? Is there a way to use only RGB-D data and a label file as input? The label file can be generated easily, but the meta data is not so easy.

Thanks
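If you want to keep the loader unchanged and just provide a meta file, one option is to write a minimal meta.mat yourself. The sketch below uses the YCB-Video key names (intrinsic_matrix, factor_depth, cls_indexes); ground-truth poses are only needed for evaluation, so they can be left out of a pure inference run. Treat the exact set of required keys as an assumption to check against the loader you are using.

```python
# Sketch: write a minimal meta.mat for one of your own frames.
import numpy as np
import scipy.io as scio

K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])   # your camera's intrinsics
cls_indexes = np.array([[1], [5]])      # YCB class ids expected in the frame

scio.savemat("000001-meta.mat", {
    "intrinsic_matrix": K,
    "factor_depth": np.array([[10000.0]]),  # depth units per metre (YCB-Video uses 10000)
    "cls_indexes": cls_indexes,
})
```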

@ghost

ghost commented Apr 21, 2022

Hey @snijders-tjm @hyg2sunshine, did you succeed?
I would love to get feedback on this!
Thanks
