I tried to perform zero-shot depth estimation with Intel MiDaS on panoramic 360° images (stitched from multiple 2D images). Surprisingly, it does NOT work on the panoramas, although it works on the individual images.
Can I use your method to train on my 360° panoramic data in a self-supervised way (without any labels), using only the raw images?
Thanks for this amazing project!
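One plausible reason a perspective-trained model like MiDaS struggles on equirectangular panoramas (a sketch, not something confirmed in this thread): in an equirectangular projection the horizontal sampling density varies with latitude, so local image geometry away from the equator looks nothing like an ordinary pinhole image. The minimal sketch below (the function name is made up for illustration) computes the per-row horizontal stretch factor:

```python
import numpy as np

def equirect_horizontal_stretch(height):
    """Per-row horizontal stretch factor of an equirectangular panorama.

    Row i maps to a latitude in (-pi/2, pi/2); the pixels on that row cover
    a circle whose circumference is proportional to cos(latitude), so each
    pixel is stretched by 1/cos(latitude) relative to the equator.
    """
    lats = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2
    return 1.0 / np.cos(lats)

stretch = equirect_horizontal_stretch(512)
print(stretch[256])  # close to 1.0 near the equator
print(stretch[25])   # several times larger near the pole
```

Near the poles the stretch blows up, which is one reason people often resample panoramas into cubemap faces (ordinary perspective views) before feeding them to a perspective-trained depth network.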
Sorry for the late reply. I haven't tried this dataset. In my experience, self-supervised monocular depth estimation is better suited to autonomous driving scenes, where the camera motion has only 3 DoF (X, Y, yaw).