Repository for the project aimed at confirming that a single-channel recording of the echoes generated by a spiked sound wave conveys enough information to map the 3D arrangement (or at least the depth) of the elements in a closed scene.
To this end, a graphical user interface is developed to calibrate a stereo camera pair and generate live depth maps, together with a graphical utility to create pairs of sound pulse emissions/echo recordings and the corresponding ground-truth depth maps. Finally, several neural networks are successfully trained to transfer the depth-mapping capability of stereo images to single-channel sound echo recordings.
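As a rough illustration of the data-pair acquisition step described above (not the repository's actual code), the sketch below plays a short spiked pulse while recording its echo on a single channel, and computes a ground-truth depth map from an already rectified stereo pair. The choice of libraries (sounddevice, OpenCV), file names, and all parameters are assumptions made for the example.

```python
"""Minimal sketch (assumed, not the project's implementation) of capturing one
(echo recording, ground-truth depth map) training pair."""
import numpy as np
import sounddevice as sd
import cv2

FS = 44100            # audio sample rate in Hz (assumed)
WINDOW_S = 0.5        # echo listening window in seconds (assumed)

def record_echo():
    """Play a single spiked pulse and record the room response on one channel."""
    pulse = np.zeros(int(FS * WINDOW_S), dtype=np.float32)
    pulse[:8] = 1.0                                   # short spike at the start
    echo = sd.playrec(pulse, samplerate=FS, channels=1)
    sd.wait()                                         # block until done
    return echo[:, 0]

def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Ground-truth depth map from a rectified stereo pair (depth = f * B / d)."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan                # mask invalid matches
    return focal_px * baseline_m / disparity

if __name__ == "__main__":
    echo = record_echo()
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    depth = depth_from_stereo(left, right, focal_px=700.0, baseline_m=0.06)
    np.savez("training_pair.npz", echo=echo, depth=depth)  # one training pair
```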
The full report of the project can be found at:
https://github.com/Oiangu9/Reconstructing_Space_with_Time/blob/main/REPORT_Project.pdf