
Preparation NYUdepthv2 #36

Closed
seb-le opened this issue Jul 14, 2022 · 7 comments

seb-le commented Jul 14, 2022

Hi. First of all, thank you for sharing your nice work.

I faced an issue while organizing the nyudepthv2 dataset.

As described in dataset_prepare.md, I tried to run the following:

$ git clone https://github.com/cleinc/bts.git
$ cd bts
$ python utils/download_from_gdrive.py 1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP sync.zip
$ unzip sync.zip

Next, what file should I download from the link to get the standard test set that you mentioned?


Also, where can I get nyu_train.txt and nyu_test.txt?

Thanks.

zhyever (Owner) commented Jul 14, 2022

Thanks for your attention to my work. Please check the splits folder in Monocular-Depth-Estimation-Toolbox. I provide the split files there, following previous depth estimation work.
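For reference, a line of those split files can be parsed like this (a sketch only; the `<rgb> <depth> <focal>` layout is an assumption based on BTS-style split files, and `parse_split_line` is a hypothetical helper, not part of either repo):

```python
def parse_split_line(line):
    # Assumed layout (not confirmed for this repo): "<rgb_path> <depth_path> <focal>"
    # A missing depth map is sometimes recorded as the literal string "None".
    rgb, depth, focal = line.split()
    return {
        "rgb": rgb,
        "depth": None if depth == "None" else depth,
        "focal": float(focal),
    }

sample = "kitchen_0028b/rgb_00045.jpg kitchen_0028b/sync_depth_00045.png 518.8579"
print(parse_split_line(sample))
```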

zhyever (Owner) commented Jul 14, 2022

I remember the test set has already been included in the sync.zip.

seb-le (Author) commented Jul 15, 2022

Thank you for your reply, and I found the splits txt files.

However, I think the test set is not included in sync.zip; instead, it is included in the link and extracted by the BTS source code as follows:

$ cd ~/workspace/bts/utils
### Get official NYU Depth V2 split file
$ wget http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat
### Convert mat file to image files
$ python extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ../../dataset/nyu_depth_v2/official_splits/

Then, we might cut the test set and paste it into our dataset structure.
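That copy step can be sketched as follows (paths are assumptions based on the extraction command above; `DST` is a hypothetical target inside the toolbox's expected dataset layout, so adjust it to wherever this repo looks for the NYU test images):

```shell
# Sketch only: SRC follows the BTS extraction output above; DST is a
# hypothetical location inside this repo's expected dataset structure.
SRC=../../dataset/nyu_depth_v2/official_splits/test
DST=../../dataset/nyu/official_splits/test
mkdir -p "$DST"
cp -r "$SRC"/. "$DST"
```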

Meanwhile, why is the number of training pairs for nyudepthv2 written as 50k in this repo and the papers?

In nyu_train.txt, there are only 24,225 pairs for training.

Is nyu_train.txt correct?
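The pair count can be double-checked with a short script (a sketch; point the path at wherever you saved the toolbox's nyu_train.txt, which reportedly has 24,225 lines):

```python
import os
import tempfile

def count_pairs(path):
    """Count non-empty lines; each line is one image/depth training pair."""
    with open(path) as f:
        return sum(1 for line in f if line.strip())

# Tiny demo with a fake split file standing in for nyu_train.txt.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("a.jpg a.png 518.86\nb.jpg b.png 518.86\n\n")
    tmp = f.name
print(count_pairs(tmp))  # 2
os.remove(tmp)
```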

Thanks,

zhyever (Owner) commented Jul 15, 2022

Thanks a lot for your report. That's a typo in the paper. We use this provided split file to train our models, following previous work. I will zip my NYU dataset and upload it to a drive in the future, so no extra effort will be needed to download or process the dataset using code from other repos.

seb-le (Author) commented Jul 19, 2022

Thank you for your comments.

I have really learned a lot of things from your work and paper.

Thanks.

zhyever (Owner) commented Jul 20, 2022

It's nice to hear that. I hope they are helpful to you. Feel free to re-open this issue for discussions.

zhyever closed this as completed Jul 20, 2022
vuba-blog commented

> It's nice to hear that. I hope they are helpful to you. Feel free to re-open this issue for discussions.

I am sorry, but is the test dataset available now?
