
Difference between the official Sci-Art dataset and mine #69

Open
chufall opened this issue Jul 10, 2023 · 4 comments

Comments

@chufall

chufall commented Jul 10, 2023

Hi, I'm still confused about building my own dataset following the method in the README and the discussion in #6.

To verify whether I'm doing it right, I built the dataset from the Sci-Art photos and compared it with the same set provided at https://storage.cmusatyalab.org/mega-nerf-data/sci-art-pixsfm.tgz

If I'm doing it correctly, the visualization results should be the same.
Here are the results:
1. official

[screenshot: camera pose visualization of the official sci-art-pixsfm set]

2. mine

[screenshot: camera pose visualization of my reconstruction]

You can see that the cameras in the official set are aligned along the X axis (red), but mine are aligned along the Z axis (blue).
So I think there is either a step I'm missing before running the convert script, or something went wrong when using pixsfm.
How should this be done?
Thank you very much!

Sincerely.
QC
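
For reference, below is a minimal sketch of one way such an axis mismatch could be corrected before conversion. This is not part of the Mega-NeRF convert script or pixsfm; it assumes the camera poses are available as an (N, 4, 4) array of camera-to-world matrices and simply left-multiplies them by a global rotation that maps +Z to +X:

```python
import numpy as np

# Hypothetical sketch: re-orient camera-to-world poses so that a set of
# cameras aligned along +Z ends up aligned along +X, matching the official
# sci-art-pixsfm convention described above. Not taken from the Mega-NeRF
# repository; the (N, 4, 4) camera-to-world pose layout is an assumption.

# Rotation about the Y axis by +90 degrees: maps +Z -> +X (and +X -> -Z).
R = np.array([
    [0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
    [-1.0, 0.0, 0.0],
])

T = np.eye(4)
T[:3, :3] = R


def realign(c2w_poses: np.ndarray) -> np.ndarray:
    """Left-multiply every camera-to-world pose by the global rotation T."""
    return np.einsum('ij,njk->nik', T, c2w_poses)


# Example with dummy identity poses (replace with the poses recovered by pixsfm/COLMAP):
poses = np.tile(np.eye(4), (5, 1, 1))
aligned = realign(poses)
print(aligned.shape)  # (5, 4, 4)
```

Whether this matches the exact transform used to produce the official sci-art-pixsfm set is not confirmed in this thread.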

@A-pril

A-pril commented Aug 16, 2023

Hello, I'm also trying to evaluate on the Sci-Art dataset, but there are three authors' datasets on https://vcc.tech/UrbanScene3D and each has three zips. Which one is the final choice? Could you share yours?
Thanks a lot!

@chufall
Author

chufall commented Aug 29, 2023


Hi, use this link to download:
https://github.com/Linxius/UrbanScene3D#urbanscene3d-v1

@lhc991025


Hello, have you successfully customized the dataset and trained on it? Can you share your method?

@AI-slam

AI-slam commented Dec 19, 2023

@lhc991025 I also need to customize the dataset and train on it; maybe we can talk about it. My WeChat is d19965161832
