
Camera calibration parameters #6

Open
danasko opened this issue Nov 21, 2019 · 8 comments

@danasko commented Nov 21, 2019

Hi,
I'm wondering if you are able to provide your kinect v2 calibration parameters, since I'd like to produce point clouds from the dataset depth images.
Thank you.

@ECHO960 (Owner) commented Nov 21, 2019

Hi, you can check this webpage for visualization: https://struct002.github.io/PKUMMD/

@danasko (Author) commented Nov 21, 2019

I'm sorry, but I did not find the calibration parameters, nor any visualization code, on the webpage you mentioned. Could you specify what you meant by visualization?

@ECHO960 (Owner) commented Nov 21, 2019

Hi, we include a camera matrix M in the FAQ section, but I am not sure where to get a separate matrix for each camera.

I am double-checking this with my team, since I have already graduated. Thanks.

@ECHO960 (Owner) commented Nov 21, 2019

It seems that we have three different sets of skeleton joints and depth/RGB images, one per camera. M maps the skeleton joints to image coordinates, so the same M can be used for all three cameras. Please note that we don't have real-world 3D coordinates for the skeleton joints.
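
For illustration, a minimal NumPy sketch of that mapping, assuming the joints are given in camera coordinates and M is the 3x3 intrinsic matrix from the FAQ (the values below are placeholders, not the published M):

```python
import numpy as np

# Placeholder intrinsics -- substitute the M published in the PKU-MMD FAQ.
M = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

def project_joints(joints_3d, M):
    """Project Nx3 camera-space joints to Nx2 pixel coordinates."""
    uvw = joints_3d @ M.T            # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:]   # perspective divide by depth
```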

@danasko (Author) commented Nov 21, 2019

So I guess that, since you don't have the intrinsic (focal length and principal point) and extrinsic (rotation matrix and translation vector) parameters of your cameras, I won't be able to get point clouds in real-world coordinates from your depth data.
Anyway, thank you for the info.
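
For reference, if depth intrinsics were available, back-projecting a depth image into a camera-space point cloud would only take a few lines. A sketch, assuming a pinhole model, a depth map in metres, and placeholder parameters fx, fy, cx, cy:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (in metres) into an Nx3 point cloud.

    Kinect v2 depth frames come in millimetres; divide by 1000 first.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids, shape (h, w)
    x = (u - cx) * depth / fx   # invert the pinhole projection
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels with no depth reading
```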

@ECHO960 (Owner) commented Nov 21, 2019

Well, the M I mentioned before is the camera intrinsic matrix. As for the extrinsic matrix, it is hard to provide one because we slightly changed the camera positions during capture (between sequences). So if you want to project the 3D point cloud, for the first view you can simply use M[I|0], but you would need to estimate R and t for the second and third views, sorry.
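
A sketch of that projection, plus one possible way to estimate R and t for the other views. The alignment step is an assumption on my part, not something the dataset provides: if corresponding skeleton joints are available in two views' camera coordinates, a Kabsch/Procrustes fit recovers the rigid transform between them.

```python
import numpy as np

def projection_matrix(M, R=np.eye(3), t=np.zeros(3)):
    """Build the 3x4 projection P = M [R | t]; view 1 uses P = M [I | 0]."""
    return M @ np.hstack([R, t.reshape(3, 1)])

def estimate_rigid_transform(src, dst):
    """Kabsch fit: find R, t minimising ||R @ src_i + t - dst_i|| over Nx3 points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```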

@ECHO960 (Owner) commented Nov 21, 2019

Wait, that intrinsic matrix is for the RGB camera; let me see if I can get the one for the depth camera.

@Hunger-Prevails commented Mar 5, 2021

> Wait, that intrinsic matrix is for the RGB camera; let me see if I can get the one for the depth camera.

Hi, the M you provided on https://struct002.github.io/PKUMMD/ is for the RGB cameras. Since I'm trying to project camera-space 3D coordinates onto the respective depth images, I would instead need the intrinsic matrices of the depth cameras. How could I obtain them?

Thanks for maintaining this dataset, by the way; it's hugely helpful for my thesis.
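
One possible workaround in the meantime: libfreenect2 ships nominal (non-unit-specific) intrinsics for the Kinect v2 depth/IR camera (512x424), which may be close enough for rough projection. Treat the values below as an assumption, not official calibration for this dataset:

```python
import numpy as np

# Nominal Kinect v2 depth-camera intrinsics (libfreenect2 defaults).
# Assumption: each physical unit's factory calibration differs slightly.
K_depth = np.array([[365.456,   0.0,   254.878],
                    [  0.0,   365.456, 205.395],
                    [  0.0,     0.0,     1.0  ]])

def project_to_depth_image(points_3d, K=K_depth):
    """Project Nx3 depth-camera-space points to Nx2 pixel coordinates."""
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:]
```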
