
How to make custom pointcloud data as inference input? #1

Open · YiChenCityU opened this issue Jun 6, 2023 · 15 comments

@YiChenCityU

Hi, congratulations. I want to test the inference code with a point cloud as input. Could you provide some advice? Thanks very much.

@SimonGiebenhain (Owner)

Hi @YiChenCityU,
thanks for your interest.
For the "dummy_dataset" as well as our proposed test set we provide single view data already.

If you want to change some properties of the input for inference you can play araound with scripts.data_processing.generate_single_view_observations.
I used this script to generate the input. Per default it tries to do so for every subject in the test set. But you can simply specifiy what subject and expression you are interested in.
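As a rough illustration of what generating such a single-view observation amounts to (this is not the repository's actual script; the file names, camera placement, and the use of Open3D are assumptions), one can sample points on a full scan and keep only those visible from a single viewpoint:

```python
import numpy as np
import open3d as o3d

# Hypothetical input: a full head scan; the file name is an assumption.
mesh = o3d.io.read_triangle_mesh("full_scan.ply")
pcd = mesh.sample_points_uniformly(number_of_points=100_000)

# Keep only the points visible from a (guessed) frontal camera position,
# using Open3D's hidden point removal.
camera_location = np.array([0.0, 0.0, 3.0])
radius = 100.0  # HPR radius parameter; needs tuning for the scene scale
_, visible_idx = pcd.hidden_point_removal(camera_location, radius)
single_view = pcd.select_by_index(visible_idx)

o3d.io.write_point_cloud("single_view.ply", single_view)
```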

@YiChenCityU (Author)

Thanks very much. What if I only have a point cloud captured with an iPhone? Do I have to provide its expression?

@YiChenCityU (Author)

YiChenCityU commented Jun 7, 2023

[Four screenshots: the input point clouds and the reconstruction results]
This is the point cloud I used, and the result does not resemble it. Do you have any suggestions? The PLY files are below.
https://drive.google.com/file/d/1UYBbR-TkRtgSKJQbuNUnMu4dwdN1kx9a/view?usp=sharing https://drive.google.com/file/d/1A4EJbSUjuAfJ_k8FzmsimsSKBPi1QQ5k/view?usp=sharing

@SimonGiebenhain (Owner)

Hey, cool stuff.

The problem is very likely the coordinate system. NPHM only works if the input is in the expected coordinate system (FLAME coordinate system scaled by a factor of 4).

Therefore, you would first have to align the input point cloud with the FLAME coordinate system. A very simple approach would be a similarity transform from detected 3D landmarks to the landmarks of the FLAME template. Alternatively, you could also first fit FLAME and use the resulting scale, rotation, and translation. In that case, you can separate the head from the torso in the same way as in the NPHM preprocessing; observations on the torso tend to confuse the inference optimization.
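A minimal sketch of that landmark-based alignment, assuming corresponding 3D landmarks are already available for the scan and for the FLAME template (the file names, the landmark source, and the torso-cropping threshold are assumptions, not part of the NPHM code):

```python
import numpy as np

def similarity_transform(src, dst):
    """Umeyama similarity transform: returns (s, R, t) with dst ≈ s * R @ src + t."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - s * R @ src_mean
    return s, R, t

# Hypothetical inputs: 3D landmarks detected on the scan and the corresponding
# landmarks on the FLAME template, in the same order (file names are assumptions).
scan_lms = np.load("scan_landmarks.npy")    # (K, 3)
flame_lms = np.load("flame_landmarks.npy")  # (K, 3), in standard FLAME coordinates

s, R, t = similarity_transform(scan_lms, flame_lms)

# Map the raw scan into FLAME space, then scale by 4
# (NPHM uses the FLAME coordinate system scaled by a factor of 4).
points = np.load("scan_points.npy")         # (N, 3) raw scan, assumption
points = (s * points @ R.T + t) * 4.0

# Rough head/torso separation: drop everything well below the head.
# The threshold is a guess and will need tuning.
points = points[points[:, 1] > -2.0]

np.save("scan_points_nphm.npy", points)
```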

Here is an example mesh from the dataset and one of the provided point clouds to show why the model fails:

[Screenshot: an example dataset mesh overlaid with the provided point cloud, showing the misalignment]

@SimonGiebenhain (Owner)

Actually, the second point cloud aligns better, but it is still noticeably off from the expected canonicalization.

[Screenshot: the second point cloud overlaid with the dataset mesh]

@YiChenCityU (Author)

I will try. Thanks very much.

@xvdp

xvdp commented Jun 8, 2023

I've been trying to unravel the description as well; I didn't get as far as YiChenCityU. It would be wonderful if you could provide a full test example.
If you are concerned about identity, maybe take a point cloud of a statue.

@nsarafianos

Thank you so much @SimonGiebenhain for publishing the code, and congrats on your great work!

Quick question: I have a point cloud in .obj format (lifted from a foreground RGB-D monocular image) that has been transformed into exactly the same space as FLAME, as suggested above. How do you go about fitting NPHM to this particular point cloud?

I'm asking because the example provided uses existing identities (along with their expressions) from the dummy_data, whereas I'm interested in preserving the identity of the point cloud.

Thank you!

@Zvyozdo4ka

@SimonGiebenhain, even with perfect alignment the result did not resemble the identity.

Original files are here.
https://drive.google.com/drive/folders/1cprPG_9AihL4HpYl0lOvZDz7kNbXv8kB?usp=sharing

[Images: the aligned input point cloud and the resulting reconstruction]

@Zvyozdo4ka

> The problem is very likely the coordinate system. NPHM only works if the input is in the expected coordinate system (FLAME coordinate system scaled by a factor of 4).

How did you get the FLAME models? What solution did you employ?

> Therefore, you would first have to align the input point cloud with the FLAME coordinate system. A very simple approach would be a similarity transform from detected 3D landmarks to the landmarks of the FLAME template.

Do you have the code for this alignment, or did you use another method to align the point cloud and FLAME?

> Alternatively, you could also first fit FLAME and use the resulting scale, rotation, and translation. In that case, you can separate the head from the torso in the same way as in the NPHM preprocessing; observations on the torso tend to confuse the inference optimization.

Do you mean that fitting FLAME to the point cloud can give us the same NPHM output?

@Zvyozdo4ka

> Actually, the second point cloud aligns better, but it is still noticeably off from the expected canonicalization.

In your work, did you align the point cloud and FLAME manually, or did you use an alignment algorithm?

@mwb3262716541

> > Actually, the second point cloud aligns better, but it is still noticeably off from the expected canonicalization.
>
> In your work, did you align the point cloud and FLAME manually, or did you use an alignment algorithm?

Hi, I have seen many of your questions and answers under this project. I am now facing the same problem of converting a point cloud scan into the inference input. Have you solved it?

@Zvyozdo4ka

> Hi, I have seen many of your questions and answers under this project. I am now facing the same problem of converting a point cloud scan into the inference input. Have you solved it?

I did not manage to resolve it, and as you can see, they never responded to me.
I tried manual alignment of photogrammetry and PointAvatar-generated point clouds, but the outputs contain a lot of artifacts and cannot be used. Out of 20-30 manual alignment attempts, only one occasionally looked acceptable.
They also do not mention it directly in the paper, but they used point clouds reconstructed with a Kinect device, which I do not have, so I gave up.

You may consider trying the MonoNPHM project from the same author, but unfortunately my outputs did not resemble the identity either.

@mwb3262716541

> I did not manage to resolve it, and as you can see, they never responded to me. I tried manual alignment of photogrammetry and PointAvatar-generated point clouds, but the outputs contain a lot of artifacts and cannot be used. Out of 20-30 manual alignment attempts, only one occasionally looked acceptable. They also do not mention it directly in the paper, but they used point clouds reconstructed with a Kinect device, which I do not have, so I gave up.
>
> You may consider trying the MonoNPHM project from the same author, but unfortunately my outputs did not resemble the identity either.

Thanks for your reply. It is a little difficult. I recently noticed the DPHM project; they provide code to preprocess Kinect depth maps and convert them into the inference input. I am trying to run it but have not succeeded yet.
One more question: how do you manually align the point cloud and FLAME?

@Zvyozdo4ka

> One more question: how do you manually align the point cloud and FLAME?

Just manually in CloudCompare: rotate and scale it. You can also try Blender.
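One option, purely as a suggestion and not part of the NPHM pipeline, is to refine a rough manual transform from CloudCompare or Blender with point-to-point ICP, e.g. in Open3D; the file names and parameters below are assumptions:

```python
import numpy as np
import open3d as o3d

# Hypothetical file names; the rough 4x4 transform would come from the
# manual alignment exported out of CloudCompare / Blender.
source = o3d.io.read_point_cloud("iphone_scan.ply")
target = o3d.io.read_point_cloud("flame_fit_scaled_by_4.ply")
init = np.loadtxt("manual_transform.txt").reshape(4, 4)

# Point-to-point ICP with scale estimation enabled, since NPHM expects the
# FLAME coordinate system scaled by a factor of 4.
# The correspondence distance (0.05) is a guess and needs tuning.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(with_scaling=True))

source.transform(result.transformation)
o3d.io.write_point_cloud("iphone_scan_aligned.ply", source)
```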
