
Fitting code #7

Open
amundra15 opened this issue Jan 11, 2023 · 5 comments

Comments

@amundra15

Hi,

Thanks for the amazing work. I want to try NIMBLE for my project. Do you have the optimization code for getting the NIMBLE parameters using the ground-truth mesh and texture map?

Also, can you provide the UV coordinates for your mesh? Right now I am using the MANO UV coordinates to generate my texture map, but that is not aligned with the NIMBLE-generated maps.

Best,
Akshay

@reyuwei
Owner

reyuwei commented Feb 4, 2023

Hi,
I no longer have the optimization code, but the structure should be similar to nr_reg here: just change the optimization variables to the NIMBLE parameters and use the per-vertex distance as the loss.
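A minimal sketch of that fitting loop, under loud assumptions: `nimble_vertices` below is a hypothetical linear stand-in for the real (nonlinear) NIMBLE layer, and the optimizer is plain gradient descent rather than whatever was used originally. Only the overall shape (optimize parameters against a ground-truth mesh with a per-vertex distance loss) is meant to carry over:

```python
import numpy as np

# Hypothetical stand-in for the NIMBLE layer: vertices as a linear
# function of the parameters (template + params @ basis). The real
# layer is nonlinear, but the fitting loop has the same shape.
rng = np.random.default_rng(0)
n_verts, n_params = 50, 10
template = rng.normal(size=(n_verts, 3))
basis = rng.normal(size=(n_params, n_verts * 3))

def nimble_vertices(params):
    return template + (params @ basis).reshape(n_verts, 3)

# Ground-truth mesh generated from known parameters, for the demo.
gt_params = rng.normal(size=n_params)
gt_verts = nimble_vertices(gt_params)

# Fit by gradient descent on the mean per-vertex squared distance.
params = np.zeros(n_params)
lr = 0.01
for _ in range(500):
    residual = nimble_vertices(params) - gt_verts          # (V, 3)
    grad = basis @ residual.reshape(-1) * (2.0 / n_verts)  # d(loss)/d(params)
    params -= lr * grad

per_vertex_err = np.linalg.norm(nimble_vertices(params) - gt_verts, axis=1).mean()
```

With the real layer one would instead declare the NIMBLE parameters as a `torch.nn.Parameter` and let autograd supply the gradient.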

The UV coordinates are embedded in the model file. You can refer to this function to see how I export the geometry with UV coordinates.
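For reference, geometry with per-corner UVs is usually written in OBJ's `f v/vt` syntax, where each face corner indexes a vertex and a texture coordinate separately. The sketch below is a hypothetical helper, not the function from this repo:

```python
def obj_with_uv(verts, uvs, faces, face_uvs):
    """Build OBJ text where each face corner references both a vertex
    index and a UV index (`f v/vt ...`). Inputs are 0-based; OBJ
    indices are 1-based, so 1 is added on output."""
    lines = []
    for v in verts:
        lines.append(f"v {v[0]:.6f} {v[1]:.6f} {v[2]:.6f}")
    for t in uvs:
        lines.append(f"vt {t[0]:.6f} {t[1]:.6f}")
    for f, ft in zip(faces, face_uvs):
        corners = " ".join(f"{vi + 1}/{ti + 1}" for vi, ti in zip(f, ft))
        lines.append(f"f {corners}")
    return "\n".join(lines) + "\n"

# Toy single triangle, just to show the layout.
obj_text = obj_with_uv(
    verts=[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    uvs=[[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
    faces=[[0, 1, 2]],
    face_uvs=[[0, 1, 2]],
)
```

The returned string can then be written to a `.obj` file with an ordinary `open(path, "w")`.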

Thanks!

@amundra15
Author

Hey,
As a follow-up, do you have the RGB-based fitting code? I have multi-view images and want to use one or more of them to fit NIMBLE. It would be great if you could provide the optimization code used in the paper, since re-implementing it from scratch could lead to sub-optimal performance.

@reyuwei
Owner

reyuwei commented Mar 2, 2023

The RGB-based result in our paper comes from a learning-based method (I2L-MeshNet), not from fitting. If you have one or more images, I suggest first estimating the joint positions and then regressing the parameters from those joint positions. The optimization code should be quite similar to nr-reg.
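As an illustration of the joints-to-parameters step, here is a sketch under stated assumptions: the joint model is linearized as `j0 + J @ params` (with `J` standing in for the Jacobian of the real, nonlinear NIMBLE joint regressor at the current estimate), so one Gauss-Newton step reduces to a linear least-squares solve. In practice one would iterate and re-linearize:

```python
import numpy as np

# Hypothetical linearized joint model: joints(params) ~ j0 + J @ params.
rng = np.random.default_rng(1)
n_joints, n_params = 25, 30
j0 = rng.normal(size=n_joints * 3)
J = rng.normal(size=(n_joints * 3, n_params))

# "Detected" joint positions, flattened (here synthesized from known
# parameters so the demo has a ground truth).
gt_params = rng.normal(size=n_params)
target = j0 + J @ gt_params

# One Gauss-Newton step = linear least squares on the joint residual.
params, *_ = np.linalg.lstsq(J, target - j0, rcond=None)
err = np.linalg.norm(j0 + J @ params - target)
```

A real fit would add priors (e.g. a shape regularizer) to keep the parameters plausible when joints are noisy.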

@amundra15
Author

If I understand correctly, this will only give you the geometry, right? How do you get the appearance?

@reyuwei
Owner

reyuwei commented Mar 3, 2023

You can use the photometric loss described in HTML. The process would be: set the appearance parameters as optimization variables, then for each image use a differentiable renderer (such as PyTorch3D) to compute a photometric loss. You can use a fixed lighting condition for all views.
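A toy sketch of that multi-view loop, with loud simplifications: the "appearance parameter" is reduced to a single global RGB albedo, and "rendering" is just multiplying it by a fixed per-view shading map. With a real differentiable renderer (e.g. PyTorch3D's rasterizer plus shader) the loop is structurally identical, but autograd supplies the gradient through rasterization:

```python
import numpy as np

rng = np.random.default_rng(2)
n_views, h, w = 3, 8, 8
shading = rng.uniform(0.2, 1.0, size=(n_views, h, w, 1))  # fixed lighting per view
gt_albedo = np.array([0.8, 0.5, 0.3])
images = shading * gt_albedo                              # target "photos"

albedo = np.full(3, 0.5)                                  # optimization variable
lr = 0.5
for _ in range(200):
    grad = np.zeros(3)
    for v in range(n_views):
        residual = shading[v] * albedo - images[v]        # photometric error
        grad += (shading[v] * residual).sum(axis=(0, 1)) * 2.0 / (h * w)
    albedo -= lr * grad / n_views
```

Keeping the lighting fixed across views, as suggested above, is what makes the appearance variable the only unknown in this loss.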
