
role of 'param loss' in pre-train stage #12

Open
yjsong-alchera opened this issue Apr 19, 2022 · 2 comments

@yjsong-alchera

Hello. I love your great work.

I have a question.

When I tried to pre-train the 3DMM estimator, I found 'param_loss' (line 305 in model.py):

param_loss = 1e-3 * (torch.mean(codedict['shape'] ** 2) + 0.8 * torch.mean(codedict['exp'] ** 2))

But I couldn't understand the role of this loss term...

Could you explain more about this loss term? (How does the param loss work?)

For the 3DMM parameters (shape, expression), how can we regularize these parameters when there is no GT (ground truth)? (I understand 'ldmk_loss' because we prepare landmark GT before training.)

Please forgive my basic question.

Thank you.

@Qiulin-W
Owner

Hi,
The ldmk loss is a re-projection loss, so for a given set of landmarks on the 2D images there are countless solutions for the 3DMM parameters (shape, exp, pose). This is where the param loss comes in: it regularizes the 3DMM so that the estimated parameters and the resulting 3D faces are plausible.
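
Roughly, the two losses fit together like this (a minimal sketch, not the exact training code in the repo; `total_loss`, `pred_ldmks`, and `gt_ldmks` are just illustrative names):

```python
import torch

def total_loss(codedict, pred_ldmks, gt_ldmks):
    # Re-projection loss: projected 3D landmarks should match the 2D GT
    # landmarks. Many (shape, exp, pose) combinations can reproduce the
    # same 2D landmarks, so this loss alone is under-constrained.
    ldmk_loss = torch.mean((pred_ldmks - gt_ldmks) ** 2)

    # L2 prior on the 3DMM coefficients: in the usual 3DMM formulation the
    # coefficients weight zero-mean PCA bases, so penalizing their squared
    # magnitude keeps the reconstruction close to the statistical mean face
    # and rules out implausible solutions that still fit the landmarks.
    param_loss = 1e-3 * (torch.mean(codedict['shape'] ** 2)
                         + 0.8 * torch.mean(codedict['exp'] ** 2))

    return ldmk_loss + param_loss
```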

@yjsong-alchera
Author

Thanks for the quick reply :D

Then the mean of the squared 'shape' parameters and the mean of the squared 'exp' parameters should be as small as possible, right?

That is, the role of 'param_loss' is to push the 'shape' and 'exp' parameters close to 0?
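
For example (a quick toy check of the formula, not code from the repo):

```python
import torch

# The penalty is the mean of the *squared* coefficients, so it is
# minimized only when every coefficient is 0 (the 3DMM mean face),
# not merely when the coefficients average out to 0.
shape = torch.tensor([3.0, -3.0])        # mean is 0, but...
print(torch.mean(shape ** 2))            # tensor(9.) -> still penalized
print(torch.mean(torch.zeros(2) ** 2))   # tensor(0.) -> the minimum
```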
