Following the instructions to train a character from scratch does not work well #18
Hi, to check whether something is wrong with NeRF, could you refer to …
I remember that when I was training the head in AD-NeRF, there was a head rendering result. I didn't render the corresponding avatar here.
Hi, it seems your Head NeRF is not trained well. If you use TensorBoard to visualize the training curves, you may find that the loss of NeRF is stuck at a high value (such as 0.05). I also encountered this problem when reproducing the results of AD-NeRF. Following this issue: it is actually a problem of network initialization. They found that in a few cases the torso model could not produce proper output due to bad initialization, and in rare cases (at a lower rate than TorsoNeRF) HeadNeRF suffers from bad initialization too. One solution is to run the command multiple times until you get a good initialization (in my experience, the PSNR is greater than 21 within 1000 iterations for a good initialization). Another solution is to use a pretrained Head/Torso model (e.g. a torso model trained on another person). Based on my experience, I can get a good initialization within no more than 5 attempts.
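To see why a loss stuck around 0.05 signals a bad initialization, it helps to convert the MSE loss into PSNR, the metric quoted above. A minimal sketch (assuming images are normalized to [0, 1], so PSNR = 10·log10(1/MSE)):

```python
import math

def psnr_from_mse(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio for images normalized to [0, max_val]."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# A loss stuck around 0.05 corresponds to roughly 13 dB,
# well below the ~21 dB rule of thumb for a good initialization.
print(psnr_from_mse(0.05))  # ≈ 13.01
```

So watching the PSNR at iteration 1000 is equivalent to checking whether the MSE has dropped below about 0.008 (10^(-21/10)).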
OK, let me try.
Yes, in your provided samples, it seems that the Head NeRF renders nothing, so the Torso NeRF learns to render both the head and the torso due to the MSE loss with com_imgs. However, the Torso NeRF cannot model the facial part (it doesn't have 3D landmarks as input), so it tends to render a "mean" face.
I see. So GeneFace differs from the previous method in that the generated head is just a condition fed to the Torso NeRF, and the Torso NeRF generates the whole picture, right? In this case, the head condition is empty (without the previously generated landmark condition), so the Torso NeRF learns to express the body and head together and produces the average face.
The head image generated by the Head NeRF is used to generate the final results. During the training of the Torso NeRF, when the Head NeRF has been trained successfully, the MSE loss at the head part has already converged, so only the torso part is learned by the Torso NeRF. In your case, as the Head NeRF renders nothing, the Torso NeRF also needs to predict the head part. However, since the input space of the Torso NeRF doesn't include 3D landmarks, it cannot model the variance in facial motion. As a result, the head part predicted by the Torso NeRF is blurry. In summary, my suggestion is to run the …
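The retry-until-good-initialization workflow described above can be sketched as a loop that re-seeds, trains for a 1000-iteration probe, and checks the PSNR threshold. This is a hedged sketch, not the repository's actual script: `train_steps` and `eval_psnr` are hypothetical callbacks standing in for the real Head NeRF training command and validation step.

```python
import random

PSNR_THRESHOLD = 21.0  # maintainer's rule of thumb for a good initialization
PROBE_ITERS = 1000     # check after this many iterations
MAX_ATTEMPTS = 5       # "no more than 5 attempts" in the maintainer's experience

def train_until_good_init(train_steps, eval_psnr, max_attempts=MAX_ATTEMPTS):
    """Restart training with a fresh random seed until the probe PSNR
    clears the threshold. Returns (seed, attempt) on success."""
    for attempt in range(1, max_attempts + 1):
        seed = random.randrange(2**31)   # fresh network initialization
        train_steps(seed, PROBE_ITERS)   # run a short training probe
        psnr = eval_psnr()               # PSNR on rendered frames
        if psnr > PSNR_THRESHOLD:
            return seed, attempt
    raise RuntimeError(
        "no good initialization found; consider a pretrained Head/Torso model"
    )
```

The alternative path mentioned above, warm-starting from a pretrained Head/Torso checkpoint, skips this loop entirely because the initialization problem only affects training from scratch.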
Hi, we use vanilla NeRF in the current GeneFace version, whose performance is widely known to rely heavily on a good initialization. Swapping the NeRF-based renderer for more advanced NeRF techniques may help. Otherwise, we have to try several times until a good initialization is rolled. I'd like to make a forecast: our recent work named …
Hello, I trained a character from scratch according to the readme, but the generated result is not good; in particular, the face is blurred and the facial features are not clear. Any idea what I did wrong?