How can you reach 99.86% while the upper limit accuracy is 99.83% on LFW? #9
Hi @KaleidoZhouYN,

@ScienceFans OK, so you didn't use MTCNN for face alignment. I know that MTCNN can produce a lot of noisy aligned images. Thanks for your reply.

@KaleidoZhouYN

@ScienceFans And add BatchNorm after each conv layer?

@KaleidoZhouYN Yes.

@ScienceFans All right... I'd call that softmax plus BN.
There are a few questions I'd like to ask:

1. As far as I know, there are 10 mislabeled pairs among the 6000 LFW test pairs, so the upper limit should be 1 − 10/6000 ≈ 99.83%, yet the accuracy reported in your paper is 99.86%.

2. Can you share the training dataset and network that let the softmax loss reach such an extremely high accuracy (99.75%) on LFW? Softmax loss is not a very strongly constrained loss, and the result should be around 98+% when training on MS-Celeb-1M.

3. What's the difference between your COCO loss and the combination of A-Softmax loss and L2-constrained softmax loss?
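As a quick sanity check on the arithmetic in question 1 (taking the figure of 10 mislabeled pairs cited above as given, not independently verified):

```python
# Upper bound on LFW verification accuracy if some test pairs are mislabeled
# and therefore impossible to classify "correctly" against the ground truth.
total_pairs = 6000
mislabeled_pairs = 10  # figure cited in the question above

upper_bound = 1 - mislabeled_pairs / total_pairs
print(f"{upper_bound:.2%}")  # 99.83%
```

A reported accuracy above this bound would only be possible if the evaluation counts agreement with the (partly erroneous) ground-truth labels on some of those disputed pairs.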