Low validation accuracy 71% for race estimation #7
I can confirm I also get very similar results. To rule out bugs on my side, I ran predict.py on the validation data set and used Excel to compare the results; I got 70.8%. Attaching the Excel file:
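For a scripted version of that comparison, here is a minimal sketch, assuming predict.py writes its predictions to a CSV with `file` and `race` columns and that the ground truth lives in `fairface_label_val.csv` with matching columns; the file and column names are assumptions, not verified against the repo:

```python
import pandas as pd

# Hypothetical file names -- adjust to match your predict.py output and label CSV.
preds = pd.read_csv("val_predictions.csv")       # assumed columns: file, race
labels = pd.read_csv("fairface_label_val.csv")   # assumed columns: file, race, ...

# Join predictions to ground truth on the image file name.
merged = labels.merge(preds, on="file", suffixes=("_true", "_pred"))

# Overall and per-category accuracy for the race label.
merged["correct"] = merged["race_true"] == merged["race_pred"]
print("Overall race accuracy: {:.2%}".format(merged["correct"].mean()))
print(merged.groupby("race_true")["correct"].mean().sort_values())
```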
I also have similar results.
Same result here.
The result in Table 6 of the arXiv version (Table 8 in the WACV paper) was measured on the "external validation datasets". The paper explains in detail how they were collected and evaluated. We are not able to release these datasets because they are not under a CC license. The pre-trained model is the one used in our experiments in the paper. Thanks.
Also, some experiments (race classification) were based on 4 or 5 race categories (not 7), because the other datasets we compared against (e.g. UTK, LFWA) don't have 7.
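To make such a comparison concrete, here is a minimal sketch of collapsing the 7 FairFace race labels onto a UTKFace-style 5-way scheme (White/Black/Asian/Indian/Others); the exact grouping shown is an illustrative assumption, not necessarily the mapping used in the paper:

```python
# Illustrative 7 -> 5 grouping onto UTKFace-style classes.
# This exact assignment is an assumption, not the paper's official mapping.
FAIRFACE_TO_UTK = {
    "White": "White",
    "Black": "Black",
    "East Asian": "Asian",
    "Southeast Asian": "Asian",
    "Indian": "Indian",
    "Latino_Hispanic": "Others",
    "Middle Eastern": "Others",
}

def collapse_race(label: str) -> str:
    """Map a FairFace 7-category race label onto the coarser 5-way scheme."""
    return FAIRFACE_TO_UTK[label]
```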
What I get is age@1: 60.52, gender@1: 94.36, race@1: 72.04.
When I use the pretrained model to predict race on the validation set, I get about 71% accuracy.
This is very different from the accuracy reported in the paper: on the held-out datasets you report an 81% average in Table 6.
This 10% difference makes me think I'm doing something wrong, or that the held-out datasets are not comparable to the validation dataset.
Here is my code:
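(The snippet itself is not reproduced in this copy of the thread. As a stand-in, here is a minimal evaluation sketch, assuming the repo's 7-race ResNet-34 checkpoint, standard ImageNet preprocessing, and a validation label CSV with `file` and `race` columns; the checkpoint name, output layout, and preprocessing are assumptions and may not match the original code.)

```python
import pandas as pd
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Assumed checkpoint / label files -- adjust to your local paths.
CKPT = "res34_fair_align_multi_7_20190809.pt"   # assumed: 18 outputs (7 race, 2 gender, 9 age)
VAL_CSV = "fairface_label_val.csv"              # assumed columns: file, race
RACES = ["White", "Black", "Latino_Hispanic", "East Asian",
         "Southeast Asian", "Indian", "Middle Eastern"]   # assumed output order

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet34()           # no ImageNet weights needed; we load the checkpoint
model.fc = torch.nn.Linear(model.fc.in_features, 18)
model.load_state_dict(torch.load(CKPT, map_location=device))
model.to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

df = pd.read_csv(VAL_CSV)
correct = 0
with torch.no_grad():
    for _, row in df.iterrows():
        img = preprocess(Image.open(row["file"]).convert("RGB")).unsqueeze(0).to(device)
        logits = model(img)[0]
        pred = RACES[logits[:7].argmax().item()]   # first 7 logits assumed to be race
        correct += int(pred == row["race"])
print("Race accuracy: {:.2%}".format(correct / len(df)))
```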