Hello, I'm interested in the training configuration, or in seeing the training code.
How many epochs did you train?
What batch size did you use?
Was the Adam learning rate fixed at 0.0001 or did you use a learning rate schedule?
Did you use any augmentation during training?
Did you weight the different losses equally (race, gender, age)?
Did you weight the loss in different classes to account for class imbalance?
Did you train the final dense layers with dropout, or with the default torchvision resnet34 architecture?
Did you unfreeze/make trainable all the resnet34 layers from the beginning, or only train the dense layer first? Did you unfreeze the resnet in blocks or all at once?
The list above is comprehensive and covers exactly the details needed to reproduce the training.
Regarding the published labels, I would add the previously raised question about the meaning of the "service_test" field.
The information about training in the paper is simply not enough to reconstruct how the authors trained their models.
Interestingly, the authors appear to be selective in their responses: they have not answered this question (or similar ones raised in the same context), even though they have been active since it was posted and have responded to many other issues.
This lack of detail undermines the value of the paper and raises questions about how the fairness of the dataset was assessed (and whether that fairness has been validated).
It would be kind of the authors to address these points.