question about penalty #47
Comments
The intent here is to add entropy minimization as a penalty term, right? But as currently written, the larger `pred_mean` gets, the smaller the loss becomes. Or is there some other consideration?
To give an extreme example: suppose there are 3 classes with balanced counts, and each batch contains three samples.
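A quick numerical sketch of that extreme case (my own illustration, assuming `num_class = 3` and a batch of three samples; not code from this repository), showing how the existing penalty evaluates when the batch-averaged prediction collapses to one class versus when it spreads evenly over the classes:

```python
import torch

prior = torch.ones(3) / 3  # uniform prior over the 3 classes

# Batch of 3 samples where the network predicts every sample as class 0.
logits_collapsed = torch.tensor([[10., 0., 0.],
                                 [10., 0., 0.],
                                 [10., 0., 0.]])
pred_mean = torch.softmax(logits_collapsed, dim=1).mean(0)   # ~[1, 0, 0]
print(torch.sum(prior * torch.log(prior / pred_mean)))       # large positive penalty

# Batch of 3 samples spread evenly over the 3 classes.
logits_balanced = torch.tensor([[10., 0., 0.],
                                [0., 10., 0.],
                                [0., 0., 10.]])
pred_mean = torch.softmax(logits_balanced, dim=1).mean(0)    # ~[1/3, 1/3, 1/3]
print(torch.sum(prior * torch.log(prior / pred_mean)))       # ~0, no penalty
```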
Hi, bro,
Hi @YuanShunJie1, I am new to studying noisy-label learning. Warm-up is used in the early training stage because the network tends to fit the clean samples first (these samples have small loss values), so warming up first lets the Co-divide step distinguish clean labels from noisy ones. That is my understanding.
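For context, here is a minimal sketch of the small-loss idea behind Co-divide as I understand it (my own illustration, not this repository's code; the `codivide_probs` helper and its arguments are hypothetical): per-sample cross-entropy losses after warm-up are fit with a two-component GMM, and the component with the smaller mean loss is treated as the probably-clean set.

```python
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def codivide_probs(model, loader, device="cuda"):
    """Return each training sample's probability of belonging to the low-loss (clean) component."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            loss = F.cross_entropy(logits, y.to(device), reduction="none")
            losses.append(loss.cpu())
    losses = torch.cat(losses).numpy().reshape(-1, 1)
    # Normalize losses to [0, 1] for a more stable GMM fit.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=10, tol=1e-2, reg_covar=5e-4)
    gmm.fit(losses)
    probs = gmm.predict_proba(losses)        # shape: (N, 2)
    clean_component = gmm.means_.argmin()    # component with the smaller mean loss
    return probs[:, clean_component]         # per-sample probability of being clean
```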
```python
prior = torch.ones(args.num_class) / args.num_class        # uniform prior over the classes
prior = prior.cuda()
pred_mean = torch.softmax(logits, dim=1).mean(0)            # batch-averaged predicted distribution
penalty = torch.sum(prior * torch.log(prior / pred_mean))   # KL(prior || pred_mean)
```
Since entropy is `-p*log(p)`, why not use `penalty = torch.sum(pred_mean * torch.log(prior / pred_mean))` instead?
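For reference, a small comparison sketch (my own, with made-up `pred_mean` values): the existing term is KL(prior ‖ pred_mean), which grows as the batch-averaged prediction collapses to one class, while the proposed term equals −KL(pred_mean ‖ prior), which decreases in that same situation.

```python
import torch

prior = torch.ones(3) / 3

for name, pred_mean in [("collapsed", torch.tensor([0.98, 0.01, 0.01])),
                        ("balanced",  torch.tensor([1/3, 1/3, 1/3]))]:
    existing = torch.sum(prior * torch.log(prior / pred_mean))      # KL(prior || pred_mean)
    proposed = torch.sum(pred_mean * torch.log(prior / pred_mean))  # -KL(pred_mean || prior)
    print(f"{name}: existing={existing.item():.4f}, proposed={proposed.item():.4f}")
```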