
Problems regarding loss function and auroc #39

Open
WHan-alter opened this issue Nov 27, 2022 · 0 comments

Dear Authors,

Thank you for sharing the work and code! I have a question regarding the AUC-ROC and loss computation. In

dMaSIF/data_iteration.py

Lines 202 to 215 in 0dcc26c

pos_indices = torch.randperm(len(pos_labels))[:n_points_sample]
neg_indices = torch.randperm(len(neg_labels))[:n_points_sample]
pos_preds = pos_preds[pos_indices]
pos_labels = pos_labels[pos_indices]
neg_preds = neg_preds[neg_indices]
neg_labels = neg_labels[neg_indices]
preds_concat = torch.cat([pos_preds, neg_preds])
labels_concat = torch.cat([pos_labels, neg_labels])
loss = F.binary_cross_entropy_with_logits(preds_concat, labels_concat)
return loss, preds_concat, labels_concat

you seem to sample a fixed number of positive and negative points at random to compute the loss and AUC-ROC. For the loss this is fine, but regarding the AUC-ROC: did you use the same subsampling pipeline on the test dataset when measuring model performance? If so, what is the performance when evaluated on the entire surface?
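To illustrate my concern, here is a small self-contained sketch (plain Python with synthetic scores, not the actual dMaSIF outputs; `n_points_sample` is borrowed from the snippet above): the AUC-ROC computed on a random subsample of points per class is only an estimate of the AUC-ROC over all surface points.

```python
import random

def roc_auc(preds, labels):
    # Rank-based AUC: probability that a random positive scores
    # above a random negative (ties count as 0.5).
    pos = [p for p, l in zip(preds, labels) if l == 1]
    neg = [p for p, l in zip(preds, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
# Synthetic prediction scores: positives shifted upward (illustrative only).
pos_preds = [random.gauss(1.0, 1.0) for _ in range(1000)]
neg_preds = [random.gauss(0.0, 1.0) for _ in range(1000)]

# AUC over the "entire surface" (all synthetic points).
full_auc = roc_auc(pos_preds + neg_preds, [1] * 1000 + [0] * 1000)

# Mimic the snippet above: keep only n_points_sample points per class.
n_points_sample = 100
pos_sub = random.sample(pos_preds, n_points_sample)
neg_sub = random.sample(neg_preds, n_points_sample)
sub_auc = roc_auc(pos_sub + neg_sub,
                  [1] * n_points_sample + [0] * n_points_sample)

print(full_auc, sub_auc)
```

The subsampled estimate fluctuates around the full-surface value, and the forced 1:1 positive/negative ratio differs from the class balance on a real surface, which is why I am asking how the reported test numbers were computed.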

Best,
Wenkai
