
Question on AUC-PR Hinge Loss #1721

Open
elbaro opened this issue Jul 22, 2022 · 0 comments



elbaro commented Jul 22, 2022

In the implementation of "Scalable Learning of Non-Decomposable Objectives" at https://github.com/facebookresearch/pytext/blob/main/pytext/loss/loss.py,

the positive weight is 1 + lambda * (1 - precision) instead of (1 + lambda) * (1 - precision).
The old implementation in the TensorFlow repo (now gone) also used 1 + lambda * (1 - precision).

To me, the (1 + lambda) factor appears to come from the following equation in the paper (https://arxiv.org/pdf/1608.04802.pdf):
[equation image from the paper]

Dividing each side by N = |Y-| + |Y+| and then multiplying by (1 - precision), we get (1 + lambda)(1 - precision)(loss+)/N as the first term:

loss = (1+lambda)(loss+) + lambda ( precision/(1-precision) ) (loss-) - lambda (# positives)

per-sample loss = (1+lambda)(loss+)/N + lambda ( precision/(1-precision) ) (loss-)/N - lambda (# positives)/N

multiplied per-sample loss =
          (1+lambda)(1-precision)(loss+)/N + lambda * precision * (loss-)/N - lambda (1-precision) (# positives)/N
          ^^^^^^^^^^^^^^^^^^^^^^^
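
For readability, here is the same algebra written out in LaTeX (my notation, not the paper's: loss+ / loss- are the hinge losses summed over the positive / negative examples, p is the precision anchor, N = |Y+| + |Y-|):

```latex
% Same algebra as above. Notation (mine, not the paper's):
% loss^+ / loss^- = hinge losses summed over positives / negatives,
% p = precision anchor, N = |Y^+| + |Y^-|, lambda = dual variable.
\[
\begin{aligned}
L &= (1+\lambda)\,\mathrm{loss}^{+} + \lambda\,\frac{p}{1-p}\,\mathrm{loss}^{-} - \lambda\,\lvert Y^{+}\rvert \\
\frac{L}{N} &= (1+\lambda)\,\frac{\mathrm{loss}^{+}}{N} + \lambda\,\frac{p}{1-p}\,\frac{\mathrm{loss}^{-}}{N} - \lambda\,\frac{\lvert Y^{+}\rvert}{N} \\
(1-p)\,\frac{L}{N} &= (1+\lambda)(1-p)\,\frac{\mathrm{loss}^{+}}{N} + \lambda\,p\,\frac{\mathrm{loss}^{-}}{N} - \lambda\,(1-p)\,\frac{\lvert Y^{+}\rvert}{N}
\end{aligned}
\]
```

Note that the negative weight in the code below, lambdas * precision_values = lambda * p, matches the second term of the last line; it is only the positive weight that differs from (1 + lambda)(1 - p).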

Why is it 1 + lambda * (1 - precision)? The relevant call in pytext is:

        hinge_loss = loss_utils.weighted_hinge_loss(
            labels.unsqueeze(-1),
            logits.unsqueeze(-1) - self.biases,
            positive_weights=1.0 + lambdas * (1.0 - self.precision_values),
            negative_weights=lambdas * self.precision_values,
        )
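
And a tiny standalone sketch (not pytext code; the lambdas and precision values below are made-up anchors) just to show that the two weightings are genuinely different:

```python
# Minimal sketch, not from pytext: compare the positive weight the code uses
# with the one the derivation above suggests, for made-up anchor values.
import numpy as np

lambdas = np.array([0.5, 1.0, 2.0])            # hypothetical dual variables
precision_values = np.array([0.2, 0.5, 0.8])   # hypothetical precision anchors

w_code = 1.0 + lambdas * (1.0 - precision_values)     # 1 + lambda * (1 - p)   -> [1.4, 1.5, 1.4]
w_paper = (1.0 + lambdas) * (1.0 - precision_values)  # (1 + lambda) * (1 - p) -> [1.2, 1.0, 0.6]

print(w_code, w_paper)
```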