In the implementation of "Scalable Learning of Non-Decomposable Objectives" at https://github.com/facebookresearch/pytext/blob/main/pytext/loss/loss.py, the positive weight is `1 + lambda * (1 - precision)` instead of `(1 + lambda) * (1 - precision)`. The old implementation in the tensorflow repo (gone now) also used `1 + lambda * (1 - precision)`.
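For context, here is a minimal sketch of how such a per-anchor weighted hinge loss applies these weights. The names, shapes, and helper function are mine for illustration, not pytext's actual API; `precision_values` stands for the fixed precision anchors and `lambdas` for the per-anchor Lagrange multipliers (learned in the real loss):

```python
import torch

def weighted_hinge_loss(labels, logits, positive_weights, negative_weights):
    """Hinge surrogate with separate weights on the positive and negative terms."""
    positive_term = labels * positive_weights * torch.clamp(1.0 - logits, min=0.0)
    negative_term = (1.0 - labels) * negative_weights * torch.clamp(1.0 + logits, min=0.0)
    return positive_term + negative_term

# Fixed precision anchors for the Riemann sum over the PR curve,
# with one Lagrange multiplier per anchor.
precision_values = torch.linspace(0.1, 0.9, steps=5)
lambdas = torch.ones(5)

labels = torch.tensor([[1.0], [0.0], [1.0]])  # (batch, 1)
logits = torch.randn(3, 1).expand(3, 5)       # same score broadcast to each anchor

# The weighting this issue asks about:
pos_weight = 1.0 + lambdas * (1.0 - precision_values)      # as implemented
# pos_weight = (1.0 + lambdas) * (1.0 - precision_values)  # as read from the paper
neg_weight = lambdas * precision_values

per_anchor_loss = weighted_hinge_loss(labels, logits, pos_weight, neg_weight)
print(per_anchor_loss.shape)  # torch.Size([3, 5]); the real loss averages over anchors
```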
To me, `(1 + lambda)` looks like it comes from the corresponding equation in the paper (https://arxiv.org/pdf/1608.04802.pdf). Dividing each side by `N = |Y-| + |Y+|` and multiplying by `(1 - precision)`, we get `(1 + lambda)(1 - precision)(loss+)/N` as the first term, as sketched below.
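Spelling out that manipulation (a sketch, assuming the equation in question is the recall-at-precision Lagrangian with the precision constraint written as `tp >= (precision / (1 - precision)) * fp`; the paper's exact notation may differ):

```latex
% Assumed form: L(f, lambda) = (1 + lambda) l+(f) + lambda * a/(1-a) * l-(f) + const,
% where a is the target precision. Dividing by N = |Y-| + |Y+| and
% multiplying by (1 - a):
\begin{align*}
  \frac{1-\alpha}{N}\,\mathcal{L}(f,\lambda)
    = (1+\lambda)(1-\alpha)\,\frac{\ell^{+}(f)}{N}
    + \lambda\alpha\,\frac{\ell^{-}(f)}{N} + \text{const}
\end{align*}
% Under this reading the positive weight is (1 + lambda)(1 - alpha),
% whereas the code uses 1 + lambda * (1 - alpha).
```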
Why is it `1 + lambda * (1 - precision)`?