Commit abd6bca

ank426, pre-commit-ci[bot], and tianyizheng02 authored
Added Binary Focal Cross Entropy (#10674)
* Added Binary Focal Cross Entropy
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Fixed Issue
* Fixed Issue
* Added BFCE loss to loss_functions.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update machine_learning/loss_functions.py

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
1 parent fdb0635 commit abd6bca

File tree

1 file changed: +51 −0 lines changed


machine_learning/loss_functions.py (+51)
@@ -39,6 +39,57 @@ def binary_cross_entropy(
     return np.mean(bce_loss)
 
 
+def binary_focal_cross_entropy(
+    y_true: np.ndarray,
+    y_pred: np.ndarray,
+    gamma: float = 2.0,
+    alpha: float = 0.25,
+    epsilon: float = 1e-15,
+) -> float:
+    """
+    Calculate the mean binary focal cross-entropy (BFCE) loss between true labels
+    and predicted probabilities.
+
+    BFCE loss quantifies dissimilarity between true labels (0 or 1) and predicted
+    probabilities. It's a variation of binary cross-entropy that addresses class
+    imbalance by focusing on hard examples.
+
+    BFCE = -Σ(alpha * (1 - y_pred)**gamma * y_true * log(y_pred)
+           + (1 - alpha) * y_pred**gamma * (1 - y_true) * log(1 - y_pred))
+
+    Reference: [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf)
+
+    Parameters:
+    - y_true: True binary labels (0 or 1).
+    - y_pred: Predicted probabilities for class 1.
+    - gamma: Focusing parameter for modulating the loss (default: 2.0).
+    - alpha: Weighting factor for class 1 (default: 0.25).
+    - epsilon: Small constant to avoid numerical instability.
+
+    >>> true_labels = np.array([0, 1, 1, 0, 1])
+    >>> predicted_probs = np.array([0.2, 0.7, 0.9, 0.3, 0.8])
+    >>> binary_focal_cross_entropy(true_labels, predicted_probs)
+    0.008257977659239775
+    >>> true_labels = np.array([0, 1, 1, 0, 1])
+    >>> predicted_probs = np.array([0.3, 0.8, 0.9, 0.2])
+    >>> binary_focal_cross_entropy(true_labels, predicted_probs)
+    Traceback (most recent call last):
+    ...
+    ValueError: Input arrays must have the same length.
+    """
+    if len(y_true) != len(y_pred):
+        raise ValueError("Input arrays must have the same length.")
+    # Clip predicted probabilities to avoid log(0)
+    y_pred = np.clip(y_pred, epsilon, 1 - epsilon)
+
+    bfce_loss = -(
+        alpha * (1 - y_pred) ** gamma * y_true * np.log(y_pred)
+        + (1 - alpha) * y_pred**gamma * (1 - y_true) * np.log(1 - y_pred)
+    )
+
+    return np.mean(bfce_loss)
+
+
 def categorical_cross_entropy(
     y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-15
 ) -> float:
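As a quick sanity check of the focusing behavior described in the docstring, here is a minimal usage sketch that evaluates the new function at several gamma values on the same batch. It assumes the repository root is on sys.path so that machine_learning.loss_functions is importable; the import path and the chosen gamma values are illustrative assumptions, not part of this commit.

import numpy as np

# Assumed import path: repository root on sys.path (adjust for your checkout).
from machine_learning.loss_functions import binary_focal_cross_entropy

true_labels = np.array([0, 1, 1, 0, 1])
predicted_probs = np.array([0.2, 0.7, 0.9, 0.3, 0.8])

# gamma = 0 removes the (1 - y_pred)**gamma and y_pred**gamma modulating
# factors, reducing BFCE to plain alpha-weighted binary cross-entropy;
# larger gamma progressively down-weights easy, well-classified examples.
for gamma in (0.0, 1.0, 2.0, 5.0):
    loss = binary_focal_cross_entropy(true_labels, predicted_probs, gamma=gamma)
    print(f"gamma={gamma}: {loss:.6f}")

Because every example in this toy batch already sits on the correct side of 0.5, all of the modulating factors are below 1, so the printed mean loss shrinks monotonically as gamma grows.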

Comments (0)