
Commit ba90682

ZJUGuoShuai authored and tianyizheng02 committed
Add KL divergence loss algorithm (TheAlgorithms#11238)
* Add KL divergence loss algorithm
* Apply suggestions from code review

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
1 parent 15d860a commit ba90682

File tree

1 file changed: +34 −0 lines changed

machine_learning/loss_functions.py

@@ -629,6 +629,40 @@ def smooth_l1_loss(y_true: np.ndarray, y_pred: np.ndarray, beta: float = 1.0) ->
     return np.mean(loss)
 
 
+def kullback_leibler_divergence(y_true: np.ndarray, y_pred: np.ndarray) -> float:
+    """
+    Calculate the Kullback-Leibler divergence (KL divergence) loss between true labels
+    and predicted probabilities.
+
+    KL divergence loss quantifies dissimilarity between true labels and predicted
+    probabilities. It's often used in training generative models.
+
+    KL = Σ(y_true * ln(y_true / y_pred))
+
+    Reference: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
+
+    Parameters:
+    - y_true: True class probabilities
+    - y_pred: Predicted class probabilities
+
+    >>> true_labels = np.array([0.2, 0.3, 0.5])
+    >>> predicted_probs = np.array([0.3, 0.3, 0.4])
+    >>> kullback_leibler_divergence(true_labels, predicted_probs)
+    0.030478754035472025
+    >>> true_labels = np.array([0.2, 0.3, 0.5])
+    >>> predicted_probs = np.array([0.3, 0.3, 0.4, 0.5])
+    >>> kullback_leibler_divergence(true_labels, predicted_probs)
+    Traceback (most recent call last):
+        ...
+    ValueError: Input arrays must have the same length.
+    """
+    if len(y_true) != len(y_pred):
+        raise ValueError("Input arrays must have the same length.")
+
+    kl_loss = y_true * np.log(y_true / y_pred)
+    return np.sum(kl_loss)
+
+
 if __name__ == "__main__":
     import doctest
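As a quick sanity check of the added function, the first doctest value can be reproduced by evaluating KL = Σ(y_true · ln(y_true / y_pred)) directly with NumPy. This is a minimal sketch, not part of the commit, and the import path is an assumption; adjust it to wherever loss_functions.py lives locally.

    import numpy as np

    # Hypothetical import path; adjust to your local checkout of loss_functions.py.
    from machine_learning.loss_functions import kullback_leibler_divergence

    p = np.array([0.2, 0.3, 0.5])  # true distribution
    q = np.array([0.3, 0.3, 0.4])  # predicted distribution

    # Direct elementwise evaluation of sum(p * ln(p / q)) for comparison.
    expected = float(np.sum(p * np.log(p / q)))
    result = kullback_leibler_divergence(p, q)

    print(result)  # ≈ 0.030478754035472025, matching the doctest
    assert np.isclose(result, expected)

Term by term, the sum is 0.2·ln(0.2/0.3) ≈ −0.0811, plus 0.3·ln(1) = 0, plus 0.5·ln(0.5/0.4) ≈ 0.1116, which gives the ≈ 0.0305 shown in the doctest.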
