Yes, we have implemented proper backward gradient propagation with respect to the exact OT objective, but keep in mind that, for both, these correspond only to sub-gradients, since the objective is not differentiable everywhere (neither is ReLU, for that matter). The documentation includes examples showing that you can optimize through our loss.
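For illustration, here is a minimal sketch of optimizing source point positions through the exact OT loss with POT's torch backend, using `ot.dist` and `ot.emd2`. The toy data, learning rate, and iteration count are assumptions for the example, not taken from this discussion:

```python
# Sketch: backpropagating through the exact OT loss (sub-gradients) with POT + PyTorch.
import torch
import ot

torch.manual_seed(0)

# Source points x are optimized; target points y are fixed (illustrative data).
x = torch.randn(20, 2, requires_grad=True)
y = torch.randn(30, 2) + 2.0

# Uniform weights on both point clouds.
a = torch.ones(20) / 20
b = torch.ones(30) / 30

optimizer = torch.optim.SGD([x], lr=0.1)

for it in range(100):
    optimizer.zero_grad()
    M = ot.dist(x, y)        # squared Euclidean cost matrix (backend-aware)
    loss = ot.emd2(a, b, M)  # exact OT loss; its "gradient" is a sub-gradient
    loss.backward()          # propagate the sub-gradient back to x through M
    optimizer.step()
```

Since the exact OT objective is only piecewise differentiable, the values returned by `backward()` are sub-gradients; in practice standard first-order optimizers handle this the same way they handle ReLU networks.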

Answer selected by rflamary
This discussion was converted from issue #776 on October 28, 2025 11:38.