Releases · KevinMusgrave/pytorch-metric-learning
v2.8.1
v2.8.0
v2.7.0
v2.6.0
Improvement + small breaking change to DistributedLossWrapper
- Changed the `emb` argument of `DistributedLossWrapper.forward` to `embeddings`, to be consistent with the rest of the library.
- Added a warning and early-return when `DistributedLossWrapper` is used in a non-distributed setting.
- Thank you @elisim!
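A minimal sketch of the renamed keyword, assuming the usual pattern of wrapping a regular loss with `DistributedLossWrapper` from `pytorch_metric_learning.utils.distributed`; the tensors and the choice of `ContrastiveLoss` are illustrative, and keyword names other than `embeddings` are assumed here:

```python
import torch
from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

# Wrap an ordinary loss so that, under torch.distributed, embeddings and labels
# are gathered across processes before the loss is computed.
loss_func = pml_dist.DistributedLossWrapper(losses.ContrastiveLoss())

embeddings = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))

# As of v2.6.0 the keyword is `embeddings` (previously `emb`):
loss = loss_func(embeddings=embeddings, labels=labels)
# In a non-distributed setting, the wrapper now warns and returns early (per this release).
```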
v2.5.0
Improvements
Thanks @mkmenta!
v2.4.1
This is identical to v2.4.0, but includes the LICENSE file which was missing from v2.4.0.
v2.4.0
Features
- Added DynamicSoftMarginLoss. See PR #659. Thanks @domenicoMuscill0!
- Added RankedListLoss. See PR #659. Thanks @domenicoMuscill0!
Bug fixes
- Fixed issue where PNPLoss would return NaN when a batch sample had no corresponding positive. See PR #660. Thanks @Puzer and @interestingzhuo!
Tests
- Fixed the test for HistogramLoss to work with PyTorch 2.1. Thanks @GaetanLepage!
v2.3.0
Features
- Added HistogramLoss. See pull request #651. Thanks @domenicoMuscill0!
v2.2.0
Features
- Added ManifoldLoss. See pull request #635. Thanks @domenicoMuscill0!
- Added P2SGradLoss. See pull request #635. Thanks @domenicoMuscill0!
- Added the `symmetric` flag to SelfSupervisedLoss. If `True`, then the embeddings in both `embeddings` and `ref_emb` are used as anchors. If `False`, then only the embeddings in `embeddings` are used as anchors. The previous behavior was equivalent to `symmetric=False`. The default is now `symmetric=True`, because this is what is usually done in self-supervised papers (e.g. SimCLR).
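A minimal sketch of the two settings, assuming the documented `SelfSupervisedLoss(loss, symmetric=...)` wrapper signature and random tensors in place of real augmented views:

```python
import torch
from pytorch_metric_learning import losses

# Two "views" of the same batch (in practice, differently augmented inputs
# passed through the same encoder); random tensors here for illustration.
embeddings = torch.randn(32, 128)
ref_emb = torch.randn(32, 128)

# symmetric=True (the new default): embeddings from both views act as anchors.
loss_fn = losses.SelfSupervisedLoss(losses.TripletMarginLoss(), symmetric=True)
loss = loss_fn(embeddings, ref_emb)

# symmetric=False reproduces the previous behavior: only `embeddings` act as anchors.
loss_fn_prev = losses.SelfSupervisedLoss(losses.TripletMarginLoss(), symmetric=False)
loss_prev = loss_fn_prev(embeddings, ref_emb)
```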