# v0.9.87
v0.9.87 comes with some major changes that may cause your existing code to break.
## BREAKING CHANGES
### Losses
- The `avg_non_zero_only` init argument has been removed from `ContrastiveLoss`, `TripletMarginLoss`, and `SignalToNoiseRatioContrastiveLoss`. Here's how to translate from old to new code (a combined before/after sketch follows this list):
  - `avg_non_zero_only=True`: Just remove this input parameter. Nothing else needs to be done, as this is the default behavior.
  - `avg_non_zero_only=False`: Remove this input parameter and replace it with `reducer=reducers.MeanReducer()`. You'll need to add this to your imports: `from pytorch_metric_learning import reducers`
- `learnable_param_names` and `num_class_per_param` have been removed from `BaseMetricLossFunction` due to lack of use. `MarginLoss` is the only built-in loss function affected by this. Here's how to translate from old to new code:
  - `learnable_param_names=["beta"]`: Remove this input parameter and instead pass in `learn_beta=True`.
  - `num_class_per_param=N`: Remove this input parameter and instead pass in `num_classes=N`.
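To make both migrations concrete, here is a minimal before/after sketch. The old keyword arguments are shown commented out, and the specific margin, nu, beta, and class-count values are arbitrary placeholders:

```python
from pytorch_metric_learning import losses, reducers

# Old: losses.ContrastiveLoss(avg_non_zero_only=False)
# New: average over all pairs explicitly with a MeanReducer
contrastive_loss = losses.ContrastiveLoss(reducer=reducers.MeanReducer())

# Old: losses.MarginLoss(margin=0.2, nu=0, beta=1.2,
#                        learnable_param_names=["beta"], num_class_per_param=100)
# New: use the dedicated learn_beta and num_classes arguments
margin_loss = losses.MarginLoss(margin=0.2, nu=0, beta=1.2, learn_beta=True, num_classes=100)
```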
### AccuracyCalculator
- The `average_per_class` init argument is now `avg_of_avgs`. The new name better reflects the functionality.
- The old way to import was `from pytorch_metric_learning.utils import AccuracyCalculator`. This will no longer work. The new way is `from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator`. The reason for this change is to avoid an unnecessary import of the Faiss library, especially when this library is used in other packages.
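In code, the new import path and the renamed argument look like this (the `avg_of_avgs` value is just an example):

```python
# New import path; `from pytorch_metric_learning.utils import AccuracyCalculator` no longer works
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

# avg_of_avgs replaces the old average_per_class init argument
calculator = AccuracyCalculator(avg_of_avgs=True)
```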
## New feature: Reducers
Reducers specify how to go from many loss values to a single loss value. For example, the ContrastiveLoss computes a loss for every positive and negative pair in a batch. A reducer will take all these per-pair losses, and reduce them to a single value. Here's where reducers fit in this library's flow of filters and computations:
```
Your Data --> Sampler --> Miner --> Loss --> Reducer --> Final loss value
```
Reducers are passed into loss functions like this:
```python
from pytorch_metric_learning import losses, reducers

reducer = reducers.SomeReducer()
loss_func = losses.SomeLoss(reducer=reducer)
loss = loss_func(embeddings, labels)  # in your training for-loop
```
Internally, the loss function creates a dictionary that contains the losses and other information. The reducer takes this dictionary, performs the reduction, and returns a single value on which `.backward()` can be called. Most reducers are written such that they can be passed into any loss function.
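As a concrete, runnable instance of the pattern above (a minimal sketch: `AvgNonZeroReducer` reproduces the default average-over-non-zero behavior mentioned in the breaking changes, and the loss, margin, and data shapes are arbitrary choices):

```python
import torch
from pytorch_metric_learning import losses, reducers

# AvgNonZeroReducer averages only the non-zero per-element losses
reducer = reducers.AvgNonZeroReducer()
loss_func = losses.TripletMarginLoss(margin=0.1, reducer=reducer)

embeddings = torch.randn(32, 128, requires_grad=True)  # batch of 32 embeddings
labels = torch.randint(0, 5, (32,))                    # 5 arbitrary classes
loss = loss_func(embeddings, labels)  # reducer collapses per-triplet losses to one value
loss.backward()
```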
See the documentation for details.
## Other updates

### Utils

#### Inference
`InferenceModel` has been added to the library. It is a model wrapper that makes it convenient to find matching pairs within a batch, or from a set of pairs. Take a look at this notebook to see example usage.
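A rough usage sketch, assuming `InferenceModel` wraps a trunk network and exposes a `get_matches` method as demonstrated in the example notebook; the trunk model and input shapes below are placeholders:

```python
import torch
from torchvision import models
from pytorch_metric_learning.utils.inference import InferenceModel

# Placeholder trunk: any network that maps inputs to embeddings
trunk = models.resnet18(pretrained=True)
trunk.fc = torch.nn.Identity()  # use the pooled features as the embedding

inference_model = InferenceModel(trunk)

batch = torch.randn(8, 3, 224, 224)  # placeholder batch of images
# Find matching pairs within the batch (usage assumed from the notebook)
matches = inference_model.get_matches(batch)
```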
#### AccuracyCalculator
- The `k` value for k-nearest neighbors can optionally be specified as an init argument.
- k-nn based metrics now receive knn distances in their kwargs. See #118 by @marijnl
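For example (a minimal sketch; the value of `k` is arbitrary):

```python
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

# Optionally fix the number of nearest neighbors used by k-nn based metrics
calculator = AccuracyCalculator(k=10)
```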
### Other stuff
Unit tests were added for almost all losses, miners, regularizers, and reducers.
## Bug fixes

### Trainers

### Loss and miner utils
- Fixed a bug where `convert_to_triplets` could encounter a RuntimeError. See #95