Releases: BloodAxe/pytorch-toolbelt
PyTorch Toolbelt 0.4.3
Modules
- Added missing `sigmoid` activation support to `get_activation_block` (see the usage sketch after this list)
- Made encoders support JIT & tracing
- Better support for encoders from `timm` (they are named with the `Timm` prefix)
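For reference, a minimal usage sketch of the activation factory. The exact call signature and return value (a module class rather than an instance) are assumptions based on these notes, not a verified API:

```python
from torch import nn
from pytorch_toolbelt.modules import get_activation_block  # import path is an assumption

# The factory maps an activation name to a module class that
# encoder/decoder blocks can instantiate where needed.
Sigmoid = get_activation_block("sigmoid")
activation: nn.Module = Sigmoid()
```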
Utils
- `rgb_image_from_tensor` now clips values
TTA & Ensembling
- `Ensembler` now supports arithmetic, geometric & harmonic averaging via the `reduction` parameter (the three modes are sketched after this list)
- Brought geometric & harmonic averaging to all TTA functions as well
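A minimal sketch of what the three averaging modes compute, written against a stack of hypothetical per-model probability tensors (the `Ensembler` API itself is not reproduced here):

```python
import torch

# Three hypothetical per-model probability maps, shape [batch, classes]
preds = torch.stack([torch.rand(4, 2) for _ in range(3)])
eps = 1e-7

arithmetic = preds.mean(dim=0)
# Geometric mean: exp of the mean of logs; clamping avoids log(0)
geometric = preds.clamp_min(eps).log().mean(dim=0).exp()
# Harmonic mean: reciprocal of the mean of reciprocals
harmonic = preds.size(0) / (1.0 / preds.clamp_min(eps)).sum(dim=0)
```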
Datasets
- Added `read_binary_mask`
- Refactored `SegmentationDataset` to support strided masks for deep supervision
- Added `RandomSubsetDataset` and `RandomSubsetWithMaskDataset` to sample a dataset based on some condition (e.g. only samples of a particular class); a plain-PyTorch approximation is sketched below
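For context, a similar effect can be achieved with stock PyTorch. This is a hypothetical equivalent using `torch.utils.data.Subset`, not the toolbelt implementation:

```python
import torch
from torch.utils.data import Subset, TensorDataset

# Toy dataset: 5 samples with binary labels
data = torch.randn(5, 3)
labels = torch.tensor([0, 1, 1, 0, 1])
full_dataset = TensorDataset(data, labels)

# Keep only the samples of a particular class (label == 1)
indices = torch.nonzero(labels == 1).squeeze(1).tolist()
positives_only = Subset(full_dataset, indices)
```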
Other
- As usual, more tests, better type annotations & comments
PyTorch Toolbelt 0.4.2
Breaking Changes
- Bumped the minimal PyTorch version to 1.7.1
New features
- New dataset classes `ClassificationDataset`, `SegmentationDataset` for easy everyday use in Kaggle
- New losses: `FocalCosineLoss`, `BiTemperedLogisticLoss`, `SoftF1Loss`
- Support for new activations in `get_activation_block` (SiLU, Softplus, GELU)
- More encoders from the `timm` package: NFNets, NFRegNet, HRNet, DPN
- `RocAucMetricCallback` for Catalyst
- `MultilabelAccuracyCallback` and `AccuracyCallback` with DDP support
Bugfixes
- Fixed invalid prefix in the Catalyst registry from `tbt` to `tbt.`
PyTorch Toolbelt 0.4.1
New features
- Added Soft-F1 loss for direct optimization of the F1 score (binary case only); a minimal sketch follows this list
- Fully reworked the TTA module for inference (backward compatibility kept where possible)
- Added support of `ignore_index` to Dice & Jaccard losses
- Improved Lovasz loss to work in `fp16` mode
- Added option to override selected params in `make_n_channel_input`
- More encoders from the `timm` package
- `FPNFuse` module now works on 2D, 3D and N-D inputs
- Added Global K-Max 2D pooling block
- Added Generalized Mean pooling 2D block
- Added `softmax_over_dim_X`, `argmax_over_dim_X` shorthand functions for use in metrics to get soft/hard labels without lambda functions
- Added helper visualization functions to add a fancy header to an image and to stack images of different sizes
- Improved rendering of the confusion matrix
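A minimal sketch of the soft-F1 idea for the binary case: true/false positives are computed from probabilities instead of hard predictions, which makes the F1 score differentiable. This is the general technique, not necessarily the exact `SoftF1Loss` implementation:

```python
import torch

def soft_f1_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Differentiable 1 - F1 for binary targets in {0, 1}."""
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    soft_f1 = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    return 1.0 - soft_f1
```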
Catalyst goodies
- Encoders & losses are available in the Catalyst registry
- Added `StopIfNanCallback`
- Added `OutputDistributionCallback` to log the distribution of predictions to TensorBoard
- Added `UMAPCallback` to visualize the embedding space in TensorBoard using UMAP
Breaking Changes
- Renamed `CudaTileMerger` to `TileMerger`; `TileMerger` allows specifying the target device explicitly
- `tensor_from_rgb_image` removed in favor of `image_to_tensor` (migration sketch below)
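Migration is a one-line change; the import path below is an assumption based on where the utility lives in recent versions:

```python
import numpy as np
from pytorch_toolbelt.utils import image_to_tensor  # import path is an assumption

image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
# Before 0.4.1: tensor = tensor_from_rgb_image(image)
tensor = image_to_tensor(image)  # HWC numpy image -> CHW torch tensor
```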
Bug fixes & Improvements
- Improve numeric stability of `focal_loss_with_logits` when `reduction="sum"`
- Prevent `NaN` in FocalLoss when all elements are equal to the `ignore_index` value
- A LOT of type hints
PyTorch Toolbelt 0.4.0
New features
- Memory-efficient `Swish` and `Mish` activation functions (credits go to http://github.com/rwightman/pytorch-image-models); the general trick is sketched below
- Refactored EfficientNet encoders (no pretrained weights yet)
Fixes
- Fixed incorrect default value for `ignore_index` in `SoftCrossEntropyLoss`
Breaking changes
- All Catalyst-related utils updated to be compatible with Catalyst 20.8.2
- Removed the PIL package dependency
Improvements
- More comments, more type hints
PyTorch Toolbelt 0.3.2
New features
- Many helpful callbacks for the Catalyst library: `HyperParameterCallback` and `LossAdapter`, to name a few
- New losses for deep model supervision (helpful when the sizes of the target and output masks differ)
- Stacked Hourglass encoder
- Context Aggregation Network decoder
Breaking Changes
- The `ABN` module now resolves to `nn.Sequential(BatchNorm2d, Activation)` instead of a hand-crafted module. This enables easier conversion of batch normalization modules to `nn.SyncBatchNorm` (see the example after this list).
- Almost every Encoder/Decoder implementation has been refactored for better clarity and flexibility. Please double-check your pipelines.
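Because `ABN` now resolves to a standard `BatchNorm2d` followed by an activation, PyTorch's stock converter can pick the normalization layers up directly. For example:

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),    # what an ABN block now resolves to...
    nn.ReLU(inplace=True),  # ...followed by its activation
)
# Replaces every BatchNorm*d with SyncBatchNorm
# (a DDP process group is required at training time)
sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
```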
Important bugfixes
- Improved numerical stability of Dice / Jaccard losses (using `log_sigmoid() + exp()` instead of plain `sigmoid()`); a short demonstration follows
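A short demonstration of why the log-domain form is safer: for large negative logits, `sigmoid` underflows to zero and a subsequent `log` blows up, while `logsigmoid` stays exact:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-200.0])
print(torch.log(torch.sigmoid(x)))  # tensor([-inf]): sigmoid underflowed to 0
print(F.logsigmoid(x))              # tensor([-200.]): stable in the log domain
```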
Other
- Lots of comments for functions and modules
- Code cleanup, thanks to DeepSource
- Type annotations for modules and functions
- Update of README
PyTorch Toolbelt 0.3.1
Fixes
- Fixed a bug in IoU metric computation in the `binary_dice_iou_score` function
- Fixed incorrect default value in `SoftCrossEntropyLoss` #38
Improvements
- Function `draw_binary_segmentation_predictions` now has an `image_format` parameter (`rgb`|`bgr`|`gray`) to specify the image format, so images are visualized correctly in TensorBoard
- More type annotations across the codebase
New features
- New visualization function `draw_multilabel_segmentation_predictions`
PyTorch Toolbelt 0.3.0
This release has a huge set of new features, bugfixes and breaking changes, so be careful when upgrading.
`pip install pytorch-toolbelt==0.3.0`
New features
Encoders
- HRNetV2
- DenseNets
- EfficientNet
- The `Encoder` class has a `change_input_channels` method to change the number of channels in the input image
New losses
- `BCELoss` with support of `ignore_index`
- `SoftBCELoss` (label smoothing loss for the binary case with support of `ignore_index`)
- `SoftCrossEntropyLoss` (label smoothing loss for the multiclass case with support of `ignore_index`); the label-smoothing idea is sketched after this list
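A minimal sketch of the label-smoothing idea behind these losses (multiclass case; `ignore_index` handling omitted, and this is not the library's exact implementation):

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits: torch.Tensor, target: torch.Tensor,
                           smooth_factor: float = 0.1) -> torch.Tensor:
    """Cross entropy against targets smoothed away from one-hot."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Move `smooth_factor` of the probability mass uniformly onto the other classes
    soft_targets = torch.full_like(log_probs, smooth_factor / (num_classes - 1))
    soft_targets.scatter_(-1, target.unsqueeze(-1), 1.0 - smooth_factor)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

loss = smoothed_cross_entropy(torch.randn(4, 5), torch.tensor([0, 2, 1, 4]))
```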
Catalyst goodies
- Online pseudolabeling callback
- Training signal annealing callback
Other
- New activation functions supported in the `ABN` block: Swish, Mish, HardSigmoid
- New decoders (Unet, FPN, DeeplabV3, PPM) to simplify creation of segmentation models
- Added `CREDITS.md` to include all the references to code/articles. The existing list is definitely not complete, so feel free to make PRs
- Object context block from OCNet
API changes
- Focal loss now supports normalized focal loss and reduced focal loss extensions
- Optimized computation of the pyramid weight matrix #34
- Default value `align_corners=False` in `F.interpolate` when doing bilinear upsampling (see the example below)
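The changed default in plain PyTorch terms:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)
# Decoders now upsample with align_corners=False, matching PyTorch's own default
up = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
```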
Bugfixes
- Fix missing call to the batch normalization block in `FPNBottleneckBN`
- Fix numerical stability for `DiceLoss` and `JaccardLoss` when `log_loss=True`
- Fix numerical stability when computing normalized focal loss
PyTorch Toolbelt 0.2.1
New features
- Added normalized focal loss
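The normalization idea, sketched for the binary case: the focal term is divided by the sum of the modulating factors so the loss scale does not collapse as `gamma` grows. This is the general technique, not necessarily the toolbelt's exact implementation:

```python
import torch
import torch.nn.functional as F

def normalized_focal_loss_with_logits(logits, targets, gamma=2.0, eps=1e-6):
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)  # probability of the true class
    focal_term = (1.0 - p_t) ** gamma
    # Normalize by the sum of the modulating factors
    return (focal_term * ce).sum() / focal_term.sum().clamp_min(eps)

loss = normalized_focal_loss_with_logits(torch.randn(8), torch.randint(0, 2, (8,)).float())
```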
Bugfixes
- Fixed wrong shape of intermediate layers of DenseNet
PyTorch Toolbelt 0.2.0
This release is dedicated to housekeeping work. Dice/IoU metrics and losses have been redesigned to reduce the amount of duplicated code and bring more clarity. Code is now auto-formatted using Black.
`pip install pytorch_toolbelt==0.2.0`
Catalyst contrib
- Refactored Dice/IoU loss into a single metric `IoUMetricsCallback` with a few cool features: `metric="dice|jaccard"` to choose which metric should be used; `mode=binary|multiclass|multilabel` to specify the problem type (binary, multiclass or multi-label segmentation); `classes_of_interest=[1,2,4]` to select the set of classes for which the metric should be computed; and `nan_score_on_empty=False` to compute "Dice Accuracy" (counts as 1.0 if both `y_true` and `y_pred` are empty; 0.0 if `y_pred` is not empty). A usage sketch follows this list.
- Added L-p regularization callback to apply L1 and L2 regularization to the model, with support of regularization strength scheduling.
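A hedged usage sketch built from the parameters named above; the import path is an assumption:

```python
from pytorch_toolbelt.utils.catalyst import IoUMetricsCallback  # path is an assumption

dice_callback = IoUMetricsCallback(
    metric="dice",                  # or "jaccard"
    mode="multiclass",              # "binary" | "multiclass" | "multilabel"
    classes_of_interest=[1, 2, 4],  # restrict the metric to these classes
    nan_score_on_empty=False,       # empty y_true & y_pred count as 1.0 ("Dice Accuracy")
)
```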
Losses
- Refactored `DiceLoss` / `JaccardLoss` losses in the same fashion as the metrics
Models
- Added DenseNet encoders
- Bugfix: fixed missing BN+ReLU in `UNetDecoder`
- Global pooling modules can squeeze the spatial dimensions if `flatten=True`
Misc
- Add more unit tests
- Code style is now managed with Black
- `to_numpy` now supports `int` and `float` scalar types