v0.2.0: Improved performance, API boilerplate and demo app
This release greatly improves classification performance and adds numerous tools to deploy or showcase your models.
Note: holocron 0.2.0 requires PyTorch 1.9.1 and torchvision 0.10.1 or newer.
Highlights
🦓 New entries in the model zoo
RepVGG joins the model zoo to provide an interesting change of pace: it uses two architectures with equivalent forward passes, one for training and the other for inference.
This brings a very good balance between inference speed and accuracy for VGG-like models, as it outclasses several ResNet architectures (cf. https://github.com/frgfm/Holocron/tree/master/references/classification).
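In practice, switching between the two is a single call. Below is a minimal sketch, assuming the holocron.models.repvgg_a0 entry point and a reparametrize() conversion method (check the documentation for the exact name in your version):
```python
import torch
from holocron.models import repvgg_a0

model = repvgg_a0(pretrained=True)
# Training-time architecture: parallel 3x3, 1x1 and identity branches
model.train()

# Fold the parallel branches into plain 3x3 convolutions for fast inference
# (method name assumed; cf. the documentation)
model.eval()
model.reparametrize()

with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
```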
📑 Tutorial notebooks
To reduce the friction between new users and domain experts, a few tutorials were added to the documentation in the form of notebooks.
Thanks to Google Colab, you can run all the commands on a GPU without owning one 👍
💻 API boilerplate
Ever dreamt of deploying a small REST API to expose your vision models?
Using the great FastAPI library, a minimal API template was implemented for you to easily deploy models in containerized environments.
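For illustration, the core of such an endpoint could look like the sketch below (hypothetical code with a stubbed predict helper, not the actual boilerplate):
```python
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def predict(img_bytes: bytes):
    # Hypothetical helper: decode the image, preprocess it, run the model,
    # and return the top class with its confidence (stubbed here)
    return "French horn", 0.92

@app.post("/classification")
async def classify(file: UploadFile = File(...)):
    label, confidence = predict(await file.read())
    return {"value": label, "confidence": confidence}
```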
Once your API is running, the following snippet:
```python
import requests

# Read a local image and send it to the classification endpoint
with open('/path/to/your/img.jpeg', 'rb') as f:
    data = f.read()

response = requests.post("http://localhost:8002/classification", files={'file': data}).json()
```
yields:
```python
{'value': 'French horn', 'confidence': 0.9186984300613403}
```
For more information, please refer to the dedicated README.
🎮 Gradio demo
To better showcase the capabilities of the pre-trained models, a small demo app was added to the project (with a live version hosted on HuggingFace Spaces).
It was built for basic image classification using Gradio.
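A comparable demo can be put together in a few lines. The sketch below is hypothetical (not the actual app code) and assumes a pre-trained RepVGG-A0 classifier; note that Gradio's component API varies across versions, and input normalization is omitted for brevity:
```python
import gradio as gr
import torch
from torchvision import transforms
from holocron.models import repvgg_a0

model = repvgg_a0(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def classify(img):
    # Forward the preprocessed image and return the top class index
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return f"class {idx.item()} ({conf.item():.2%})"

gr.Interface(fn=classify, inputs=gr.Image(type="pil"), outputs="text").launch()
```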
🤗 Integration with HuggingFace model hub
In order to have a more open way to contribute and share models, default configuration dicts are now accessible in every model. Thanks to this and the HuggingFace Hub, checkpoints can be hosted freely (cf. https://huggingface.co/frgfm/repvgg_a0), and you can instantiate models directly from them.
```python
from holocron.models.utils import model_from_hf_hub

model = model_from_hf_hub("frgfm/repvgg_a0").eval()
```
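The hub-instantiated model then behaves like any other Holocron classifier, e.g. for a quick sanity check on a random input:
```python
import torch

# Batch of one 224x224 RGB image
with torch.no_grad():
    probs = model(torch.rand(1, 3, 224, 224)).softmax(dim=1)
print(probs.argmax(dim=1))
```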
This opens the way for external contributors to upload their own checkpoint & config, and use Holocron seamlessly.
⚡ Cutting-edge training scripts
This release comes with major upgrades for the reference scripts, in two aspects:
- speed: added support for Automatic Mixed Precision (AMP), as sketched below
- performance: updated the default augmentations, and added new optimizers (AdamP, AdaBelief) and a regularization method (Mixup)
Those should help you reach better results with your own experiments.
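For reference, the AMP support follows the standard torch.cuda.amp pattern; here is a minimal sketch of a training step (the actual reference scripts handle more, such as schedulers and logging):
```python
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, images, targets, criterion, optimizer):
    # Forward pass in mixed precision, backward pass with gradient scaling
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = criterion(model(images), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```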
Breaking changes
License update
To better reflect the project's spirit of welcoming contributions from everywhere, the license was changed from MIT to Apache 2.0. This shouldn't impact your usage much, as it is one of the most commonly used open-source licenses.
Deprecated features now supported by PyTorch
Since Holocron is meant as an add-on to PyTorch/Torchvision, a few features have been deprecated as they were integrated into PyTorch. Those include (see the migration sketch after the list):
- activations: SiLU, Mish
- optimizers: RAdam
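Migrating is a one-line change in most cases, since native equivalents now ship with PyTorch (to the best of our knowledge, nn.SiLU landed in PyTorch 1.7, nn.Mish in 1.9 and torch.optim.RAdam in 1.10):
```python
import torch.nn as nn

# Native replacements for the removed activations
silu = nn.SiLU()  # PyTorch >= 1.7
mish = nn.Mish()  # PyTorch >= 1.9

# Native replacement for the removed optimizer (PyTorch >= 1.10):
# from torch.optim import RAdam
```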
Naming of trainer's method
The trainer's method to determine the optimal learning rate was renamed from `lr_find` to `find_lr`.
| 0.1.3 | 0.2.0 |
| --- | --- |
| `>>> trainer = ...`<br>`>>> trainer.lr_find()` | `>>> trainer = ...`<br>`>>> trainer.find_lr()` |
Full changelog
Breaking Changes 🛠
- chore: Updated license from MIT to Apache2.0 by @frgfm in #130
- refactor: Removed implementations of nn that are now integrated in PyTorch by @frgfm in #157
- refactor: Removed implementations of nn that are now integrated into PyTorch by @frgfm in #158
- refactor: Removed functional legacy elements by @frgfm in #159
- refactor: Cleaned ref script args by @frgfm in #199
New Features 🚀
- feat: Added pretrained URL for SKNet-50 by @frgfm in #102
- feat: Added support of Triplet Attention by @frgfm in #104
- feat: Added support of RepVGG by @frgfm in #115
- feat: Added pretrained versions of RepVGG models by @frgfm in #116
- Adaptative classification trainer by @MateoLostanlen in #108
- feat: Added trainer for binary classification by @MateoLostanlen in #118
- feat: Added implementation of AdamP by @frgfm in #121
- feat: Added CIFAR to classification training option by @frgfm in #122
- feat: Added StackUpsample2d by @frgfm in #132
- docs: Switched to multi-version documentation by @frgfm in #134
- feat: Pretrained params for unet_rexnet13 by @frgfm in #139
- docs: Created code of conduct by @frgfm in #142
- feat: Added implementation of Involution layers by @frgfm in #144
- feat: Added support of AMP to trainers and training scripts by @frgfm in #153
- feat: Added custom weight decay for normalization layers by @frgfm in #162
- docs: Added latency benchmark in the README by @frgfm in #167
- feat: Added Dice Loss implementation by @frgfm in #191
- feat: Added default_cfg to all classification models by @frgfm in #193
- feat: Added FastAPI boilerplate for image classification by @frgfm in #195
- feat: Added Gradio demo app by @frgfm in #194
- feat: Added possibility to load model from HF Hub by @frgfm in #198
- docs: Added tutorial notebooks by @frgfm in #201
Bug Fixes 🐛
- fix: Fixed compatibility with pytorch 1.7.0 by @frgfm in #103
- chore: Fixed doc deploy by @frgfm in #105
- fix: Fixed SKNet model definitions by @frgfm in #106
- fix: Fixed CIoU aspect ratio term by @frgfm in #114
- docs: Fixed README typo by @MateoLostanlen in #117
- fix: Fixed UNet architecture and improved trainer by @frgfm in #127
- fix: Fixed console print from resume training by @frgfm in #129
- docs: Fixed typo in README by @frgfm in #133
- docs: Fixed multi-version references by @frgfm in #135
- fix: Fixed loss weight buffer by @frgfm in #136
- fix: Updated import of load_state_dict_from_url by @frgfm in #148
- chore: Cleaned package index mixup by @frgfm in #150
- fix: Fixed LR Finder plot scaling by @frgfm in #147
- docs: Fixed documentation build by @frgfm in #149
- fix: Fixed DropBlock2d drop_prob by @frgfm in #156
- fix: Fixed error message of optimizers by @frgfm in #161
- fix: Fixed LR Find when loss explodes by @frgfm in #169
- fix: Fixed classification training script for CIFAR by @frgfm in #171
- fix: Fixed param freezing by @frgfm in #175
- fix: Fixes MCLoss and RandomCrop in the segmentation training script by @frgfm in #177
- docs: Fixed latency section of the README by @frgfm in #178
- fix: Fixed LR find plotting by @frgfm in #180
- fix: Fixed multiple detection training & model issues by @frgfm in #182
- ci: Fixed script for PR label by @frgfm in #186
- ci: Fixed CI job for PR labels by @frgfm in #187
- ci: Added new CI job by @frgfm in #188
- ci: Fixed message & improved trigger by @frgfm in #190
Improvements
- chore: Updated package version and build jobs by @frgfm in #101
- feat: Updated training script by @frgfm in #89
- test: Refactored unittest for ClassificationTrainer by @frgfm in #119
- docs: Added issue templates by @frgfm in #120
- feat: Updated UNet and improved training scripts by @frgfm in #124
- test: Switched to pytest suite by @frgfm in #131
- feat: Improved Seg IoU computation and segmentation metrics by @frgfm in #137
- feat: Improved UNet architectures by @frgfm in #138
- style: Fixed typing of TridentNet by @frgfm in #141
- docs: Removed legacy entries and fixes models' documentation by @frgfm in #145
- style: Reordered imports and added isort check by @frgfm in #151
- refactor: Removes unused imports and updated README badge by @frgfm in #152
- refactor: Removed unused imports by @frgfm in #154
- feat: Improved Mixup design and added it to classification recipe by @frgfm in #155
- test: Increased coverage of holocron.optim by @frgfm in #160
- feat: Improved training scripts and added updated pretrained weights by @frgfm in #163
- docs: Improved documentation landing page by @frgfm in #165
- docs: Updated contribution guidelines and added utils by @frgfm in #166
- refactor: Removed unused imports, variables and wrappers by @frgfm in #168
- feat: Make bias addition automatic in conv_sequence by @frgfm in #170
- feat: Updates the backbone & docstring of YOLOv4 by @frgfm in #172
- style: Updated flake8 config by @frgfm in #174
- refactor: Refactored holocron.trainer by @frgfm in #173
- refactor: Updated arg of MCLoss by @frgfm in #176
- ci: Updated isort config and related CI job by @frgfm in #179
- feat: Added finite loss safeguard in trainer by @frgfm in #181
- refactor: Removed contiguous params since torch>=1.7.0 includes it by @frgfm in #183
- refactor: Updated timing function for latency eval by @frgfm in #184
- ci: Revamped CI and quality checks for upcoming release by @frgfm in #185
- ci: Updated message of PR label by @frgfm in #189
- ci: Moved header & deps checks to separate jobs by @frgfm in #192
- docs: Updates the README and documentation by @frgfm in #196
- docs: Added CITATION file by @frgfm in #197
- docs: Added example snippet & Colab ref in README by @frgfm in #202
New Contributors
- @MateoLostanlen made their first contribution in #117
Full Changelog: v0.1.3...v0.2.0