To this day, ResNets remain a strong baseline for image classification tasks. You can find the Burn implementation of the ResNet variants in resnet.rs.
The model is no_std compatible.
Add this to your `Cargo.toml`:

```toml
[dependencies]
resnet-burn = { git = "https://github.com/tracel-ai/models", package = "resnet-burn", default-features = false }
```
If you want to get the pre-trained ImageNet weights, enable the `pretrained` feature flag:

```toml
[dependencies]
resnet-burn = { git = "https://github.com/tracel-ai/models", package = "resnet-burn", features = ["pretrained"] }
```

**Important:** this feature requires `std`.
The inference example initializes a ResNet-18 from the ImageNet pre-trained weights with the NdArray backend and performs inference on the provided input image.
You can run the example with the following command:
```sh
cargo run --release --example inference samples/dog.jpg
```
For this multi-label image classification fine-tuning example, a sample of the planets dataset from the Kaggle competition Planet: Understanding the Amazon from Space is downloaded from a fastai mirror. The sample dataset is a collection of satellite images with multiple labels describing the scene, as illustrated below.
To achieve this task, a ResNet-18 pre-trained on the ImageNet dataset is fine-tuned on the target planets dataset. The training recipe used is fairly simple. The main objective is to demonstrate how to re-use a pre-trained model for a different downstream task.
Without any bells and whistles, our model achieves over 90% multi-label accuracy (i.e., Hamming score) on the validation set after just 5 epochs.
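The Hamming score reported above can be sketched as follows. This is a common definition (the fraction of labels predicted correctly per sample, averaged over all samples); the function below is an illustrative standalone implementation, not part of the resnet-burn API:

```rust
/// Hamming score for multi-label classification: for each sample, the
/// fraction of label positions where the prediction matches the target,
/// averaged over all samples. A score of 1.0 means a perfect match.
fn hamming_score(preds: &[Vec<bool>], targets: &[Vec<bool>]) -> f64 {
    assert_eq!(preds.len(), targets.len(), "sample counts must match");
    let total: f64 = preds
        .iter()
        .zip(targets)
        .map(|(p, t)| {
            // Count label positions where prediction and target agree.
            let correct = p.iter().zip(t).filter(|(a, b)| a == b).count();
            correct as f64 / p.len() as f64
        })
        .sum();
    total / preds.len() as f64
}

fn main() {
    // Two samples with three labels each: 2/3 and 3/3 correct -> 5/6.
    let preds = vec![vec![true, false, true], vec![false, false, true]];
    let targets = vec![vec![true, true, true], vec![false, false, true]];
    println!("{:.4}", hamming_score(&preds, &targets)); // prints "0.8333"
}
```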
Run the example with the Torch GPU backend:
```sh
export TORCH_CUDA_VERSION=cu121
cargo run --release --example finetune --features tch-gpu
```
Run it with our WGPU backend:

```sh
cargo run --release --example finetune --features wgpu
```