Continual Learning for LiDAR Semantic Segmentation: Class-Incremental and Coarse-to-Fine strategies on Sparse Data
Elena Camuffo and Simone Milani, In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), CLVision, 2023. [Paper]
Our codebase is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
- Pretrained models are coming soon...
- Codebase released!
During the last few years, continual learning (CL) strategies for image classification and segmentation have been widely investigated, with innovative solutions designed to tackle catastrophic forgetting, such as knowledge distillation and self-inpainting. However, the application of continual learning paradigms to point clouds is still unexplored and requires investigation, especially with architectures that capture the sparsity and uneven distribution of LiDAR data. This paper analyzes the problem of class-incremental learning applied to point cloud semantic segmentation, comparing approaches and state-of-the-art architectures. To the best of our knowledge, this is the first example of class-incremental continual learning for LiDAR point cloud semantic segmentation. Different CL strategies were adapted to LiDAR point clouds and tested, tackling both classic fine-tuning scenarios and the Coarse-to-Fine learning paradigm. The framework was evaluated with two different architectures on SemanticKITTI, obtaining results in line with state-of-the-art CL strategies and standard offline learning.
If you find our work useful for your research, please consider citing:
```bibtex
@InProceedings{Camuffo_2023_CVPR,
    author    = {Camuffo, Elena and Milani, Simone},
    title     = {Continual Learning for LiDAR Semantic Segmentation: Class-Incremental and Coarse-To-Fine Strategies on Sparse Data},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {2447-2456}
}
```
Below are the generic instructions to run the codebase. First, create the conda environment:

```shell
conda env create -f clpcss.yml
```
- `CL`: whether to select offline training or enable class-incremental continual learning.
- `CLstep`: which learning step of `CL` you want to train on.
- `CLstrategy`: which strategy to mitigate forgetting you want to use (e.g., output knowledge distillation `okd` or feature knowledge distillation `fkd`).
- `setup`: which setup you are willing to use (e.g., `Sequential`, `Sequential_masked`, etc.).
- `pretrained_model`: whether to use a pretrained model, i.e., if you are training `CLstep = 1`, you want to use a model pretrained on step `CLstep = 0`.
- `ckpt_file`: path of the checkpoint file of the pretrained model if `pretrained_model = True`.
- `test_name`: the name you want to give to your test.
Offline:
```shell
python train_c2f.py --CL False [params]
```
Standard CIL Setups:
```shell
python train.py --CL True --setup "Sequential" --CLstep 0 [params]
```
Knowledge Distillation (output) $\mathcal{L}_{KD}$:
```shell
python train_envelope.py --CLstrategy "okd" --pretrained_model True --ckpt_file "path\to\pretrained.pt" [params]
```
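Output-level distillation penalizes the current model when its predictions over the old classes drift away from those of the frozen previous-step model. A minimal NumPy sketch of one common formulation of $\mathcal{L}_{KD}$ (cross-entropy between the teacher's and student's distributions restricted to the old classes); the exact loss implemented in `train_envelope.py` may differ:

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def output_kd_loss(student_logits, teacher_logits, n_old_classes):
    """Output knowledge distillation: cross-entropy between the frozen
    teacher's distribution and the student's distribution, both computed
    over the old classes only.
    student_logits: (num_points, n_new_classes), teacher_logits: (num_points, n_old_classes)."""
    p_teacher = softmax(teacher_logits[:, :n_old_classes])
    log_p_student = np.log(softmax(student_logits[:, :n_old_classes]) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(axis=1).mean())

rng = np.random.default_rng(0)
student = rng.normal(size=(100, 8))   # 8 classes after the new step
teacher = rng.normal(size=(100, 5))   # 5 classes at the previous step
loss = output_kd_loss(student, teacher, n_old_classes=5)
print(loss)
```

When the student's logits over the old classes coincide with the teacher's, the loss reduces to the teacher's entropy, its minimum for a fixed teacher.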
Knowledge Distillation ($\ell_2$ feats) $\mathcal{L}^{*}_{KD}$:
```shell
python train_envelope.py --CLstrategy "fkd" --pretrained_model True --ckpt_file "path\to\pretrained.pt" [params]
```
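Feature-level distillation instead matches intermediate representations of the two networks. A hedged sketch of $\mathcal{L}^{*}_{KD}$ as a mean squared $\ell_2$ distance between student and frozen-teacher features; which layer is distilled and any normalization applied in the actual code are not shown here:

```python
import numpy as np

def feature_kd_loss(student_feats, teacher_feats):
    """Feature knowledge distillation: mean (over points) squared l2
    distance between student and frozen-teacher intermediate features.
    Both arrays have shape (num_points, feat_dim)."""
    diff = student_feats - teacher_feats
    return float((diff ** 2).sum(axis=1).mean())

rng = np.random.default_rng(1)
f_teacher = rng.normal(size=(64, 32))
f_student = f_teacher + 0.1 * rng.normal(size=(64, 32))  # slightly drifted
print(feature_kd_loss(f_student, f_teacher))
```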
Knowledge Self-Inpainting:
```shell
python train_envelope.py --CLstrategy "inpaint" --pretrained_model True --ckpt_file "path\to\pretrained.pt" [params]
```
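Self-inpainting strategies typically use the previous-step model to fill in pseudo-labels for points the current step leaves unlabeled, so old classes keep receiving supervision. A sketch under assumed conventions: `ignore_id` and the confidence threshold `conf_thresh` are hypothetical names and values, not taken from the repository:

```python
import numpy as np

def inpaint_labels(labels, old_probs, ignore_id=0, conf_thresh=0.7):
    """Self-inpainting sketch: points that the current-step ground truth
    leaves unlabeled (ignore_id, assumed convention) receive the old
    model's predicted class, but only where the old model is confident.
    labels: (num_points,) int; old_probs: (num_points, n_old_classes)."""
    pseudo = old_probs.argmax(axis=1)           # old model's hard prediction
    conf = old_probs.max(axis=1)                # its confidence per point
    out = labels.copy()
    mask = (labels == ignore_id) & (conf >= conf_thresh)
    out[mask] = pseudo[mask]
    return out

labels = np.array([0, 0, 5, 0])  # 0 = unlabeled at this step
old_probs = np.array([
    [0.1, 0.8, 0.1],     # confident -> inpainted as class 1
    [0.4, 0.3, 0.3],     # low confidence -> stays unlabeled
    [0.9, 0.05, 0.05],   # already labeled 5 -> untouched
    [0.05, 0.05, 0.9],   # confident -> inpainted as class 2
])
print(inpaint_labels(labels, old_probs))  # -> [1 0 5 2]
```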
C2F Setups:
```shell
python train_c2f.py --CLstep 0 [params]
```
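In the Coarse-to-Fine (C2F) paradigm, step 0 trains on coarse groupings of the fine classes, which later steps progressively split. A sketch of the label-coarsening step with a hypothetical grouping of a few SemanticKITTI classes; the paper's actual C2F hierarchy may differ:

```python
import numpy as np

# Hypothetical coarse grouping of a few SemanticKITTI classes
# (for illustration only; not the paper's actual hierarchy).
FINE_TO_COARSE = {
    "car": "vehicle", "truck": "vehicle", "bicycle": "vehicle",
    "road": "ground", "sidewalk": "ground",
    "building": "structure", "fence": "structure",
}

def coarsen(fine_labels, fine_names, coarse_names):
    """Map an array of fine label ids to coarse label ids for step 0
    via a lookup table built from the class-name hierarchy."""
    lut = np.array([coarse_names.index(FINE_TO_COARSE[n]) for n in fine_names])
    return lut[fine_labels]

fine_names = ["car", "truck", "bicycle", "road", "sidewalk", "building", "fence"]
coarse_names = ["vehicle", "ground", "structure"]
fine = np.array([0, 3, 5, 1, 6])  # car, road, building, truck, fence
print(coarsen(fine, fine_names, coarse_names))  # -> [0 1 2 0 2]
```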