Donghyun Kim*, Byeongho Heo, Dongyoon Han*
We revitalize Densely Connected Convolutional Networks (DenseNets) and reveal their untapped potential to challenge the prevalent dominance of ResNet-style architectures. Our research indicates that DenseNets were previously underestimated, primarily due to conventional design choices and training methods that underexploited their full capabilities.
Figure: trade-off of RDNet (ours) vs. SOTA models.
Figure: trade-off of RDNet (ours) vs. mainstream models.
- A pilot study (§5.1) reveals the effectiveness of concatenation-based shortcuts (see the sketch after this list).
- We meticulously upgrade various aspects of DenseNets (§3.2) through architectural tweaks and block redesigns.
- Our revitalized DenseNets (RDNets) outperform mainstream architectures such as Swin Transformer, ConvNeXt, and DeiT-III (§4.1).
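To make the first point concrete, here is a minimal PyTorch sketch contrasting a concatenation-based (dense) shortcut with an additive (residual) one. The module names, channel widths, and growth rate are illustrative only and are not the exact blocks used in the paper.

```python
# Illustrative sketch: dense (concatenation) vs. residual (addition) shortcuts.
# Channel widths and growth rate are arbitrary choices for demonstration.
import torch
import torch.nn as nn

class DenseShortcut(nn.Module):
    """y = concat(x, f(x)): features accumulate along the channel dimension."""
    def __init__(self, dim, growth_rate):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(dim, growth_rate, 3, padding=1), nn.GELU())

    def forward(self, x):
        return torch.cat([x, self.f(x)], dim=1)

class ResidualShortcut(nn.Module):
    """y = x + f(x): features are summed, so the channel width stays fixed."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.GELU())

    def forward(self, x):
        return x + self.f(x)

x = torch.randn(1, 64, 56, 56)
print(DenseShortcut(64, 32)(x).shape)  # torch.Size([1, 96, 56, 56]) -- width grows
print(ResidualShortcut(64)(x).shape)   # torch.Size([1, 64, 56, 56]) -- width fixed
```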
Our work aims to reignite interest in DenseNets by demonstrating their renewed relevance and superiority in the current architectural landscape. We encourage the community to explore and build upon our findings, paving the way for further innovative contributions in deep learning architectures.
We believe that many recently popular architectural designs can be successfully combined with dense connections.
RDNet is available on timm. You can easily use RDNet by installing the timm package (`pip install timm`).
import timm

# Load RDNet-L with ImageNet-pretrained weights.
model = timm.create_model('rdnet_large', pretrained=True)
For detailed usage, please refer to the Hugging Face model card.
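Below is a minimal inference sketch, assuming a recent timm release (one that provides `resolve_model_data_config`) and a local image at the hypothetical path `example.jpg`.

```python
# Minimal inference sketch. Assumes a recent timm release and a local image
# at the hypothetical path "example.jpg".
import timm
import torch
from PIL import Image

model = timm.create_model('rdnet_large', pretrained=True)
model.eval()

# Build the evaluation transform that matches the model's pretraining config.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

img = Image.open('example.jpg').convert('RGB')
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))  # shape: (1, 1000) for ImageNet-1k

top5_prob, top5_idx = logits.softmax(dim=-1).topk(5)
print(top5_idx, top5_prob)
```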
- (2024.07.24): Pip installable package added.
- (2024.04.19): Initial release of the repository.
- (2024.03.28): Paper is available on arXiv.
- More ImageNet-22k Pretrained Models.
- More ImageNet-1k fine-tuned models.
- Cascade Mask R-CNN with RDNet.
- Transfer Learning with RDNet (with CIFAR-10, CIFAR-100, Stanford Cars, ...).
For details on object detection and instance segmentation, please refer to detection/README.md.
For details on semantic segmentation, please refer to segmentation/README.md.
We provide pretrained RDNet models; you can download them from the links below.
| Model | Image Size | Params | FLOPs | Top-1 (%) | Model Card | URL |
|---|---|---|---|---|---|---|
| RDNet-T | 224 | 22M | 5.0G | 82.8 | model_card | HFHub |
| RDNet-S | 224 | 50M | 8.7G | 83.7 | model_card | HFHub |
| RDNet-B | 224 | 87M | 15.4G | 84.4 | model_card | HFHub |
| RDNet-L | 224 | 186M | 34.7G | 84.8 | model_card | HFHub |
| Model | Fine-tuned from | Image Size | Params | FLOPs | Top-1 (%) | Model Card | URL |
|---|---|---|---|---|---|---|---|
| RDNet-L (384) | RDNet-L | 384 | 186M | 101.9G | 85.8 | model_card | HFHub |
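As a quick sanity check against the parameter counts above, you can enumerate the RDNet variants available through timm. The wildcard filter below assumes your installed timm release registers these models under `rdnet_*` names.

```python
# Enumerate RDNet variants registered in timm and report their parameter counts.
# Assumes the installed timm release includes the rdnet_* model definitions.
import timm

for name in timm.list_models('rdnet*'):
    model = timm.create_model(name, pretrained=False)  # no weight download needed
    n_params = sum(p.numel() for p in model.parameters())
    print(f'{name}: {n_params / 1e6:.1f}M parameters')
```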
We provide training curves generated with the Weights & Biases service. You can view them at the link below.
https://api.wandb.ai/links/dhkim0225/822w2zsj
For training commands, please refer to TRAINING.md.
This repository is built using timm, MMDetection, and MMSegmentation.
@misc{kim2024densenets,
title={DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs},
author={Donghyun Kim and Byeongho Heo and Dongyoon Han},
year={2024},
eprint={2403.19588},
archivePrefix={arXiv},
}
Copyright (c) 2024-present NAVER Cloud Corp.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
ImageNet - ImageNet Terms of access, https://image-net.org/download
Images from ADE20K - ADE20K Terms of Use, https://groups.csail.mit.edu/vision/datasets/ADE20K/terms/
MS COCO images dataset - Creative Commons Attribution 4.0 License, https://viso.ai/computer-vision/coco-dataset/