support transformer backbone #465
Conversation
Codecov Report
```
@@            Coverage Diff            @@
##             master     #465   +/-   ##
=========================================
  Coverage          ?   86.53%
=========================================
  Files             ?       98
  Lines             ?     5133
  Branches          ?      829
=========================================
  Hits              ?     4442
  Misses            ?      534
  Partials          ?      157
=========================================
```
There are some unresolved comments.
* vit backbone
* fix lint
* add docstrings and fix pretrained pos_embed dim not match prob
* add unittest for vit
* fix lint
* add vit based fcn configs
* fix import error
* support multiple resolution input images
* upsample pos_embed at init_weights
* support resize pos_embed at evaluation
* fix training errors
* add more unitest code for vit backbone
* unitest for uncovered code
* add norm_eval unittest
* refactor _pos_embeding
* minor change
* change var name
* rafactor init_weight
* load weights after resize
* ignore 'module' in pretrain checkpoint
* add with_cp
* add with_cp

Co-authored-by: Jiarui XU <xvjiarui0826@gmail.com>
* update tutorials
* add 0_config.md

* a little bit faster confusion matrix
* add changelog
Support transformer backbone.
This PR is adapted from https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py.
Usage
The backbone config should look like the following.
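A minimal sketch of such a config, with illustrative values. The parameter names (`img_size`, `patch_size`, `embed_dim`, `depth`, `num_heads`, `out_indices`, `norm_eval`, `with_cp`) follow common ViT conventions and this PR's commit log, but the exact signature may differ between mmsegmentation versions; check the `VisionTransformer` docstring for your release.

```python
# Hypothetical backbone config sketch; values and some names are
# illustrative, not taken verbatim from this PR.
model = dict(
    type='EncoderDecoder',
    backbone=dict(
        type='VisionTransformer',
        img_size=768,        # resolution the position embedding is built for
        patch_size=16,       # side length of each square image patch
        embed_dim=768,       # token embedding dimension
        depth=12,            # number of transformer encoder layers
        num_heads=12,        # attention heads per layer
        out_indices=(2, 5, 8, 11),  # layer outputs fed to the decode head
        norm_eval=False,     # if True, freeze norm-layer stats during training
        with_cp=False,       # gradient checkpointing to trade compute for memory
    ),
)
```

Note that `embed_dim` must be divisible by `num_heads`, and every entry of `out_indices` must be a valid layer index below `depth`.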
We can download checkpoints from https://github.com/rwightman/pytorch-image-models and load them by setting the pretrained field.
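A sketch of how that might look in a config; the checkpoint filename below is hypothetical, so substitute the path of whichever timm checkpoint you actually downloaded (the PR's commit log notes that the position embedding is resized at init_weights when the pretrained resolution does not match).

```python
# Hypothetical sketch: pointing `pretrained` at a locally downloaded
# timm ViT checkpoint (the path is illustrative).
model = dict(
    type='EncoderDecoder',
    pretrained='pretrain/vit_base_patch16_384.pth',  # downloaded timm weights
    backbone=dict(type='VisionTransformer'),
)
```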