Hello. I wonder what order of magnitude of learning rate one should use for fine-tuning on ImageNet at input resolution 384, starting from the 224-resolution DeiT-Tiny pretrained model.
There are discussions in this repo about transfer learning on other datasets (CIFAR-10, iNaturalist): #105, #45.
Would a learning rate on the order of 5e-6 to 1e-5 be the optimal choice for fine-tuning on ImageNet at higher resolution, assuming all other training settings are kept at their defaults (mixup, CutMix, AdamW as the optimizer, etc.)?
Thanks in advance
Hi @Godofnothing,
Thanks for your question.
It is possible to keep the same settings and change only the learning rate for fine-tuning.
The optimal lr depends on the model and the number of fine-tuning epochs. I think that 1e-5 is a good starting point.
Best,
Hugo
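For reference, a minimal sketch of such a fine-tuning setup using the timm library (the `img_size` argument and the weight decay of 0.05 are assumptions based on timm's conventions and the DeiT defaults; the exact flags in this repo's `main.py` may differ):

```python
import timm
import torch

# Load the 224-pretrained DeiT-Tiny at a 384 input resolution.
# Recent timm versions resize the position embeddings automatically
# when img_size differs from the pretraining resolution.
model = timm.create_model(
    "deit_tiny_patch16_224", pretrained=True, img_size=384
)

# Keep the default DeiT recipe (AdamW, mixup, CutMix, etc.) and only
# lower the learning rate, e.g. to the suggested 1e-5.
# weight_decay=0.05 is the DeiT default; adjust as needed.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-5, weight_decay=0.05
)

# One fine-tuning step on a dummy 384x384 batch, as a smoke test.
images = torch.randn(2, 3, 384, 384)
targets = torch.randint(0, 1000, (2,))
loss = torch.nn.functional.cross_entropy(model(images), targets)
loss.backward()
optimizer.step()
```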