Hi
This is not so much an issue as a question.
I want to use the MaskedAutoencoderGroupChannelViT, specifically mae_vit_base_patch16_dec512d8b, to finetune on some regional Sentinel-2A data. The issue I am having is that the weights provided by the repository, ViT-Base (200 epochs), are for the GroupChannelsVisionTransformer, i.e. the vit_base_patch16 model.
I am speculating here, but I believe the SSL pretraining was conducted with the mae_vit_base_patch16 model and only the backbone weights (which I assume correspond to vit_base_patch16) were saved, which is common practice.
In usual circumstances you would only need this backbone for the downstream task.
But in my case I want to use the mae_vit_base_patch16_dec512d8b model for further pretraining (or finetuning) on a regional dataset. So I am wondering if there is a way to load these pretrained backbone weights into the mae_vit_base_patch16_dec512d8b model, which can then be used for further pretraining on a smaller, region-specific dataset.
I am trying to achieve something along the lines of:
import torch

# Load the SatMAE model
sat_mae = mae_vit_base_patch16_dec512d8b(args)
# Get the encoder backbone
vit_backbone = sat_mae.backbone
# Load the pretrained backbone weights
checkpoint = torch.load(".../weights.ckpt")
checkpoint_model = checkpoint["model"]
vit_backbone.load_state_dict(checkpoint_model, strict=False)
# Reassign the pretrained backbone back to the SatMAE model
sat_mae.backbone = vit_backbone
# Then use sat_mae for further pretraining
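A possible alternative, in case the MAE model does not expose a separate `backbone` attribute: in MAE-style implementations the encoder parameters (`patch_embed.*`, `blocks.*`, `norm.*`, `pos_embed`) usually share their names with the full autoencoder, while the decoder uses a `decoder_` prefix, so the encoder checkpoint can often be loaded directly into the full model with `strict=False`. One caveat: `strict=False` only tolerates missing/unexpected keys, not shape mismatches, so mismatched entries (e.g. a `pos_embed` built for a different grid, or a classifier `head`) must be dropped or interpolated first. This is a minimal sketch of that filtering step, assuming the naming convention above; the `FakeTensor` class and the example key names are illustrative stand-ins, not the repository's actual state dict:

```python
class FakeTensor:
    """Stand-in for a torch.Tensor in this demo; only .shape is needed."""
    def __init__(self, *shape):
        self.shape = tuple(shape)


def filter_checkpoint(checkpoint_state, model_state):
    """Keep checkpoint entries whose key exists in the model with an
    identical shape; everything else is dropped so that a subsequent
    load_state_dict(..., strict=False) cannot fail on a size mismatch."""
    kept, dropped = {}, []
    for key, value in checkpoint_state.items():
        if key in model_state and model_state[key].shape == value.shape:
            kept[key] = value
        else:
            dropped.append(key)
    return kept, dropped


# Illustrative encoder checkpoint (MAE-style key names, hypothetical shapes)
checkpoint = {
    "patch_embed.proj.weight": FakeTensor(768, 3, 16, 16),
    "blocks.0.attn.qkv.weight": FakeTensor(2304, 768),
    "pos_embed": FakeTensor(1, 197, 768),   # grid from the pretraining setup
    "head.weight": FakeTensor(1000, 768),   # classifier head, not in the MAE
}
# Illustrative full-MAE state dict: encoder keys plus decoder_* keys
model = {
    "patch_embed.proj.weight": FakeTensor(768, 3, 16, 16),
    "blocks.0.attn.qkv.weight": FakeTensor(2304, 768),
    "pos_embed": FakeTensor(1, 145, 768),    # different grid -> shape mismatch
    "decoder_embed.weight": FakeTensor(512, 768),  # decoder-only, stays random
}

kept, dropped = filter_checkpoint(checkpoint, model)
print(sorted(kept))     # keys that will actually be loaded
print(sorted(dropped))  # keys to handle separately (interpolate or discard)
```

With real tensors the same function works unchanged, and the result would be applied as `sat_mae.load_state_dict(kept, strict=False)`, leaving the decoder randomly initialized for the continued pretraining. Note that dropping `pos_embed` outright loses information; interpolating it to the new grid is often preferable when the grids differ.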
Any ideas on how I can achieve this?
Thanks