Modifying MoCo augmentations breaks training #1622
Comments
Hey Dylan,
Thanks for the report! Can you post or describe the error you are getting?
(I will try to reproduce/fix on Monday if noone gets to it first)
…On Thu, Oct 5, 2023, 6:22 PM Dylan Stewart wrote:
Description
Writing a custom set of augmentations and using those instead of the
defaults causes issues with training.
I fixed this the hacky way temporarily by modifying the BaseTask init:
L34 -> self.save_hyperparameters(ignore=['augmentation1','augmentation2'])
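A minimal sketch of that workaround in context (a stripped-down stand-in for torchgeo's BaseTask, not the actual source):

from typing import Any

from lightning.pytorch import LightningModule


class BaseTask(LightningModule):  # stand-in for torchgeo.trainers.base.BaseTask
    """Stripped-down stand-in showing where the workaround goes."""

    def __init__(self, **kwargs: Any) -> None:
        super().__init__()
        # By default every constructor argument ends up in self.hparams; the
        # nn.Module augmentations can trip up hyperparameter logging, so the
        # hacky workaround excludes them here.
        self.save_hyperparameters(ignore=['augmentation1', 'augmentation2'])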
Steps to reproduce
Write custom set of augmentations
import kornia.augmentation as K
from torch import nn

def custom_augmentations(size: int, kernel_size: tuple[int, int]) -> tuple[nn.Module, nn.Module]:
    """Data augmentations used by MoCo.

    Args:
        size: Size of patch to crop.
        kernel_size: Kernel size for the blur augmentations.

    Returns:
        Data augmentation pipelines.
    """
    aug1 = aug2 = K.AugmentationSequential(
        K.RandomResizedCrop(size=(size, size), scale=(0.8, 1)),
        K.RandomBrightness(brightness=(0.8, 1.2), p=1.0),
        K.RandomContrast(contrast=(0.8, 1.2), p=1.0),
        K.RandomGaussianBlur(kernel_size=kernel_size, sigma=(0.1, 2)),
        K.RandomMedianBlur(kernel_size=kernel_size),
        K.RandomSharpness(),
        K.RandomHorizontalFlip(),
        K.RandomVerticalFlip(),  # added
        data_keys=["input"],
    )
    return aug1, aug2
Override defaults (this gives a warning)
aug1, aug2 = custom_augmentations(patch_size, kernel_size)

task = MoCoTask(
    model='resnet50',
    weights=None,
    in_channels=1,
    version=2,
    layers=2,
    hidden_dim=4096,
    output_dim=256,
    lr=9.6,
    memory_bank_size=65536,
    size=patch_size,
    augmentation1=aug1,
    augmentation2=aug2,
)
Trying to train

trainer = Trainer(
    # accelerator=accelerator,
    devices=[0],
    callbacks=[checkpoint_callback, early_stopping_callback],
    fast_dev_run=False,
    log_every_n_steps=1,
    logger=logger,
    min_epochs=1,
    max_epochs=50,
    check_val_every_n_epoch=5000,
)

datamodule = CustomDataModule(batch_size=64, num_workers=16, patch_size=patch_size)

trainer.fit(model=task, datamodule=datamodule)
Version
0.5.0
Without overriding the BaseTask init, when I modify the MoCo augmentations the model will not train because it runs into issues logging the hyperparameters. This is the error that is returned:
To fix that temporarily, I modified the BaseTask class init so that the trainer can be set up like this:

trainer = Trainer(
    # accelerator=accelerator,
    devices=[0],
    callbacks=[checkpoint_callback, early_stopping_callback],
    fast_dev_run=False,
    log_every_n_steps=1,
    logger=logger,
    min_epochs=1,
    max_epochs=50,
    check_val_every_n_epoch=5000,
    ignore_hyperparameters=['augmentation1', 'augmentation2'],
)
Want to submit a PR adding your fix? I think that's actually the correct thing to do. We can add a unit test to make sure this doesn't break again.
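A regression test along those lines might look roughly like this (a hypothetical sketch; it assumes the MoCoTask arguments shown in the reproduction above and that the ignore fix is in place):

import kornia.augmentation as K
from torchgeo.trainers import MoCoTask


def test_moco_custom_augmentations() -> None:
    # Hypothetical regression test: custom nn.Module augmentations passed to
    # MoCoTask should not end up in the saved hyperparameters, where the
    # logger would otherwise fail on them.
    aug = K.AugmentationSequential(
        K.RandomResizedCrop(size=(224, 224), scale=(0.8, 1.0)),
        K.RandomHorizontalFlip(),
        data_keys=["input"],
    )
    task = MoCoTask(
        model='resnet18',
        weights=None,
        in_channels=1,
        version=2,
        size=224,
        augmentation1=aug,
        augmentation2=aug,
    )
    assert 'augmentation1' not in task.hparams
    assert 'augmentation2' not in task.hparams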